text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Versatile Single-Element Ultrasound Imaging Platform using a Water-Proofed MEMS Scanner for Animals and Humans
Single-element transducer based ultrasound (US) imaging offers a compact and affordable solution for high-frequency preclinical and clinical imaging because of its low cost, low complexity, and high spatial resolution compared to array-based US imaging. To achieve B-mode imaging, conventional approaches adopt mechanical linear or sector scanning methods. However, due to its low scanning speed, mechanical linear scanning cannot achieve acceptable temporal resolution for real-time imaging, and the sector scanning method requires specialized low-load transducers that are small and lightweight. Here, we present a novel single-element US imaging system based on an acoustic mirror scanning method. Instead of physically moving the US transducer, the acoustic path is quickly steered by a water-proofed microelectromechanical systems (MEMS) scanner, achieving real-time imaging. Taking advantage of the low-cost and compact MEMS scanner, we implemented both a tabletop system for in vivo small animal imaging and a handheld system for in vivo human imaging. Notably, in combination with mechanical raster scanning, we could acquire volumetric US images in live animals. This versatile US imaging system can potentially be used for various preclinical and clinical applications, including echocardiography, ophthalmic imaging, and ultrasound-guided catheterization.
Ultrasound (US) imaging is widely used to obtain in vivo structural, functional, and molecular information based on the acoustic properties of tissue and other anatomical components in animals and humans. Typical US imaging systems employ multi-element US array transducers and multi-channel US transmitting and receiving electronics. However, this approach makes the US system relatively complex, bulky, and expensive. Particularly, the manufacturing process is more complicated and costly for high frequency US array transducers, which hinders the wide-spread use of high resolution US imaging. Nevertheless, the use of high frequency US imaging has become increasingly desirable in various preclinical (echocardiography, oncology, ophthalmology) and clinical (ultrasound-guided catheterization, musculoskeletal imaging, ophthalmology) applications [1][2][3][4][5][6][7][8] .
Temporal resolution is a major limitation of single-element based US imaging. Although this modality requires only single-channel electronics and a single-element transducer, which are relatively simple, compact, and cost-effective, its real-time imaging ability is less than that of array-based US imaging. Scanning is required to achieve cross-sectional B-mode imaging with a single-element transducer, and both mechanical linear and mechanical sector scanning have been intensively explored. Mechanical linear scanning translates a single-element transducer mechanically with a motorized stage, but the achievable frame rate is limited by the low scanning speed. For faster imaging, mechanical sector scanning rotates a single-element transducer within a given angular range about a fixed point 1,2,9 . Although it can achieve real-time US imaging, sector scanning requires customized US transducers that are compact and lightweight to reduce the inertial load of the rotating system.
Spatial resolution, on the other hand, is an important advantage of single-element based US imaging over array-based imaging. A single-element transducer with a central circular aperture has a larger effective area than an array transducer with a rectangular aperture, so, given the same operating frequency and the same aperture size, it can achieve a tighter focus and thus better spatial resolution.

The system workflow is as follows: first, the input parameters (e.g., the US pulse repetition frequency (PRF), data size, number of steps in the volumetric imaging, and step size) are entered into a graphical user interface (GUI) developed in LabVIEW. We used a PRF of 10 kHz and a B-mode data size of 125 × 2048 double floating point pixels; the number of lines depends on the PRF of the pulser-receiver and the MEMS scanning speed, while the axial pixel size depends on the sampling rate of the digitizer. Second, a data acquisition (DAQ) board (NI-PCIe-6321, National Instruments, USA) generates the analog output to steer the acoustic mirror and the digital output to trigger the entire system. This analog output is amplified by an operational amplifier (TCA0372, ON Semiconductor, USA) and an isolated DC-to-DC converter (NDS6D2415C, Murata Power Solutions, Japan). A pulser-receiver (5073PR, Olympus NDT Inc., USA) transmits pulses following the external trigger generated by the DAQ board. Then, a high-speed digitizer (ATS9350, Alazar Technologies Inc., Canada) receives the reflected US signal at a sampling rate of 100 MS/s with a dynamic range of 400 mV. The GUI program performs image processing steps on the raw data.
In the image processing steps, the DC component of each A-line of raw data is first removed by subtracting the A-line's mean, and a band-pass filter is employed to remove noise. Then, time gain compensation reduces the depth-dependent differences in signal intensity caused by tissue attenuation. Frequency demodulation and log compression follow, and scan conversion is employed to create a B-mode image with the actual scanned geometry. Finally, the developed LabVIEW program displays the B-mode images in real time.
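The processing chain just described maps directly onto a few standard signal-processing operations. The following minimal Python/NumPy/SciPy sketch illustrates one possible implementation; the 100 MS/s sampling rate comes from the text, while the band-pass corner frequencies, TGC slope, and dynamic range are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 100e6  # digitizer sampling rate, 100 MS/s (from the text)

def bmode_process(rf, f_lo=10e6, f_hi=40e6, tgc_db_per_cm=1.0,
                  c=1540.0, dyn_range_db=40.0):
    """rf: (n_lines, n_samples) raw A-line data -> log-compressed B-mode."""
    # 1) Remove the DC component of each A-line (subtract its mean).
    rf = rf - rf.mean(axis=1, keepdims=True)
    # 2) Band-pass filter around the transducer passband to suppress noise.
    b, a = butter(4, [f_lo / (FS / 2), f_hi / (FS / 2)], btype="band")
    rf = filtfilt(b, a, rf, axis=1)
    # 3) Time gain compensation: amplify deeper samples to offset attenuation.
    depth_cm = np.arange(rf.shape[1]) * (c / (2 * FS)) * 100.0
    rf = rf * 10.0 ** (tgc_db_per_cm * depth_cm / 20.0)
    # 4) Envelope detection (demodulation) via the Hilbert transform.
    env = np.abs(hilbert(rf, axis=1))
    # 5) Log compression to the chosen dynamic range; scan conversion
    #    (mapping the sector geometry to Cartesian pixels) would follow.
    img = 20.0 * np.log10(env / env.max() + 1e-12)
    return np.clip(img, -dyn_range_db, 0.0)
```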
In 3D volumetric imaging, we define local and global volumetric data. Local volumetric data are obtained with the single-axis MEMS scanner along the X-axis and the single-axis motorized stage along the Y-axis, which scan perpendicular to each other. To obtain a large-FOV image for global volumetric data, this step is repeated with an additional single-axis motorized stage along the X-axis. The local volumetric datasets are merged into one global mosaic dataset in MATLAB, using a Gaussian window for apodization to reduce seams. We use our own volume rendering program, developed with the volume ray-casting algorithm and parallel computing on a graphics processing unit (GPU). The detailed specifications of the MEMS-US system are summarized in Table 1 (Supplementary Fig. S1).
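To make the mosaicking step concrete, the sketch below shows one way to merge overlapping local volumes into a global volume with a Gaussian apodization window along the stitching axis. The array shapes, step offsets, and window width are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def gaussian_window(n, sigma_frac=0.4):
    """Gaussian apodization window over n samples (width is an assumption)."""
    x = np.linspace(-1.0, 1.0, n)
    return np.exp(-0.5 * (x / sigma_frac) ** 2)

def merge_volumes(local_volumes, x_offsets_px, global_nx):
    """Blend local (nx, ny, nz) volumes into a global mosaic along X."""
    ny, nz = local_volumes[0].shape[1:]
    acc = np.zeros((global_nx, ny, nz))
    wsum = np.zeros((global_nx, 1, 1))
    for vol, x0 in zip(local_volumes, x_offsets_px):
        w = gaussian_window(vol.shape[0])[:, None, None]  # down-weight seams
        acc[x0:x0 + vol.shape[0]] += w * vol
        wsum[x0:x0 + vol.shape[0]] += w
    return acc / np.maximum(wsum, 1e-12)
```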
In vitro phantom imaging. To demonstrate 3D imaging, we used the MEMS-US-TT system to image a leaf skeleton phantom whose stem end was held in place on the bottom of the water tank beneath the weight of a glass slide. When the water tank was filled, the buoyant main body of the leaf skeleton floated slightly upward, giving the skeleton a three-dimensional structure. We obtained six local volumetric datasets of the leaf skeleton phantom to form a global volumetric image. Then, the maximum intensity projection (MIP) image was processed in MATLAB (Fig. 2(a)). Figure 2(b,c) show global cross-sectional B-mode images of the leaf skeleton cut along the lines A-A' and B-B', respectively. As one can see, the seams between the local volumetric datasets are clearly removed. The curved shape of the leaf skeleton is shown well in the 3D volume rendered images (Fig. 2(d) and Supplementary Movie S1) 31. These results demonstrate that the imaging method and the image reconstruction algorithm used in this study are suitable for in vivo experiments.
In vivo small animal imaging. We investigated the in vivo imaging capability of the MEMS-US-TT system by acquiring B-mode and M-mode images of a mouse heart at 40 frames per second (fps). Figure 3(a,b) are snapshots of real-time B-mode imaging acquired at two different angles. Figure 3(a) shows the left ventricle (LV) anterior wall (AW), labeled 1; the LV papillary muscle (PM), labeled 2; and the LV posterior wall (PW), labeled 3. Figure 3(b) shows the left atrium (4), the aortic valve (5), the pulmonary artery (6), the right atrium (7), the pulmonary valve (8), and the right ventricular outflow tract (9). The inner structures are well shown, compared with the image acquired by an array US transducer 32 .
We then successfully visualized the dynamics of a beating mouse heart through M-mode imaging. As shown in Fig. 3(c), the contraction and expansion of the LVAW and the LVPW appear clearly in a repetitive pattern, and one irregular pattern at 3 seconds, caused by respiration, is also apparent. To calculate the heart rate, we analyzed the power spectrum along the line B-B' in the M-mode image and found that the heartbeat repeated with a frequency of 3.7 Hz (Fig. 3(d)). A mouse's heart typically beats at between 310 and 840 beats per minute (bpm); in this experiment, however, anesthesia decreased the heart rate to 222 bpm.
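The heart-rate estimate from line B-B' follows from a simple power-spectrum peak search on one depth line of the M-mode data. A minimal sketch, assuming only the 40 Hz frame rate stated in the text (the signal itself is hypothetical):

```python
import numpy as np

def heart_rate_bpm(mmode_line, frame_rate=40.0):
    """Estimate the dominant beat frequency of one M-mode depth line."""
    sig = mmode_line - np.mean(mmode_line)
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / frame_rate)
    valid = freqs > 0.5          # ignore near-DC drift and slow respiration
    f_peak = freqs[valid][np.argmax(spectrum[valid])]
    return f_peak * 60.0         # e.g. 3.7 Hz -> 222 bpm
```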
Next, we obtained a 3D image of the entire mouse heart by performing global mosaic imaging, with the results shown in Fig. 3(e) and Supplementary Movie S2. The 3D volume rendered image also shows the LVAW and the LVPW. Following the same imaging procedures, we also obtained B-mode images of the mouse throat region, as seen in Fig. 3(f). The aortic arch (10) and the jugular vein (11) are identifiable. The aortic arch was pulsating in response to the heart's pumping, so its M-mode image has a repetitive pattern (Fig. 3(g)). The irregularities at 1.8 and 5 seconds indicate respirations. Through power spectrum analysis of the line D-D' in the M-mode image, we verified that the aortic arch pulsated at 4.12 Hz, which corresponds to 247 bpm (Fig. 3(h)). Similarly, we acquired a 3D volumetric image of the throat, in which we could identify the carotid artery in a YZ plane cross-section (Fig. 3(i) and Supplementary Movie S3).
In vivo human imaging. We explored the in vivo imaging ability of the MEMS-US-HH system by applying it to a human volunteer's wrist. Figure 4(a) is a cross-sectional B-mode image of the volunteer's wrist in a relaxed state without compression, showing a radial artery and veins. Once we compressed the wrist with the MEMS-US-HH system, the veins disappeared but the radial artery, relatively speaking, maintained its shape and thickness ( Fig. 4(b)). Figure 4(c,d) are M-mode images simultaneously acquired from the radial artery and the vein, respectively. The red and blue lines in Fig. 4(c,d) indicate the beginnings of compression and relaxation, respectively. The thickness of the radial artery is unchanged, although the position is slightly shifted. However, the veins are almost completely blocked during the compression.
When we held the system in a fixed position, we were able to identify the arterial pulsation. Figure 4(e), an M-mode image of the radial artery in a static condition, shows a repetitive pattern generated by the pulsation. We performed a power spectrum analysis on the line B-B' and quantified the radial artery's pulsation frequency as 1.1 Hz, which corresponds to 66 bpm (Fig. 4(f)). This rate fits within the normal resting heart rate range of 60 to 100 bpm.
Discussion
We developed the first in vivo 2D and 3D US imaging system with a single-element US transducer and a water-proofed MEMS scanner. In small animal US imaging studies, it is difficult to position and fix a US probe to accurately locate internal structures (e.g., heart, artery, or vein) because a cross-sectional B-mode image is completely disturbed when the angle or position of the US probe is slightly mispositioned. Therefore, the tabletop-mode US imaging system was used in the small animal studies. Because it uses two single-axis motorized stages in combination with the MEMS scanner, the MEMS-US-TT system could precisely image 2D and 3D internal structures in small animals. We were also able to monitor dynamic movements, such as the heart beating and the aorta pulsating, because a high frame rate of 40 Hz was achieved by the rapid rotation of the MEMS scanner. Currently, the system faces three problems. First, because of the fixed focal zone, the image quality becomes poor in out-of-focus regions. This problem can be alleviated with synthetic aperture focusing technology using the coherence factor 9,33,34. Second, the imaging speed is currently limited by the fixed 10 kHz PRF of our pulser-receiver. Third, global 3D volumetric imaging cannot be performed seamlessly due to motion artifacts between consecutive B-mode images (e.g., respiration and heartbeat). These problems can be overcome by image processing methods, such as compressive sensing, and electrocardiogram (ECG) gating technology [35][36][37][38]. In addition, the continuously steered mirror could also introduce motion artifacts because of the directivity of the acoustic beam; however, the angle swept by the mirror during the time gap between transmission and reception was negligible compared with the numerical aperture of the transducer. We believe that the MEMS-US-TT system has great potential for preclinical cardiac disease studies, where it can measure such cardiac performance parameters as the ejection fraction, stroke volume, and fractional shortening. In addition, the 3D volumetric imaging capability of this system can benefit oncology by showing an entire tumor's shape. Moreover, ophthalmology widely uses ultrasound biomicroscopy with a high frequency US transducer.
We can easily transform this simple and compact MEMS scanner into another platform, the MEMS-US-HH system, for in vivo human imaging. We demonstrated its real-time human vessel imaging capability and quantified the radial artery's pulsation. Further, with a higher PRF of the pulser-receiver, Doppler imaging and elastography will also be feasible 39. Because our US imaging platform can be easily transformed, we can adapt any transducer to a specific clinical application. For example, a high frequency (20-30 MHz) US transducer can be used for US-guided radial artery catheterization and ophthalmology, and a low frequency (7.5-10 MHz) US transducer can be used for musculoskeletal imaging.

The length of the near field, N, is given by N = D²f/(4c), where D is the element diameter, f is the frequency of the transducer, and c is the speed of sound. The length of the focal zone, F_Z, of this transducer is 4.1 mm, which is given by F_Z = N S_F² [2/(1 + 0.5 S_F)], where S_F = F/N is the normalized focal length and F is the focal length.
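As a quick numerical check of the near-field relation above, the short snippet below evaluates N for a hypothetical element; the diameter, frequency, and sound speed are placeholder values, not the paper's exact transducer specification.

```python
def near_field_length(D, f, c=1540.0):
    """Near-field length N = D^2 * f / (4 * c), in metres."""
    return D ** 2 * f / (4.0 * c)

# Hypothetical 3 mm element at 20 MHz in soft tissue (c ~ 1540 m/s)
N = near_field_length(D=3e-3, f=20e6)
print(f"near field length: {N * 1e3:.1f} mm")
```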
To measure the spatial resolution of our system, we performed B-scan imaging of a tungsten wire with a diameter of 50 μm (Supplementary Fig. S1(a)). We extracted line profiles along the lateral and axial directions and fitted line spread functions (LSFs) to them. The full widths at half maximum (FWHM) of the LSFs were taken as the lateral and axial resolutions, respectively (Supplementary Fig. S1(b,c)).
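A short sketch of this resolution measurement, assuming SciPy is available: a Gaussian is fitted to the extracted line profile and the FWHM is taken from the fitted width (the profile data themselves are hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, offset):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def lsf_fwhm(positions, profile):
    """Fit a Gaussian LSF and return its full width at half maximum."""
    p0 = [profile.max() - profile.min(),            # amplitude guess
          positions[np.argmax(profile)],            # centre guess
          (positions[-1] - positions[0]) / 10.0,    # width guess
          profile.min()]                            # offset guess
    popt, _ = curve_fit(gaussian, positions, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
```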
Fabrication of a water-proofed MEMS scanner.
We developed a water-proofed MEMS scanner to steer the acoustic path rapidly for real-time B-mode US imaging. The fabrication process is detailed in previous studies [11][12][13]. Briefly, the MEMS scanner has a mirror assembly and a body assembly. The mirror itself is moved rapidly and precisely by the electromagnetic force generated between permanent magnets in the mirror assembly and coils in the body assembly. The mirror assembly is composed of an acoustically reflective mirror, two neodymium magnets, and a polydimethylsiloxane (PDMS) layer with tongues that act as rotation hinges. The PDMS layer can be steered because it has low stiffness. The acoustic mirror is made from a silicon carbide wafer, which has high acoustic impedance and hence high US reflectivity. This mirror is attached to the PDMS layer, and the two neodymium magnets are embedded in the PDMS layer. The body assembly is composed of an aluminum holder and two coils that are wrapped around steel rods and tightly embedded in the aluminum holder. The mirror assembly and the body assembly form one unit because the rotation hinges of the PDMS layer are adhered to the body part. The two coils in the body are closely aligned with the two neodymium magnets in the mirror assembly. When an AC voltage is applied to these two coils, an electromagnetic field is generated, and the acoustic mirror is steered continuously by the attractive and repulsive forces between the magnets and coils. The water-proofed MEMS scanner used in this study is driven at its resonance frequency, giving a scanning speed of 40 fps. The scanner is mounted in a 3D-printed fixture together with the customized transducer; the fixture sets the angle between the acoustic path and the mirror to 45° so that the acoustic path can be swept.
In vitro phantom imaging. Ultrasonic images of a leaf skeleton were obtained by the MEMS-US-TT system. The leaf skeleton sample was fixed on a glass slide and positioned within a water tank. We defined the MEMS scanning direction as X, the elevational direction of the MEMS module as Y, and the depth direction as Z. One local volumetric dataset was acquired using the MEMS scanner and the Y-axis motorized stage, and it covered 10 mm × 25 mm × 6 mm along the X, Y, and Z axes, respectively. After the first local volumetric dataset was acquired, the X-axis motorized stage was moved in a 4 mm step, and the next local volumetric dataset was obtained. Finally, the six local datasets were merged into one global volumetric dataset. Therefore, the total imaging range was 30 mm × 25 mm × 6 mm along the X, Y, and Z axes, respectively.
In vivo small animal imaging. In vivo small animal imaging was conducted using the MEMS-US-TT system, following regulations and guidelines approved by the Institutional Animal Care and Use Committee (IACUC) of Pohang University of Science and Technology (POSTECH). Healthy BALB/c mice (POSTECH Biotech Center) at an age of 6 to 9 weeks were used for the experiments. The mice were initially anesthetized with isoflurane (1 L/min of oxygen and 0.75% isoflurane) using a gas system (VIP 3000 Veterinary Vaporizer, Midmark, USA). Then, the hair was removed using an electric shaver and a depilatory. We applied US gel (Ecosonic, SANIPIA, Republic of Korea) to the animals, and then placed them in close contact with the outside of vinyl film on the bottom of the water tank.
The size of one cross-sectional B-mode image was 13 mm × 16 mm along the X and Z axes, respectively. The lateral length was defined by the maximum MEMS scanning range of 13 mm, and the axial depth was defined as 16 mm, which was determined by the focal length of the transducer and the pulser-receiver's voltage. When we monitored dynamic movements, such as the heartbeat, real-time B-mode imaging was performed at 40 fps. The acoustic focus was located approximately 6 mm below the mouse skin for heart imaging, and 3 mm below for throat imaging.
When we acquired volumetric data of the mouse heart, one local volumetric dataset covered a 13 mm × 10 mm × 16 mm region (X, Y, and Z axes), and the X-axis motorized stage moved in 4 mm steps. Four local datasets, with a total acquisition time of 33 seconds, were acquired to form a large-FOV global volumetric dataset. The post image processing (e.g., demodulation and scan conversion) took about 35 seconds, and the volume merging took about 15 seconds. For throat imaging, one local dataset covered a 13 mm × 15 mm × 16 mm region (X, Y, and Z axes), and nine local datasets were used to form one global volumetric image; the acquisition took 80 seconds, the post image processing 111 seconds, and the volume merging 32 seconds. The heart images measured 25 mm × 10 mm × 16 mm (X, Y, and Z axes), and the throat images measured 29 mm × 15 mm × 16 mm (X, Y, and Z axes). We could also view a real-time B-mode image while acquiring the 3D volumetric data.
In vivo human imaging. In vivo human imaging was performed with the MEMS-US-HH system. All experimental procedures followed a clinical protocol approved by the Institutional Review Board (IRB) at POSTECH. A healthy volunteer gave fully informed consent for imaging of her wrist.
The experiment proceeded as follows. First, the palm of the volunteer was positioned pointing upwards, and US gel was applied to the wrist above the radial artery. Then, B-mode images were captured at 40 fps over a FOV measuring 13 mm × 16 mm (X and Z axes). To differentiate the radial artery and veins, we repeatedly compressed and relaxed the volunteer's wrist with our system. Additionally, we monitored the pulsation of the radial artery. | 4,484.2 | 2020-04-16T00:00:00.000 | [
"Physics"
] |
Numerical Simulation of Fine Particle Solid-Liquid Two-Phase Flow in a Centrifugal Pump
To study the effect of fine particle size and volume concentration on the performance of solid-liquid two-phase centrifugal pump, the mixture multiphase flow model, RNG k-ε turbulence model, and SIMPLEC algorithm were used to simulate the two-phase flow of the centrifugal pump. The effects of particle size and volume concentration on internal pressure distribution, solid volume distribution, and external characteristics were analyzed. The results show that under the design discharge conditions, with the increase of particle size and volume concentration, the internal pressure of the flow field will decrease, and the volume fraction of solid phase in the impeller passage will also decrease as a whole. The solid particles gradually migrate from the suction surface to the pressure surface, and the particles in the volute channel are mainly concentrated in the flow channel near the outlet side of the volute. With the increase of particle size and volume concentration, the negative pressure value at the inlet of centrifugal pump increases, the total pressure difference at the inlet and outlet decreases, and the head and efficiency decrease accordingly.
Introduction
Solid-liquid two-phase centrifugal pumps are key power equipment for the hydraulic transport of solid-phase materials. They are widely used in various fields of the national economy such as water conservancy engineering, the petrochemical industry, marine metal mineral mining, and urban sewage treatment. The presence of solid particles makes the transport efficiency and reliability of this kind of centrifugal pump lower than those of a clean-water pump of the same structure. There has been a great deal of numerical simulation and experimental research work at home and abroad [1][2][3][4][5], but most of the existing research focuses on low-concentration solid-liquid two-phase flow and rarely involves dense fine-particle solid-liquid two-phase flow. Up to now, the influence of dense fine particles on the flow field in a centrifugal pump is still unclear, and the mechanism of the dense fine-particle solid-liquid two-phase centrifugal pump has not been revealed. Compared with low-concentration solid-liquid two-phase flow, the interaction force between the liquid and solid phases in high-concentration solid-liquid two-phase flow is stronger [6][7][8][9]. The liquid-phase flow drives the movement of the solid particles, and the loss of momentum and turbulent kinetic energy of the solid particles in turn affects the liquid-phase flow. Particles collide with each other frequently, and the movement of particles is affected not only by the liquid phase in the pump but also by the characteristics of the particles. Therefore, it is of great significance to study the influence of particle size and volume concentration on the flow performance of the centrifugal pump [10][11][12][13].
Due to the complexity of the solid-liquid two-phase flow in the centrifugal pump, the experimental research method is costly and time-consuming, and it is difficult to gain a direct understanding of the internal flow state [14]. With the development of computational fluid dynamics, experts at home and abroad have carried out in-depth research on solid-liquid two-phase flow in centrifugal pumps based on the CFD method [15][16][17][18]. Liu et al. [19] used CFD technology to simulate the solid-liquid two-phase flow field in a chemical process pump, calculated the flow field in the pump under different particle sizes and concentrations, and studied the distribution of the solid-liquid two-phase flow of the double-suction pump. Li et al. [20] took a spiral centrifugal pump as the research object and analyzed the distribution and variation of the initial solid-phase volume fraction along the internal flow field and its influence on the internal flow field of the spiral centrifugal pump. Zhang et al. [21] adopted the mixture model and moving grid technology to systematically study the influence of the properties of solid particles on the performance of the centrifugal pump, including particle size, volume fraction, and density, and put forward a no-overload performance prediction for the double-channel pump and a calculation method that substantially increases the accuracy of performance prediction. Cheng et al. [22] studied five different particle diameters and four different particle densities based on the particle model to investigate their influence on the solid volume concentration distribution, the solid-phase slip velocity, and the external hydraulic characteristics of double-blade sewage pumps. Liu et al. [23] calculated the unsteady flow field at solid volume fractions C_v = 0% and C_v = 20% in a multistage pump to analyze the influence of the added particles on its performance characteristics, and the distribution and movement of particles in the impeller and guide vane were obtained at the same time. Song et al. [24] studied the internal and external characteristics of a vortex pump at solid particle volume concentrations of 5%, 10%, and 15%, and the predicted pump head and efficiency were compared with the simulation results.
At present, some progress has been made in the numerical simulation of centrifugal pump flow performance [25][26][27][28][29][30][31][32][33][34][35][36]. However, the overall flow performance of the centrifugal pump, especially the two-phase flow performance under conditions of a high concentration of fine particles, is not sufficiently studied due to the complexity of the solid-liquid two-phase flow in the pump. On the basis of the above research, the present study investigated a 1PN/4-3 kW centrifugal pump to study the influence of various particle sizes and volume concentrations on the internal flow distribution in the pump and to compare the results with single-phase numerical simulation results for clean water. The results of this study are helpful for developing dense fine-particle solid-liquid two-phase flow centrifugal pumps, improving transport efficiency, and reducing operating costs.
Computational Model and Mesh Generation
2.1. Model Parameters. A 1PN/4-3 kW single-stage centrifugal pump was selected as the calculation model. The basic design parameters are as follows: flow rate Q = 16 m³/h, head H = 13 m, and rotational speed n = 1450 r/min. The main geometrical parameters of the impeller are as follows: inlet diameter d1 = 50 mm, outlet diameter d2 = 25 mm, and blade number Z = 5; the impeller is a semi-open impeller. The solid particles are glass beads with density ρ = 2450 kg/m³. Unigraphics NX software was used for 3D modeling of the pump, and the 3D model and hydraulic model of the centrifugal pump are shown in Figure 1.
Computational Domain and Grids.
In order to make the numerical calculation results close to the real situation, an inlet extension section was added before the inlet of the suction chamber and an outlet extension section after the outlet of the volute, so that the flow could develop fully. The computational domain includes the inlet extension section, impeller, volute, and outlet extension section. To improve the accuracy of the simulation results, an unstructured mesh with good adaptability was used to grid the whole flow passage, and the regions with large pressure and velocity gradients were locally refined. A total of 6 grid schemes were set up to verify grid independence under steady clean-water flow at the centrifugal pump design condition, with the number of grid cells increasing gradually from 1,258,424 to 4,921,565. The predicted head corresponding to the different grid numbers is listed in Table 1, and the grid independence plot is shown in Figure 2. As can be seen, the head is almost unchanged from the fourth grid onward; the calculated heads differed by less than 0.5%. Considering the available computing performance, the fourth grid scheme was chosen for the simulation; the final overall mesh number of the model is 3,385,676, and the node number is 578,448. The calculation domain and mesh of the centrifugal pump are shown in Figure 3.
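The grid-independence criterion can be checked with a few lines of Python; the head values below are placeholders standing in for Table 1, and the 0.5% threshold is the one quoted above.

```python
# Hypothetical predicted heads (m) for the six mesh schemes, coarse to fine
heads = [13.9, 13.6, 13.4, 13.31, 13.29, 13.28]

for i in range(1, len(heads)):
    rel_change = abs(heads[i] - heads[i - 1]) / heads[i - 1] * 100.0
    print(f"mesh {i} -> mesh {i + 1}: head change {rel_change:.2f} %")
# A mesh is accepted once successive changes stay below ~0.5 %.
```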
The Basic Assumptions.
The solid-liquid two-phase flow in the model pump is extremely complex. In order to simplify the calculation and improve the accuracy of the numerical simulation results, the following assumptions are adopted:
(1) The continuous phase (water) is an incompressible fluid, the particle phase is treated as a continuum, and the physical properties of each phase are constant.
(2) The particle phase consists of spherical glass beads with uniform particle size, and changes of particle shape are neglected.
(3) The internal flow of the pump is treated as steady flow, with water as the primary phase and the solid particles as the secondary phase.
(4) The axial velocity at the inlet is uniformly distributed, and the solid particles and water are evenly mixed with the same velocity.
Multiphase Flow Model.
Considering the interaction between the solid and liquid phases, and because the particle size is small (≤1 mm), the solid particles can be treated as a continuous medium and the phase coupling is quite strong. Therefore, the mixture model is adopted; its continuity and momentum equations are as follows.

Continuity equation:
∂ρ_m/∂t + ∇·(ρ_m v_m) = 0,
where ρ_m is the mixture density, kg/m³, and v_m is the mass-averaged velocity, m/s.

Momentum equation:
∂(ρ_m v_m)/∂t + ∇·(ρ_m v_m v_m) = −∇p + ∇·[μ_m(∇v_m + ∇v_m^T)] + ρ_m g + F + ∇·( Σ_{k=1}^{n} α_k ρ_k v_dr,k v_dr,k ),
where μ_m is the mixture viscosity, Pa·s; F is the body force, N; n is the number of phases; α_k is the volume fraction of the k-th phase, %; ρ_k is the density of the k-th phase, kg/m³; and v_dr,k is the drift velocity of the k-th phase, m/s.

The slip velocity v_qp is defined as the velocity of the secondary phase (p) relative to the primary phase (q):
v_qp = v_p − v_q.

The drift velocity is then related to the slip velocity by
v_dr,p = v_qp − Σ_{k=1}^{n} (α_k ρ_k / ρ_m) v_qk.

From the continuity equation of the secondary phase (p), the volume fraction equation of the secondary phase is obtained as
∂(α_p ρ_p)/∂t + ∇·(α_p ρ_p v_m) = −∇·(α_p ρ_p v_dr,p).
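A small helper illustrating the mixture quantities defined above, namely the mixture density and the mass-averaged velocity, for the water/glass-bead suspension considered here. The 2450 kg/m³ particle density is from the text; the water density and the example velocities are assumed values.

```python
import numpy as np

def mixture_density(alpha_s, rho_s=2450.0, rho_l=998.2):
    """rho_m = alpha_s*rho_s + (1 - alpha_s)*rho_l for a two-phase mixture."""
    return alpha_s * rho_s + (1.0 - alpha_s) * rho_l

def mass_averaged_velocity(alpha_s, v_s, v_l, rho_s=2450.0, rho_l=998.2):
    """v_m = (alpha_s*rho_s*v_s + alpha_l*rho_l*v_l) / rho_m."""
    rho_m = mixture_density(alpha_s, rho_s, rho_l)
    return (alpha_s * rho_s * np.asarray(v_s)
            + (1.0 - alpha_s) * rho_l * np.asarray(v_l)) / rho_m

# Example: Cv = 10 % with particles slightly lagging the liquid
print(mixture_density(0.10))                                # ~1143 kg/m^3
print(mass_averaged_velocity(0.10, [2.9, 0.0], [3.0, 0.0]))
```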
Boundary Conditions.
The RNG k-ε turbulence model was used to simplify and close the equations. The SIMPLEC algorithm was used for the numerical solution, and the convergence criterion was set to 10⁻⁶. The inlet boundary condition is a velocity inlet that considers only the axial velocity, with no fluid or particle motion in other directions; its value is obtained from the design flow rate and the inlet pipe diameter. The outlet boundary condition is a free outflow, and the inlet and outlet turbulence intensity is kept at the default value of 5%. The impeller wall is set to rotate, the other walls are stationary, the wall boundary condition is the no-slip condition, and the standard wall function is adopted near the walls.
The particle diameter was adjusted by defining the solid-phase parameters in the mixture model, and the volume concentration was set via the inlet solid-phase volume fraction in the boundary conditions.
Calculation Results and Analysis
In order to explore the influence of particle size and volume concentration on centrifugal pump performance during solid-liquid two-phase flow transport, the following calculation scheme was formulated (a small helper that enumerates these cases is sketched below):
(1) At the design flow rate and a solid-phase concentration of 10%, the two-phase flow field was numerically simulated for five particle sizes: 0.01 mm, 0.05 mm, 0.1 mm, 0.15 mm, and 0.2 mm.
(2) At the design flow rate and a particle size of 0.1 mm, the two-phase flow field was numerically simulated for five particle concentrations: 10%, 15%, 20%, 25%, and 30%.
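The ten simulation cases can be enumerated programmatically; the values below are exactly those listed in the calculation scheme.

```python
sizes_mm = [0.01, 0.05, 0.10, 0.15, 0.20]        # sweep 1: Cv fixed at 10 %
concentrations = [0.10, 0.15, 0.20, 0.25, 0.30]  # sweep 2: d fixed at 0.1 mm

cases = [(d, 0.10) for d in sizes_mm] + [(0.10, cv) for cv in concentrations]
for d, cv in cases:
    print(f"particle size {d:.2f} mm, volume concentration {cv:.0%}")
```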
External Characteristics of the Centrifugal Pump When Conveying Clean Water.
Firstly, a numerical simulation was carried out for the external characteristics of the centrifugal pump when transporting clean water, and the numerical simulation results were compared with the test results, as shown in Figure 4. It can be seen that the flow-head curves of the numerical simulation and the test follow basically the same trend, and the simulated head is higher than the tested head, because the simulation does not account for error factors introduced in the casting process of the centrifugal pump, such as the surface roughness of the impeller and volute. The flow-efficiency curve of the numerical simulation is also consistent with the trend of the test values; when the flow is close to the operating point, there is little difference between them. The maximum error between the numerical calculation and the pump test results is less than 8%; therefore, it is reasonable to believe that the model and method adopted in the two-phase flow numerical simulation of the test pump are reasonable.

Figure 5 shows the pressure distribution contours of the centrifugal pump with different particle sizes at C_v = 10%. It can be seen from the figure that, for a fixed volume fraction, the final pressure contours change little after fine particles of different sizes are introduced into the flow passage of the centrifugal pump. The pressure at the inlet of the flow passage is negative; it can be inferred that the particles at the inlet collide with each other, leading to a decrease in pressure, and the negative pressure can lead to cavitation in the centrifugal pump. As the particle size increases, the chance of collision also increases, which aggravates the negative pressure and finally leads to a continuous decline of the cavitation performance of the centrifugal pump. The pressure in the impeller passages is not completely symmetrical or consistent. The pressure near the outlet pipe of the volute is larger than that in other parts. On closer inspection, the pressure close to the outlet pipe for the 0.01 mm particle size is higher than that for the 0.2 mm size; therefore, it can be inferred that as the particle size increases, the overall pressure inside the flow field tends to decrease.
Influence of Particle Size on Internal Flow Performance.
As the particle size increases, the pressure changes little in the impeller passage, but after the two-phase flow enters the pressure chamber, the pressure begins to change obviously; in particular, the pressure along the volute wall generally decreases. It can be seen that the pressure changes little at the impeller inlet but decreases obviously at the volute outlet, and the pressure changes very sharply on both sides of the tongue. Because the fluidity of the solid particles decreases as the particle size increases, the energy consumed in conveying the particles increases, resulting in a continuous drop of the total pressure in the pump. The macroscopic manifestation is that the efficiency and head of the centrifugal pump gradually decrease. Figure 6 shows the liquid-phase velocity distribution in the middle section. The velocity increases gradually from the impeller inlet to the outlet. At the same radial distance, the relative velocity at the pressure side of the blade is lower than that at the suction side in the latter part of the blade. With the change of particle size, the liquid velocity distribution changes little. A high-speed region appears near the back side of the impeller outlet, and there is a wake at the outlet of the impeller channel.
As the particle size increases, the wake area of the impeller gradually increases, the high-speed region near the tongue grows, and the low-speed region at the outlet of the volute grows. This is because, when the particle size increases, the flow performance of the particles decreases, the particle concentration near the back of the impeller decreases, and the relative liquid concentration increases, so the extent of the high-speed region near the back side of the impeller increases gradually. As the particle size increases, the number of particles refluxing into the impeller channel from the tongue decreases, the concentration of the liquid phase increases, and the extent of the high-speed zone gradually increases, indicating that the particle size has a serious impact on the reflux near the tongue. Figure 7 shows the contours of solid-phase volume distribution in the centrifugal pump impeller for different particle sizes at C_v = 10%. It can be seen from the figure that the particle volume concentration distributions for different particle sizes follow a consistent trend. Since the inlet boundary condition is set with the same initial velocity, the solid-phase volume concentration distribution is relatively uniform near the impeller inlet. As the flow progresses, the impeller takes effect and the solid volume concentration distribution changes. With the increase of particle size, the concentration of solid particles on the pressure surface of the impeller becomes significantly higher than that on the suction surface, and the volume concentration of solid particles in the impeller channel decreases as a whole. The reason is that, with the increase of particle size, the centrifugal force from the impeller increases, and the solid particles deviate from the suction surface of the impeller and move towards the pressure surface.
As the particle size increases, the particle concentration at the head of the blade working face increases, and the particle concentration at the back of the blade decreases along the direction of the impeller passage. When the particle size is 0.01 mm, the concentration distribution on the blade is relatively uniform. From the inlet to the outlet of the impeller, the particle concentration on the working surface is higher at the inlet, decreases along the impeller passage, and increases along the back of the blade. As the particle size increases, the particle concentration at the inlet of the blade working face increases, and the particle concentration on the back side decreases. When the particle size is 0.2 mm, it can be clearly seen that the particle concentration distribution on the back side of the blade is extremely uneven. The larger the particle size, the greater the inertial force on the particle, and the more obvious the migration from the working face to the back. Therefore, it is concluded that the particle diameter has a great influence on the overall concentration distribution on the working face and back side of the blade.
Influence of Volume Concentration on Internal Flow Performance.
Figure 8 shows the pressure distribution contours of the centrifugal pump for different solid-phase volume concentrations at d = 0.1 mm. It can be seen from the figure that the influence of volume concentration on the pressure in the full flow passage of the centrifugal pump is similar to that of particle size. When the particle size is fixed, with the increase of solid volume concentration, the negative pressure at the inlet of the impeller increases, the possibility of cavitation increases, the pressure difference between the pump inlet and outlet decreases, and the head of the centrifugal pump decreases.
As the particle volume fraction increases, the negative pressure at the impeller inlet gradually increases, and the negative pressure is mainly distributed on the suction surface of the blade head. The volute diameter changes most at the tongue, so the pressure changes obviously there. When conveying the mixed medium, the influence of the particle volume fraction on the pump inlet and outlet and the near-wall region of the volute is relatively large. With the increase of the particle volume fraction, collisions between particles and with the wall become more frequent, and the average density of the fluid increases. Therefore, more energy is needed to transport the mixed medium, which leads to the increasing negative pressure at the inlet and outlet of the impeller and the increasing total pressure loss in the pump. It can be seen from Figure 9 that the liquid-phase velocity increases gradually along the impeller channel, and the smaller the volute diameter, the greater the velocity.
There is a high-speed area near the tongue, and another high-speed area appears near the back side of the impeller outlet.
As the particle volume fraction increases, the liquid velocity at the impeller outlet increases obviously, and the high-speed region in the pressure chamber is mainly located where the volute diameter is small. On the whole, the change of particle volume fraction does not alter the distribution of the liquid velocity, indicating that the particle volume fraction has little effect on the liquid velocity. Figure 10 shows the contours of solid-phase volume distribution in the centrifugal pump impeller for different volume concentrations at d = 0.1 mm. It can be seen from the figure that, during the movement of the particles from the inlet to the outlet, the solid-phase volume concentration on the suction surface of the blade is significantly higher than that on the pressure surface. With the increase of the initial solid-phase volume concentration, the particles tend to move towards the pressure surface. At C_v = 10%, the particles enter the impeller flow channel at a certain inlet angle and strike the back of the blade; as the flow develops, inertial forces carry them towards the blade pressure side. With the increase of volume concentration, this impact on the blade back gradually disappears, showing that the solid-phase volume concentration has a certain influence on the particle distribution. The particle concentration along the back of the volute first decreases and then increases, and the extent of the concentrated region decreases gradually towards the blade tail as the volute diameter increases. In the region near the volute wall, the particle concentration gradually increases with the volute diameter and is highest at the maximum volute diameter; the particle concentration is generally higher at the head of the impeller working face and in the tail wake at the back of the blade. With the increase of the particle volume fraction, the particle concentration in the impeller increases gradually, but the overall distribution remains unchanged, which indicates that the change of particle volume fraction only affects the particle concentration in the pump but does not affect the overall distribution of particles in the pump.
Effects of Solid Particles on Pump Characteristics.
In this paper, the internal flow field of the centrifugal pump was simulated under the design flow condition for particle diameters of 0.01 mm, 0.05 mm, 0.1 mm, 0.15 mm, and 0.2 mm and volume concentrations of 10%, 15%, 20%, 25%, and 30%. The influence of particle diameter and volume concentration on the head and efficiency of the centrifugal pump was analyzed from the simulation results. In order to express the calculation results more intuitively, they are shown in chart form below. Figure 11 shows that the head and efficiency of the centrifugal pump decrease with the increase of particle diameter. When the particle diameter is less than 0.15 mm, the particle diameter has little influence on the head and a certain influence on the efficiency. When the particle diameter is larger than 0.15 mm, the head and efficiency decrease significantly. It can be seen that the solid-phase particle size suitable for transport by this model pump is smaller than 0.15 mm. The reason for the decline of hydraulic performance may be as follows: with the increase of particle diameter, more energy is needed to keep the particles in suspension in the flow, so the hydraulic loss becomes more serious, leading to the deterioration of the external characteristics.
It can be seen from Figure 12 that, for the same particle diameter, the head and efficiency of the centrifugal pump decrease significantly with the increase of the solid-phase volume concentration, and the trend is approximately linear. The reason for the decline of hydraulic performance may be that, with the increase of the solid-phase volume concentration, the viscosity of the solid-liquid mixture increases, the probability of collisions between particles also increases greatly, and the hydraulic loss worsens, resulting in the decline of the external characteristic performance.
Conclusions
The mixture multiphase flow model, the extended RNG k-ε turbulence model, and the SIMPLEC algorithm were used for numerical simulation of solid-liquid two-phase turbulent flow in a centrifugal pump with the fluid dynamics software Fluent. The main conclusions are as follows:
(1) As the particle size increases, the negative pressure at the inlet of the impeller intensifies, and the pressure in the impeller passages is not completely symmetrical or consistent. The pressure near the outlet pipe of the volute is large, and the overall pressure inside the flow field decreases. The solid-phase volume concentration in the impeller passage decreases as a whole, and the solid particles tend to migrate away from the suction surface of the impeller towards the pressure surface due to the centrifugal force of the impeller.
(2) With the increase of solid-phase volume concentration, similar to the influence of particle size on pressure, the negative pressure at the inlet of the impeller increases, the possibility of cavitation in the pump increases, and the pressure at the outlet decreases. The solid-phase volume concentration on the suction surface of the blade is higher than that on the pressure surface, and with the increase of the initial solid-phase volume concentration the particles tend to move towards the pressure surface.
(3) Under the design flow condition, with the increase of particle size and volume concentration, the negative pressure at the inlet of the centrifugal pump increases, the total pressure difference between the inlet and outlet decreases, and the head and efficiency decrease accordingly.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 5,839.4 | 2021-02-17T00:00:00.000 | [
"Engineering"
] |
Nonlinear Feature Extraction for Hyperspectral Images
In this study, non-linear dimension reduction methods have been applied to a hyperspectral image in order to increase the classification accuracy of the feature extraction step. Furthermore, image segmentation has been performed by taking into consideration the spatial structure of hyperspectral images and passing from the high-dimensional space to a low-dimensional space. The results have been compared with those obtained from image segmentation performed using a single pixel from this spatial structure. The advantages of applying the dimension reduction techniques to neighboring pixels for the segmentation of the hyperspectral image are presented in the experimental results section.
Introduction
With the advance of remote sensing technology in recent years, hyperspectral image scanners have become very popular in many scientific areas such as geosciences and medicine. Compared with multispectral images, hyperspectral images contain much richer data. However, the presence of hundreds of adjacent spectral bands in hyperspectral images negatively affects pattern recognition algorithms. That is why the most significant problems in hyperspectral image processing are dimension reduction and feature extraction. Feature extraction and dimension reduction techniques are closely related in nature. By extracting new features from the original spectral bands as part of dimension reduction, classification and segmentation algorithms from machine learning and pattern recognition can be used more effectively in a lower-dimensional space. Dimension reduction methods can be realized in two ways: feature selection or feature extraction. In feature selection methods, only a small subset of highly informative bands is chosen. Feature extraction refers to the mapping of high-dimensional data onto a lower-dimensional space; in feature extraction methods, features are extracted using linear or nonlinear functions of the original bands. In this study, dimension reduction, band selection, and especially feature extraction are performed. Furthermore, by using the features extracted from the hyperspectral images together with the class labels, class analysis is attempted. There are many methods for feature extraction. The oldest and hence best known of these techniques is the linear technique named PCA. However, linear techniques are not sufficient for such data; non-linear techniques are needed to classify complex data, and in recent years many non-linear techniques have been used for this purpose. In addition to non-linear techniques preserving global characteristics, such as Kernel PCA, Isomap, and Diffusion Maps, we have also used non-linear techniques preserving local characteristics, such as LLE, Hessian LLE, Laplacian Eigenmaps, and LLC [9]. Furthermore, PCA has been used as a baseline for comparing the non-linear techniques. As a second step, 5, 10, 15, 20, and 50 spatial neighborhoods were used on the data sets created with these feature extraction methods in order to classify the images. Each data sample is a pixel, and the vector created for a pixel contains a value for every band. If the class of a pixel is determined, its neighboring pixels are likely to belong to the same class and represent its spatial neighborhood [1]. The spatial neighborhood of a pixel of an image is obtained by taking the n×n neighborhood around it and applying it to the image. The proximity within a spatial neighborhood is measured with the Euclidean distance between two points. In spatial neighborhoods, two close data points are similar and their probability of being in the same class is very high; as the distance increases, the similarity decreases within the n×n neighborhood. In the second part, the linear and nonlinear techniques are described; the data set and the study are presented in the third part. In the last part, the experimental results are shown, together with the advantages of this study: the classification performance can be kept the same or increased.
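The n×n spatial neighborhood described above can be turned into a feature vector per pixel by stacking the spectra of the surrounding pixels. A minimal sketch, assuming a (rows × cols × bands) hyperspectral cube and reflect padding at the image borders:

```python
import numpy as np

def neighborhood_features(cube, n=3):
    """Return a (rows*cols, n*n*bands) matrix of stacked neighbor spectra."""
    r = n // 2
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    rows, cols, bands = cube.shape
    feats = np.empty((rows * cols, n * n * bands))
    idx = 0
    for i in range(rows):
        for j in range(cols):
            feats[idx] = padded[i:i + n, j:j + n, :].reshape(-1)
            idx += 1
    return feats
```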
Methodology
In the real world, we struggle with very large amounts of data, and a great deal of memory and computation is needed to classify the data correctly. Reducing the data makes it possible to use it efficiently and to reduce cost. To perform this reduction without significant loss of information, dimension reduction methods that decrease the number of bands are needed. This is an especially important step for the processing of high-dimensional data. Feature extraction is performed on the bands with linear or nonlinear methods; in this study, feature extraction has also made use of the class information [14]. The dimensionality reduction methods used in this study are presented below.
Diffusion Maps
The Diffusion Maps technique is based on a Markov chain. The diffusion distance between two points x and y is defined through the random-walk transition probabilities and can be written as D_t(x, y)² = Σ_z ( p_t(x, z) − p_t(y, z) )² / φ(z), where φ(z) weights each point by its density. In Diffusion Maps, the edge weights of the data graph are calculated with a Gaussian kernel function [2]. This method combines all paths along the graph and uses the diffusion distance to reduce the dimension. Processing continues along the graph by discarding the eigenvectors with the smallest eigenvalues.
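A minimal diffusion-maps sketch following this description: Gaussian kernel weights, row normalization into a Markov transition matrix, and an eigendecomposition that keeps the leading non-trivial eigenvectors. The kernel bandwidth, diffusion time, and number of components are assumed parameters.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, n_components=10, eps=1.0, t=1):
    W = np.exp(-cdist(X, X, "sqeuclidean") / eps)   # Gaussian kernel weights
    P = W / W.sum(axis=1, keepdims=True)            # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1), keep the next ones.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]
```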
Kernel PCA
Kernel PCA (KPCA) is an extension of the PCA method. While PCA is a linear method, KPCA is a non-linear technique that generalizes the linear one. The data dimension is reduced by using a kernel matrix K whose entries are computed from the data points, K_ij = κ(x_i, x_j). After centering the kernel matrix K, its d principal eigenvectors can be found. The essential step in KPCA is the choice of the kernel function, which can be a linear kernel, a Gaussian kernel, or a polynomial kernel. KPCA gives quite successful results in face detection and speech recognition.
Principal Component Analysis (PCA)
PCA is the most popular orthogonal linear transformation. PCA represents the data in a low-dimensional space along the directions of greatest variance; high-variance components are preferred to low-variance ones. By computing the covariance matrix of the data matrix X, one can find the linear mapping M that maximizes the cost function; it extracts the eigenvectors with the largest eigenvalues. PCA preserves the Euclidean distances between data points. The PCA transformation is the linear map defined by the orthogonal matrix W (with W^T W = I), where the columns of W are the eigenvectors of the covariance matrix corresponding to the largest eigenvalues.
Isomap
Isomap is a low-dimensional embedding method that, instead of the Euclidean distances used in the multidimensional scaling algorithm, uses geodesic distances on a weighted neighborhood graph. The geodesic distance is the shortest-path distance, and to compute it the neighborhoods of all data points must first be found. The geodesic distance is calculated on the neighborhood graph of the data points x_i (i = 1, 2, ..., n). Furthermore, in the Isomap method the choice of the neighborhood parameter is also important. Each data point is connected to its nearest neighbors in Euclidean distance in the high-dimensional space, and the shortest path between two points is found with Dijkstra's algorithm.
Local Linear Embedding (LLE)
LLE is a method similar to Isomap; however, unlike Isomap, it only preserves the local characteristics of the data-point graph. In LLE, each data point x_i is reconstructed as a linear combination of its k nearest neighbors with weights w_ij, thereby preserving the neighborhood relations of each data point. The weights w_ij are invariant to translation, rotation, and scaling. The weight matrix W = (w_ij) minimizes the reconstruction cost function. After calculating the weights, the data are mapped to the low-dimensional space while preserving the local neighborhoods.
Hessian LLE
As in LLE, sparse matrix techniques are used. Hessian LLE builds the data graph using the k nearest neighbors and embeds the data in a low-dimensional space by analyzing the curvature of the manifold through local Hessians. For each data point, approximate local tangent coordinates are computed from the eigenvectors of the local covariance matrix, and Hessian estimates of the manifold are formed in these coordinates. An embedding is then found that minimizes the curviness (the Hessian functional) of the manifold, and the matrix built from the eigenvectors of the covariance matrix is orthonormalized. Compared to LLE and Laplacian Eigenmaps, it is the slowest and gives the worst performance among these sparse spectral techniques; however, it is successful on convergence problems.
Laplacian Eigenmaps
Similar to LLE, Laplacian Eigenmaps maps the data to a low-dimensional space while preserving the local characteristics of the manifold. The method first creates a graph G connecting each point to its k nearest neighbors and assigns the edge weights. The edge weights between data points x_i and x_j are calculated with a Gaussian kernel. In the cost function of the low-dimensional embedding, the representations y_i are constrained by these weights so that small distances between data points x_i and x_j are preserved; the cost function is then minimized using spectral graph theory. With M the degree matrix and W the weight matrix, the graph Laplacian is L = M − W. An eigendecomposition of the graph Laplacian is performed and the low-dimensional embedding is created.
Locally Linear Coordination (LLC)
LLC computes local linear models and then performs a global alignment of them. A mixture of factor analyzers is estimated with the EM (Expectation-Maximization) algorithm; the mixture of m factor analyzers gives the local representations of the data and the corresponding responsibilities. The weight matrix W contains the weights computed as in LLE.
Multidimensional Scaling (MDS)
MDS is a technique based on a similarity matrix. It positions the points so that the distances in the embedding match the original pairwise similarities between the N data points as closely as possible, indicating how much the objects resemble each other. Furthermore, MDS can also work with dissimilarity matrices. There are many MDS algorithms, and they can be classified according to the type of input matrix.
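Several of the methods reviewed above are available in scikit-learn, which gives a compact way to embed a matrix of spectral signatures (rows = signatures, columns = bands). The neighborhood size and target dimension below mirror the values used in the experiments (k between 5 and 50, 10 or 15 components), but the exact settings are assumptions here.

```python
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

def embed_signatures(S, n_components=10, n_neighbors=10):
    """Apply several linear/non-linear reducers to the signature matrix S."""
    methods = {
        "PCA": PCA(n_components=n_components),
        "KPCA": KernelPCA(n_components=n_components, kernel="rbf"),
        "Isomap": Isomap(n_components=n_components, n_neighbors=n_neighbors),
        "LLE": LocallyLinearEmbedding(n_components=n_components,
                                      n_neighbors=n_neighbors),
        "Laplacian": SpectralEmbedding(n_components=n_components,
                                       n_neighbors=n_neighbors),
    }
    return {name: m.fit_transform(S) for name, m in methods.items()}
```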
Experimental Study and Results
In this study, the Salinas data set was used, acquired over 224 bands by the AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) sensor. The Salinas data set offers high spatial resolution and covers the Salinas Valley in California. It is composed of 217 samples, 512 lines, and 16 classes in total.
In this study, in order to make processing of the whole data set feasible, spectral signatures satisfying certain characteristics were first obtained; these spectral signatures are assumed to represent the spectral range as well as possible. The non-linear feature extraction techniques were applied not to the whole data set but to these spectral signatures. Reduced data sets were then obtained by extracting 10- and 15-dimensional feature vectors with the dimension reduction techniques used in this study. Furthermore, RBF and kNN interpolation were used to generalize the embedding to the whole data set. With the RBF approach, the reduced signatures were learned by an artificial neural network, and each spectral signature was mapped to the low-dimensional space through the trained network structure. The kNN interpolation is computed from the k closest samples: for each pixel, the distances to the spectral signatures are found, and with k = 9 the feature vector of the pixel is obtained from the non-linearly reduced representations of the k spectral signatures closest to it. The purpose is to approximate, for every pixel, the value that would be obtained with the non-linear projection methods. Applied over all spectral signatures, this procedure provides dimension reduction for the whole image. The experimental results for the Salinas data set are reported for 10 and 15 dimensions; the 224-band data set was reduced to 10 and 15 bands. Segmentation and classification accuracy were measured using 5, 10, 15, 20, and 50 neighborhoods. Segmentation results obtained using a single pixel as well as results obtained by taking neighborhoods into account were evaluated for the 16 classes. For comparison, the results obtained with the linear method PCA are also displayed. The results of the study are shown in the following tables.
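The kNN interpolation step can be sketched as follows; the exact weighting scheme used in the study is not fully specified, so the inverse-distance weights below are an assumption, and all variable names are placeholders.

```python
import numpy as np

def knn_interpolate(pixels, signatures, embedded, k=9, eps=1e-12):
    """Extend a non-linear embedding from landmark spectral signatures to every pixel.

    pixels:     (n_pixels, n_bands) spectra to be reduced
    signatures: (n_landmarks, n_bands) spectra that were embedded directly
    embedded:   (n_landmarks, n_reduced) low-dimensional coordinates of the signatures
    """
    out = np.empty((pixels.shape[0], embedded.shape[1]))
    for i, p in enumerate(pixels):
        d = np.linalg.norm(signatures - p, axis=1)   # distances to all landmark signatures
        nn = np.argsort(d)[:k]                       # k closest spectral signatures
        w = 1.0 / (d[nn] + eps)                      # inverse-distance weights
        out[i] = (w / w.sum()) @ embedded[nn]        # weighted mean of their embeddings
    return out
```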
The results are presented in Table I and Table II.
Conclusion
In this study, spatial coherence was introduced into dimensionality reduction and its contribution to image segmentation was demonstrated. Spatial coherence is introduced by comparing individual pixels based on the neighborhood structure used by the non-linear dimension reduction techniques, and it was used to increase the classification accuracy of these techniques. Reduction to 10 and 15 bands while considering 10-20 neighbors gave the best classification results; the classification accuracy decreased when the data were reduced to too few or too many bands. Choosing an appropriate number of spatial neighbors is therefore important for obtaining the best results. | 2,892.4 | 2015-12-05T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
The quantitative impact of read mapping to non-native reference genomes in comparative RNA-Seq studies
Sequence read alignment to a reference genome is a fundamental step in many genomics studies. Accuracy in this fundamental step is crucial for correct interpretation of biological data. In cases where two or more closely related bacterial strains are being studied, a common approach is to simply map reads from all strains to a common reference genome, whether because there is no closed reference for some strains or for ease of comparison. The assumption is that the differences between bacterial strains are insignificant enough that the results of differential expression analysis will not be influenced by choice of reference. Genes that are common among the strains under study are used for differential expression analysis, while the remaining genes, which may fail to express in one sample or the other because they are simply absent, are analyzed separately. In this study, we investigate the practice of using a common reference in transcriptomic analysis. We analyze two multi-strain transcriptomic data sets that were initially presented in the literature as comparisons based on a common reference, but which have available closed genomic sequence for all strains, allowing a detailed examination of the impact of reference choice. We provide a method for identifying regions that are most affected by non-native alignments, leading to false positives in differential expression analysis, and perform an in depth analysis identifying the extent of expression loss. We also simulate several data sets to identify best practices for non-native reference use.
Introduction
Sequence read alignment to a reference genome is currently a key step in many common bioinformatics workflows. Researchers frequently encounter situations in which the most appropriate reference genome for a reference-based analysis is not available, and a homologous alternative must be used. This can lead to inaccuracies in mapping and subsequently in quantitation and interpretation. These inaccuracies skew the results of otherwise sound analysis workflows. This study approaches the problem of non-native reference alignment by comparing the effects of read alignment to native and heterologous reference genomes. We describe a method to identify false positives caused by improper alignments to the heterologous reference, and examine the underlying causes, to provide a set of best practices for research that makes use of non-native reference genomes. Comparative analysis of microbial genomes since the advent of high-throughput sequencing has shown that prokaryotic genomes are dynamic and can be highly diverse, even among closely related species or strains. Analysis of bacterial genomes through sequencing-based methods such as RNA-Seq has made it possible to rapidly advance our understanding of basic biological function, identify host-pathogen interactions, and engineer microbes for industrial and pharmaceutical applications [1]. It has become apparent in recent years that the highly dynamic nature of prokaryotes has led to extensive genomic diversity. In 2001, sequencing of E. coli O157:H7 identified over 1300 strain-specific genes when compared with E. coli K-12, the strain previously sequenced and thought at the time to be fairly representative of the model organism [2]. The identification of these genes, found to be involved primarily in virulence and metabolism, showed that even closely related strains can differ significantly. Since that time, the availability of sequencing data from multiple strains of the same organism has increased, but due to the vastness of biodiversity in prokaryotes, it is still not uncommon to find that the most appropriate reference genome is not available, or exists only as a draft. Researchers then must resort to using finished evolutionary neighbor reference genomes in their studies, even when the sequence reads they wish to map were produced from a heterologous strain.
Many common analysis pipelines rely on accurate alignment of reads to a corresponding reference genome. Differential expression studies, for example, rely on aligning transcriptome reads to a reference, extracting count data, and examining the differences in transcript read levels for the genome under study. In cases where two or more closely related species or strains are being studied, a common approach is to simply map reads from all organisms to a common reference genome. The assumption is that the differences between closely related microbes are insignificant enough that the results of differential expression will not be influenced, or otherwise, that genes that are absent in one sample or the other should simply be excluded from analysis, while shared genes that have seemingly reasonable read counts in both organisms can be used for differential expression analysis. For example, we previously investigated differences in gene expression in clinical strains of Vibrio vulnificus, when exposed to either artificial seawater or human serum environments, as a model for the expression changes the organism undergoes when infecting a human host. V. vulnificus CMCP6 and V. vulnificus YJ016 expression levels were compared by using the CMCP6 strain as a common reference genome [3]. Using a common reference genome to make comparisons between different strains is also common in eukaryotic systems, and similar methods were used in a comparative study of strains of Bombyx mori [4]. The approach of using a common reference genome for different strains is unable to correctly represent factors that can influence read counts in the non-homologous read set, such as the frequency and density of mismatches due to natural divergence between strains. The degree of error in these studies will be affected by how alignment algorithms handle reads with multiple possible mapping positions, especially when mutations decrease mapping position certainty. Comparing data across strains becomes increasingly less sound as evolutionary distance between read sets and the reference genome increases, and this is particularly true of prokaryotic species, where divergence occurs at an accelerated pace. In this study, we examine the potential impact of using a heterologous reference genome and the effects on read alignment, and by extension differential expression. We show how differences in reference genomes influence read alignment and gene expression results when using common analysis techniques. We then provide an approach for identifying false positives caused when comparing multiple strains or species by means of alignment to a common reference genome, and outline best practices for the use of heterologous reference genomes in cross-strain analyses.
Data and heterologous reference distance
For this study, transcriptome data from two different organisms were used. RNA-Seq data for two strains of Vibrio vulnificus, CMCP6 and YJ016, as described by Williams et al., were used. These data are available under the NCBI Bioproject identifier PRJNA252365. Another publicly available data set, consisting of RNA-Seq data from Escherichia coli strains K12 (MG1655), a common laboratory strain, and strain IAI1, a commensal model strain, under three experimental conditions [5] was also analyzed (Bioproject identifier: PRJNA310115). The V. vulnificus data set consists of two experimental conditions, human serum and artificial seawater, each having two replicates, while the E. coli data set consists of three experimental conditions, batch, chemostat, and starvation, with each condition having two replicates as well. A summary of the data used in this study can be seen in Table 1. In all cases sequencing was performed using the Illumina HiSeq platform. The same analysis workflow was applied to each data set.
Structural comparisons between reference genomes for both bacterial strains were performed using Mauve [6].
Orthology mapping
In order to make accurate comparisons of data as aligned to heterologous reference genomes, orthology relationships between genes were first determined. All-against-all protein BLAST was used to find orthologous regions between strains. Regions were determined to be orthologous if they showed greater than 95% identity, were at least 200 base pairs in length, and had no more than 5 mismatches at the protein level.
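A minimal sketch of how such a filter could be applied to a tabular BLAST report; the default 12-column -outfmt 6 layout is assumed, the file name is hypothetical, and the mismatch threshold is applied here to the alignment report rather than recomputed at the protein level.

```python
import pandas as pd

cols = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
        "qstart", "qend", "sstart", "send", "evalue", "bitscore"]
hits = pd.read_csv("all_vs_all.blast.tsv", sep="\t", names=cols)

# Thresholds from the text: >95% identity, alignment length >= 200, no more than 5 mismatches
orthologs = hits[(hits.pident > 95) & (hits.length >= 200) & (hits.mismatch <= 5)]
ortholog_pairs = orthologs[["qseqid", "sseqid"]].drop_duplicates()
```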
RNA-Seq read mapping
Each RNA-Seq read set was aligned to both potential reference genomes for their respective species using Bowtie2 [7]. In the case of E. coli, all replicates and conditions from both strains K12 and IAI1 were aligned to both the K12 and IAI1 reference genomes. Similarly, all read sets for V. vulnificus were aligned to both the CMCP6 and YJ016 reference genomes. All alignments were performed using Bowtie2's sensitive alignment (-M 3, -N 0, -L 22). Raw read counts were then extracted from each alignment using the featureCounts Bioconductor package [8]. Next, the previously computed orthologous gene mapping information was used to map read count data for all conditions and replicates with orthologous genes. This process was performed on all samples and replicates for both E. coli and V. vulnificus, so that each read set is counted for alignment to both their native reference genome and the heterologous reference genome for their respective species. This makes it possible to make direct comparisons of the effects of mapping identical read data to heterologous genomes. As the process was applied for all conditions to both native and heterologous alignments, cross effects can be identified to increase confidence in observations. An overview of this data processing pipeline is shown in Fig 1.
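The count-mapping step might look like the pandas sketch below; the file names and the ortholog table layout are hypothetical, and the standard featureCounts tab-separated output (annotation columns followed by one count column per BAM file) is assumed.

```python
import pandas as pd

def load_counts(path):
    # featureCounts output: Geneid, Chr, Start, End, Strand, Length, then one column per BAM
    df = pd.read_csv(path, sep="\t", comment="#")
    return df[["Geneid", df.columns[-1]]].rename(columns={df.columns[-1]: "count"})

native = load_counts("k12_reads_vs_k12.counts.tsv")    # reads aligned to the native reference
hetero = load_counts("k12_reads_vs_iai1.counts.tsv")   # same reads aligned to the heterologous one
ortho = pd.read_csv("ortholog_pairs.tsv", sep="\t")    # columns: k12_gene, iai1_gene

# Join the two count tables through the ortholog mapping so each row compares the same
# read set as aligned to the native and to the heterologous reference.
merged = (ortho
          .merge(native, left_on="k12_gene", right_on="Geneid")
          .merge(hetero, left_on="iai1_gene", right_on="Geneid",
                 suffixes=("_native", "_heterologous")))
```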
Differential expression analysis
Differential expression analysis was performed for all strain/condition permutations for each organism, using DESeq2 [9]. Principal component analysis was performed to confirm the integrity of replicates for all read sets.
False positive identification
Differential expression analysis was performed on all permutations of data sets for each organism as aligned to native and heterologous reference genomes. False positives were identified in two ways. When identical data sets were aligned to both references for a single condition, differential expression analysis was performed and the set of differentially expressed genes was taken as false positives. For example, E. coli strain K12, batch condition, was aligned to both native and heterologous genomes and differential expression analysis was performed, identifying 15 false positives. When identifying false positives across multiple conditions, differential expression for two conditions is performed with both conditions aligned to the native and heterologous reference genomes, and false positives are then identified as the set difference between the two differential expression results. For example, when comparing the E. coli strain K12 batch condition to the K12 chemostat condition, differential expression is performed on the batch vs. chemostat reads as aligned to the K12 genome, and then as aligned to the IAI1 genome. True positives are considered to be the intersection of these two result sets, and false positives are considered to be the difference of the two sets.
Fig 1. Orthology is identified between heterologous strains and reads are aligned to both reference genomes. Using the orthology mapping information, extrapolated read alignment counts are compiled such that counts can be compared for each read set as aligned to each reference genome. (https://doi.org/10.1371/journal.pone.0180904.g001)
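In code, the set logic described above can be captured with a small helper; this is a hypothetical sketch operating on sets of gene identifiers called differentially expressed by DESeq2, and the paper's phrase "set difference" is read here as the genes called under only one of the two references.

```python
def classify_calls(de_native, de_heterologous):
    """Split DE calls into reference-independent (true) and reference-dependent (false) positives."""
    true_positives = de_native & de_heterologous    # called regardless of the reference used
    false_positives = de_native ^ de_heterologous   # called under only one of the references
    return true_positives, false_positives

# Toy usage with made-up gene sets; in the single-condition check (identical reads, two
# references), every DE call is instead treated directly as a reference-induced false positive.
tp, fp = classify_calls({"cusC", "hisD", "yhbI"}, {"hisD"})
```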
Simulation of reads and genomes
For the simulations described, reference genomes were simulated using Simulome [10].
Reference genome simulations were based on the E. coli K12 strain. The simulated reference genome contained 500 genes, with lengths selected in a normal distribution around the mean length of genes for the K12 strain. Each simulated gene was separated from its neighbor by a randomly sized intergenic region. A heterologous version of this reference genome was also simulated, in which each gene contained 35 SNPs, approximately the average number of SNPs observed for false positives identified for E. coli that were caused by SNP-induced read loss. Read data for the simulated genome was then simulated using the ART package [11]. Read data was created for the simulated reference genome based on ART's Illumina HiSeq 2500 model, with simulated fold coverage of 150, for read lengths of 50, 100, and 150. These parameters were selected to mirror the properties of the actual data for the E. coli data set. The simulated reads were aligned to the simulated reference, which was considered the "native" reference genome, and also to the simulated heterologous reference, in which each gene contained 35 randomly distributed SNPs across the length of each gene. Alignment and read count extraction methods were performed identically to those outlined in the methods section on the actual read data.
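The genome and read simulations rely on Simulome and ART; purely for illustration, the SNP-insertion idea (35 substitutions spread randomly over a gene) can be sketched as below. The sequence and seed are placeholders, not data from the study.

```python
import random

def add_snps(gene_seq, n_snps=35, seed=None):
    """Return a copy of gene_seq with n_snps substitutions at random distinct positions."""
    rng = random.Random(seed)
    seq = list(gene_seq)
    for pos in rng.sample(range(len(seq)), n_snps):
        seq[pos] = rng.choice([b for b in "ACGT" if b != seq[pos]])
    return "".join(seq)

# e.g. a mutated "heterologous" copy of one simulated gene
mutant = add_snps("ATG" + "ACGT" * 300, n_snps=35, seed=1)
```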
Structural comparison of genomes
The E. coli K12 strain is 4,641,652 base pairs in length and the IAI1 strain is 4,700,560 long, for a difference in length of 58,908 base pairs. 64,577 SNPs were identified across the span of both strains, making the approximate level of polymorphic sites 1.38%. The combination of indel and polymorphic differences between these genomes is 2.64%. Structural analysis shows relatively few rearrangement events and broad similarity between these genomes. V. vulnificus CMCP6 and YJ016 are 3,281,866 and 3,354,505 base pairs in length respectively, for a difference in total length of 72,639 base pairs. 46,955 SNPs were identified between the two references, making the difference between the two genomes by polymorphic sites 1.42%. Combining the total differences for polymorphisms and indel events, these strains are approximately 3.61% different from one another. Structural analysis of the V. vulnificus genomes revealed more structural changes than were observed in E. coli, although overall structural similarity is still high. 1570 orthologous regions, approximately 36% of annotated genes, were identified between V. vulnificus strains CMCP6 and YJ016. 2378 orthologous regions, approximately 55% of genes, were identified between E. coli strains K12 and IAI1. Annotation information for each strain was then applied to these orthologous regions, to determine correspondence in read counts between strains at a per-gene level.
False positive identification
By examining the results of differential expression analysis on identical read data, with the choice of reference genome being the only differential factor, any genes that are identified as being differentially expressed for the same condition can be marked as false positives caused by reference-based factors.
For example, by aligning reads from E. coli strain K12, batch condition, to both the K12 reference genome and the heterologous IAI1 reference genome, and then performing differential expression analysis, any genes that are identified as being differentially expressed can be assumed to have been incorrectly identified, as the initial read set is identical and the only differential factor is the reference genome. When examining the reciprocal condition, in which reads generated from the IAI1 strain, batch condition, are aligned to both reference genomes, another set of false positives can be identified, many of which correspond to the false positives identified previously, creating a cross-identification effect for many false positives. Fig 3 shows an example of the log-fold changes for all genes for the E. coli, strain K12, batch condition. In this case, reads generated from strain K12, batch condition, were aligned to both the native reference and the heterologous reference, strain IAI1. As the reads are identical, high correspondence is naturally expected, with variation caused only by differences in the reference genome. High correspondence such as this was observed for all conditions and read sets. This level of correspondence indicates that the assumption that these two genomes can be used interchangeably as a reference for read alignment is reasonable. Had these data shown significant deviation, it would have indicated that heterologous alignment was not appropriate. This comparison of alignment to orthologous regions should be applied in cases when a reference genome for a read set is unavailable, but a homologous alternative exists, in order to determine if alignment to the homologous reference is viable. We will return to this point in more detail in the discussion section.
This method generally identifies significantly more false positives than are identified when only a single condition is examined. This compounding of false positives is to be expected as the first method relies on aligning only one read set to two references (2 replicates x 2 alignments each), and the second method must align two read sets to two references (2 replicates x 4 alignments each). Table 2 shows a summary of all false positives identified through both methods.
Cross identification of false positives was also examined to determine if the same regions produce false positives across different read sets. Several false positives were identified from multiple read sets; however, cross identification is not necessarily always present due to naturally occurring differences in expression levels between different strains. Even though a reduction in read counts is typically associated with alignment to a heterologous genome, genes will not necessarily be identified as differentially expressed unless the log-fold change is significantly different with regard to the expected concentration of fragments as determined by dispersion of counts across the entire read set [9]. For example, if an E. coli read set from the K12 strain is aligned to both a native and a heterologous genome, and a gene is identified as a false positive through differential expression, we can be confident that read alignment for that gene is being compromised by the reference genome. While it is likely that the same gene will be reciprocally compromised in the corresponding read set from the IAI1 strain, it may or may not be identified as differentially expressed because the overall expression levels in IAI1 may be naturally different from K12, and the log-fold change may not be extreme enough to identify the gene as differentially expressed with regard to fragment dispersion for the entire IAI1 read set. For this reason, genes are considered to be false positives if they are identified in either case, though special attention was given to genes with cross identification as representative cases of false positive causes in later analysis.
Read counts for false positives tended to be significantly higher when aligned to their native genome than their heterologous counterpart. This is to be expected, as it is likely that differences in genome cause alignment failures for non-native reads. Once false positives were identified, sequence analysis was performed. Nucleotide BLAST was performed on all ortholog pairs to examine the influence of reference sequences on read alignment. For E. coli the mean number of polymorphic sites per gene was 13 between the two reference genomes. Similarly, V. vulnificus strains showed a mean of 12 SNPs per ortholog pair. In all cases, false positives contained two to three-fold increases in SNPs, with E. coli having an average of 28 SNPs per false positive and V. vulnificus having 26. This ratio was also observed with regard to gene length, with the number of SNPs to length ratio for false positives being around three times that of true positives.
Once false positives were identified through differential expression analysis, further examination of the identified genes was conducted to identify the underlying causes for false positive identification. Two specific causes are identified as the primary contributing factors: indel/duplication events and high-density SNP windows.
Indel & duplication events
One gene that was identified as a false positive of particular interest in E. coli was cusC. This gene was cross-identified for both batch and chemostat conditions for read sets generated from both the K12 and IAI1 strains. For this reason, this gene was selected as a representative sample for the explanation of duplication based false positive identification. cusC is the first gene of an operon consisting of 4 genes. Fig 4 shows an overview of the operon structure [12]. The cusC operon encodes a two-component signal transduction system that is responsive to copper ions, acting as a regulatory system to the pco operon, which provides copper resistance for E. coli [13]. The cusC gene itself is 1373 bases long and has 21 SNPs along its length between the K12 and IAI1 strains. Overall expression for this gene is generally low relative to average expression levels for each genome, and the ratio of SNPs to gene length is also approximately half that of typically identified false positives. Other genes in the operon following cusC are not identified as false positives.
Investigation into the cause of false positive identification of cusC found that incorrect expression levels are created due to an indel event between the two genomes. Fig 5 shows an example of read alignment to this operon for an identical read set as aligned to both native and heterologous genomes [14].
The indel event that can be seen between cusC and cusF between the native (IAI1) and heterologous (K12) genomes causes reads that align to the gapped area that slightly overlap cusC to be counted as expression for the cusC gene, causing a log-fold change in expression between the true and false expression levels of approximately 3.4, a highly significant difference. Examination of the overlapping region, as shown in Fig 6, shows that the reads map poorly to this region, especially in the area overlapping the cusC gene. In addition, the reads in this example were generated from the IAI1 strain and therefore cannot have produced reads in these positions, which implies that these genes are most likely the result of a duplication event and have been mapped to multiple locations. This suggests that elsewhere in the genome a region where these reads map accurately should exist. To investigate this possibility, BLAST was performed on the sequence from the indel point and found five matching positions in the K12 genome and only four in the IAI1 genome.
Each of these positions were individually inspected in both genomes and appear to be orthologous across references, with the additional matching region in the K12 strain being the observed location in cusC. In both genomes, a single case was found where these reads map perfectly, with all other cases showing similarly poor alignments to that shown in Fig 6. It is likely that a duplication occurred in E. coli, in which sequence from this section of genome, showing perfect alignment for these reads, was inserted into other parts of the genome. This specific sequence duplication has occurred in K12 in the cusC operon, but did not occur in the IAI1 strain. When alignment is performed using either the IAI1 or the K12 based reads, because the original sequence that was duplicated still exists elsewhere in both genomes and is expressed, reads from that region incorrectly map to the duplicated region in one genome and not the other, causing a false positive. Interestingly, one of the positions identified as a duplicated region for this same sequence corresponded to another gene, yhbI, which was cross identified as a false positive in all E. coli conditions and read sets. This gene shows the exact same expression profile, with a highly-expressed region of poorly mapped reads aligning near the end of the gene.
This type of improper alignment is due to how bowtie2 handles reads that map to multiple locations. When multiple sites are identified for possible alignment by bowtie2, reads can be mapped to both positions. For this reason, false positives are identified at points where small duplications have occurred within the genome and minimal divergence has occurred at the duplicated points. One possible solution to this might be to consider only uniquely mapped reads, however this would have the effect of removing all reads that map to multiple locations from all possible mapping positions, which would bias the data for the actual mapping position from the opposite direction, removing the false positive identification for cusC and yhbI, but changing the expression levels of the gene where the reads were truly expressed. This is discussed in further detail later in this study.
SNPs
The other primary cause of false positive identification between native and heterologous genomes was read loss caused by SNPs in highly concentrated windows. A majority of false positives identified for all conditions showed significantly higher proportions of polymorphic sites for false positives on average as compared to the mean level of polymorphic sites between genes for the genomes as a whole. False positives identified due to read alignment loss due to SNPs showed a two to three-fold increase in propensity of SNPs with regard to their length, while genes identified as being false positives due to indel/duplication events showed sequence correspondence more similar to average expected levels of difference.
As a representative gene for false positives due to read alignment loss by SNPs, hisD, a gene which codes for histidinol dehydrogenase, was chosen. This gene was selected because it showed very uniform coverage in the native reference genome and because SNPs were distributed widely in various concentrations across the length of the heterologous reference, which makes read loss more visually apparent. Fig 7 shows read alignment for hisD from the E.coli batch condition, with the K12 strain being the native reference and the IAI1 strain being the heterologous reference. This gene has 55 SNPs between the native and heterologous genome across a length of 1305 bases. Read loss can be observed particularly in regions of high SNP density, where read alignment becomes increasingly more difficult due to differences in the reference sequences. The two flanking genes, hisG and hisC also show some moderate read loss, but these genes are not identified as false positives because the read loss is less severe and doesn't cause a significant enough log-fold change to trigger differential expression flagging. Other genes identified as false positives show similar read loss when windows of high-density SNPs are present, with some cases having very distinct windows of loss and otherwise similar coverage between genomes, and still others showing staggered read loss across the gene, as was shown here in the case of hisD.
The type of read loss observed between native and heterologous genomes due to SNPs might be reduced either by relaxing read alignment parameters so that reads can be aligned when higher levels of SNPs are present, or through the application of sequencing technologies that produce longer reads. One potential problem of approaching this issue by adjusting alignment parameters is that reads may map to incorrect regions with higher propensity, further biasing the read set. This problem would be further compounded if only uniquely mapped reads were used, as the proportion of reads that map to multiple locations would necessarily increase as alignment parameters become less restrictive.
Simulation of mapping at different read lengths
A majority of the false positives identified were caused by read loss in regions with high levels of SNPs. In order to examine if this effect can be mitigated by read length, several simulations were performed.
The simulations showed substantial improvement in accurate alignments between native and heterologous reference genomes as read length increases. This relationship can be seen in Fig 8. Reads of length 50 performed the most poorly in simulations, with all genes showing read loss when aligned to the heterologous reference genome, and those with lower expression levels showing the greatest log-fold changes. Reads aligned to the heterologous genome with length 50 had 19.77% read loss overall. Reads of length 100 performed significantly better, with log-fold changes in read alignment being substantially closer to expected values and becoming increasingly reliable as expression levels increase. These reads showed a significantly better alignment, with a read loss of 8.33%. Reads of 150 in length showed the best performance among simulations, with higher accuracy for all reads over the entire range of expression levels and complete accuracy being reached at lower expression levels than the 50 and 100 read length simulations. Alignment here was again the best of the simulations, with a read loss of only 3.09%. Fig 9 shows an overview of log-fold differences between alignment to native and heterologous genomes for the three simulated read lengths. Overall these simulations show that heterologous reference use is more reliable with longer read lengths, and that the expected number of false positives caused by polymorphisms will be reduced as read length increases.
Fig 9. Log-fold differences in native vs heterologous alignment for different read lengths. Shorter reads handle mapping more poorly and are subject to significant bias in non-native alignments, while bias is minimized with increasing read length.
Robustness of analysis with varying alignment sensitivity and read depth parameters
An additional variation of this simulation was performed using bowtie2's "--very-sensitive" alignment parameter. The use of this argument increased overall alignment in all cases, reducing read loss to 13.65% for the 50 read length simulation, 4.00% for the 100 read length simulation, and to just 1.41% for the 150 read length simulation. This is a modest improvement over the standard alignment parameters and can be seen in Fig 10 as each curve becoming slightly tighter and approaching accurate read levels across native and heterologous genomes from slightly lower expression levels. The lower range of expression values, however, is not influenced strongly enough for this method to mitigate false positives completely; while it does have value and should be considered for native and heterologous alignment issues, the stronger influence appears to come from increases in read length. Next, the effect of read depth was examined. In our sample data, E. coli had an average read coverage of 150x, while V. vulnificus had a much greater read depth of around 600x. Simulations of these conditions show that increasing read depth has little influence, simply compressing the range of depth toward the higher end, and maintaining similar ratios of log-fold differences between native and heterologous genomes. The results of this simulation can be seen in Fig 11.
Treatment of multiple and unique mapping positions
Some false positives were caused between native and heterologous genomes when insertion events that copied short segments into or adjacent to coding regions were present, causing reads from other regions of the genome to incorrectly map to some genes in the non-native genome. One possible approach to removing these false positives is to consider only reads that uniquely map to a single position in the genome as valid reads. This would mean that reads that map incorrectly would not be included in read counts, but also that those reads would not correctly map to their proper location. If the correct mapping location for these reads, however, was orthologous between the native and heterologous genomes, the bias introduced from removing these reads should be roughly the same, with the effect of eliminating false positives while maintaining a true ratio of gene expression for genes containing the correct mapping position.
To simulate the effects of this approach, Simulome was used to create a 500-gene simulated reference genome and a mutated variant with insertion events 100 bases in length, which were copied from other random positions in the original reference. This means that in each gene in the variant genome, an insertion of 100 base pairs exists that also has a correct mapping location elsewhere in the genome. Read data was created using ART for the simulated reference genome based on ART's Illumina HiSeq 2500 model, with simulated fold coverage of 150. Fig 12 shows the performance of read alignment for the native and heterologous genomes with ambiguously mapping positions included and with only uniquely mapped positions. The condition in which multiple mapping locations were included performed much better, with most genes showing appropriate levels of read alignment across the native and heterologous genomes. This scenario did show several genes that would likely be identified as false positives, which can be seen as being more highly expressed in the heterologous genome. These genes were not present as false positives in the case of unique mapping and returned to a more appropriate read alignment ratio between the native and heterologous genomes, but the level of read alignment for the heterologous genome is reduced significantly overall, introducing a bias that is far more extreme than the problem it solves.
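For readers who want to test the uniquely-mapped-only strategy on their own data, a rough pysam sketch is given below; it is not the authors' code, and the use of Bowtie2's AS/XS tags as a multi-mapping signal (an XS score equal to or better than AS indicating an equally good alternative alignment) is a heuristic assumption.

```python
import pysam

def split_by_uniqueness(bam_path, uniq_out, multi_out):
    """Write uniquely mapped and ambiguously mapped Bowtie2 alignments to separate BAM files."""
    with pysam.AlignmentFile(bam_path, "rb") as bam, \
         pysam.AlignmentFile(uniq_out, "wb", template=bam) as uniq, \
         pysam.AlignmentFile(multi_out, "wb", template=bam) as multi:
        for read in bam:
            if read.is_unmapped:
                continue
            # XS holds the score of the best alternative alignment when one exists
            ambiguous = read.has_tag("XS") and read.get_tag("XS") >= read.get_tag("AS")
            (multi if ambiguous else uniq).write(read)
```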
Discussion
The use of non-native reference genomes relies on the distance between the native and heterologous genomes and the development of high-integrity data that can overcome the naturally occurring differences between those genomes. Several factors should be taken into account by researchers intending to use non-native references for alignment of read sets. The first step that must always be taken is to identify correspondence between orthologous regions. For example, a researcher with a read set with no complete native reference genome available but a potential heterologous genome available for read alignment should first investigate if the heterologous genome is viable for alignment. To do this, de novo assembly of reads into contigs, followed by ortholog identification using BLAST, should be performed. Then, by extracting read counts for orthologous regions, correspondence can be examined as shown in Fig 3.
Fig 12. Simulation of the reads with ambiguously mapping inserts. Log-fold change in read alignment for native and heterologous genomes shows that less bias is present when Bowtie2 determines alignment for reads with ambiguous mapping positions. (https://doi.org/10.1371/journal.pone.0180904.g012)
It is important to mention that the parameters of ortholog identification here are highly relevant. In this study, we performed ortholog identification using very strict parameters (95% identity, length > 200bp, max mismatch = 5) and were able to identify a large subset of orthologous regions with high confidence. By relaxing these ortholog identification criteria, undoubtedly a larger subset of orthologous genes could be identified; however, the false positive rate would also correspondingly increase as more mismatching regions are included. A researcher intent on using a non-native reference genome for alignment should then properly tune their BLAST ortholog identification parameters to maximize the number of orthologs they can identify between their read set and the non-native reference genome, while confirming viability by monitoring the correspondence of a single read set as aligned to both the non-native reference and their de novo assembled contig sets. That is, as long as an identical read set produces strong correspondence when aligned to the native and non-native alignment target, such as that seen in Fig 3 of this study, comparison between those orthologous regions can be considered viable. If that alignment instead becomes increasingly dispersed, the strictness of BLAST parameters for ortholog mapping should be increased. Once the researcher has determined an appropriate level of ortholog identification, additional investigation can be performed, if desired, to further eliminate false positive outliers.
In this study, we have observed that data sets with short reads are particularly vulnerable to false positives when a heterologous genome is used for read alignment, even with very strict correspondence between orthologous regions. Most false positives originate from regions with a high frequency of polymorphic sites, with a few false positives being caused by other mutation events. Our E. coli samples, which used 50-base reads, contained several false positives that we were able to identify and subsequently analyze to gain insight into the underlying causes of incorrect information that must be considered when working with non-native reference genomes. By contrast, our V. vulnificus data set showed that by using longer reads with more depth, false positives can be largely avoided, with almost no false positives at all between native and heterologous genomes. With this being the case, researchers using non-native reference genomes should be aware of these issues and take appropriate precautions in their analyses by confirming both proper identification of orthologous regions and the use of longer reads to mitigate incorrect alignments that result in false positives for heterologous alignment.
Additional accuracy can be achieved when necessary by researchers using heterologous reference genomes. By performing BLAST analysis between read sets and the reference genome to be used, potential false positives can be identified by searching for those reads which align with the highest ratio of polymorphic positions. While increasing read length is certainly the best way to avoid false positives and incorrect information when using a heterologous genome for read alignment, it is likely that as the distance between read data and reference genome increases, the improvement observed through the use of longer reads would degrade. In general, if it is known that a heterologous genome will be used in a study, longer reads should be generated whenever possible. Additional improvements can be made by adjusting read alignment parameters, but these tend to be fairly modest, with false positives still being likely to occur even with strict alignment parameters. Overall the level of false positives in both cases is low, with the false positive discovery rate compounding as more complex comparisons involving more alignments of data are performed. Using a non-native reference genome for research seems to be a safe endeavor in general if genetic distances between native and non-native conditions are not excessively large; however, for very sensitive experiments the use of heterologous reference genomes should be approached with caution, as it is possible that some important genes may be subject to bias without the benefit of a native reference genome.
In this study, we have examined the problems that can arise from the use of non-native, heterologous genomes as references for RNA-Seq read alignment. We have described a method for identifying false positives, outlined the underlying causes, and suggested a set of best practices for studies that use non-native reference genomes, that will allow researchers to make informed decisions about how they handle their data analyses. The analysis workflows described in this study can potentially be applied to novel data sets to help investigators estimate whether it is a safe assumption to use a common reference genome-either for ease of analysis, or because complete reference genomes for all species or strains in the study are not yet available. In the case where partial genomic information is available, reciprocal mapping analysis can be applied to orthologous genes in the unambiguously alignable portions. These regions can be analyzed to determine the level of correspondence of results between alternate mappings, and to identify the fraction of potential false positives in the analyzable subset of the data. While this will not provide a complete reciprocal analysis, it does provide a quantitative basis by which to justify use of a heterologous common reference for multiple strains, or potentially to justify the expense of finishing additional strain genomes to provide a more accurate reference if available genomes are not sufficient. | 8,782.2 | 2017-07-11T00:00:00.000 | [
"Biology"
] |
DPP-4 inhibitors may improve the mortality of coronavirus disease 2019: A meta-analysis
Aims DPP-4 inhibitors are predicted to exert a protective effect on the progression of coronavirus disease 2019 (COVID-19). We conducted this meta-analysis to investigate this hypothesis. Methods Four databases, namely, PubMed, Web of Science, EMBASE and the Cochrane Library, were used to identify studies on DPP-4 and COVID-19. The outcome indicator was the mortality of COVID-19. Funnel plots, Begg’s tests and Egger’s tests were used to assess publication bias. Results Four articles were included with a total of 1933 patients with COVID-19 and type 2 diabetes. The use of DPP-4 inhibitors was negatively associated with the risk of mortality (odds ratio (OR) = 0.58; 95% confidence interval (CI), 0.34–0.99). Conclusions DPP-4 inhibitors may improve the mortality of patients with COVID-19 and type 2 diabetes. As few relevant studies are available, more large-scale studies need to be performed.
Introduction
A global pandemic of coronavirus disease 2019 (COVID-19) began in 2020. COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. The widespread COVID-19 pandemic is reminiscent of two past epidemics of respiratory diseases caused by coronaviruses, the severe acute respiratory syndrome (SARS) epidemic in 2002 [2] and the Middle East respiratory syndrome (MERS) epidemic in 2012 [3]. The three major infectious respiratory diseases caused by coronaviruses that have caused epidemics in the 21st century are SARS, MERS and COVID-19. Because SARS-CoV and MERS-CoV enter and infect cells via dipeptidyl peptidase-4 (DPP-4) [4,5], SARS-CoV-2 may also enter cells by binding to DPP-4. However, recent studies have shown that the SARS-CoV-2 spike protein does not interact with human membrane-bound DPP-4 (CD26) [6,7]. Although DPP-4 does not function as the receptor in SARS-CoV-2 infections, DPP-4 inhibitors (DPP-4is), a newer class of oral therapies for type 2 diabetes (T2DM) characterized by weight neutrality and few adverse effects, are now used to improve insulin secretion in the treatment of T2DM [8], and researchers have speculated on whether DPP-4is play a role in protecting against COVID-19 and whether they could serve as therapeutic drugs to improve outcomes in patients with COVID-19 and T2DM [9,10]. An increasing number of studies have shown that T2DM is the comorbidity with the strongest negative effect on the prognosis of patients with COVID-19. Patients with T2DM who contract COVID-19 have a higher mortality rate and are more likely to develop severe COVID-19 [11,12]. The collision of these two major global epidemics suggests that the correct use of anti-diabetic agents is an urgent issue that must be addressed. As DPP-4is are commonly used hypoglycemic agents, the relationship between DPP-4i use and COVID-19 has attracted increasing attention. We therefore conducted this meta-analysis to determine whether DPP-4is exert a protective effect on COVID-19 mortality.
Although recent observational studies have described the relationship between the use of DPP-4is and COVID-19 [13,14], no meta-analysis has been performed to synthesize this evidence. The purpose of this article was to systematically describe the relationship between the use of DPP-4is and the mortality of COVID-19 and provide evidence that can be used to guide the treatment of patients with diabetes during the COVID-19 pandemic.
Methods
This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines, as described previously [15].
Selection criteria
Two reviewers (YY and ZC) independently reviewed all the eligible studies and selected those suitable for inclusion. Disagreements were settled by reaching a consensus or with the help of a third reviewer (JZ). All the articles included in this meta-analysis met the following criteria: (1) they contained information on DPP-4is and the outcomes of COVID-19, including mortality and the development of severe COVID-19; and (2) the subjects were patients with both COVID-19 and T2DM. Articles were excluded if they met the following criteria: (1) they lacked information or data necessary for the purpose of this meta-analysis and (2) they were published as letters, reviews, editorials, or conference abstracts.
Data extraction
All relevant articles were imported into EndNote X9 software and reviewed independently by two authors (YY and ZC). Discrepancies between authors were settled with the help of a third reviewer (JZ). The following information was extracted from the selected studies by two independent investigators: author, year, country, type of study, age, sample size, population and COVID-19 outcomes. All the extracted data were then imported into Excel.
Quality assessment of included studies
The quality of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) [16].
We assessed the quality of all relevant studies based on the type of study, sample size, participant selection, representativeness of the sample, adequacy of follow-up, comparability (exposed-unexposed or case-control), and method of ascertaining cases and controls. A study with a score of 6 or more was defined as a high-quality study. The possible range of NOS scores is 0 to 9; studies scoring at least 7 have the lowest risk of bias, those scoring 4-6 are assigned a modest risk of bias, and those scoring <3 have the highest risk of bias.
Statistical analysis
All analyses were performed using Stata software (version 13.0). The correlations between DPP-4is and adverse outcomes were reported as the pooled odds ratios (ORs) and 95% confidence intervals (CIs). ORs > 1 represented a direct association, and those < 1 represented an inverse association. All results of the included studies were analyzed with random-effects models. I² statistics were used to assess the degree of heterogeneity: 25%, 50%, and 75% represented low, moderate, and high degrees of heterogeneity, respectively. Begg's and Egger's tests and funnel plots were used to detect potential publication bias, with a p-value <0.05 suggesting the presence of bias. The trim-and-fill method was also used to obtain an adjusted effect size when publication bias was detected.
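The analyses in this study were run in Stata; for illustration only, the random-effects pooling of odds ratios (DerSimonian-Laird) can be sketched in a few lines of Python, with the standard errors recovered from the reported 95% CIs. The function and variable names are placeholders, not the study's code.

```python
import numpy as np

def dersimonian_laird(ors, lowers, uppers, z=1.96):
    """Pooled OR, its 95% CI and I^2 from per-study odds ratios and their 95% CIs."""
    y = np.log(ors)                                   # per-study log odds ratios
    se = (np.log(uppers) - np.log(lowers)) / (2 * z)  # SE recovered from the CI width
    w = 1.0 / se**2                                   # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
    w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(pooled), np.exp(pooled - z * se_pooled), np.exp(pooled + z * se_pooled), i2
```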
Search results and study characteristics
The flowchart of the study selection process is shown in Fig 1. After a preliminary search of the selected electronic databases, 475 studies were identified. Then, 119 duplicates were eliminated. After further excluding 329 studies based on their titles and abstracts, 27 articles remained. Of those 27 articles, 23 were excluded after the full text was read for the following reasons: (1) insufficient participant information was provided (n = 12); (2) the original data regarding DPP-4i use were not provided (n = 7); and (3) the outcome of COVID-19 was the severity instead of the mortality of the disease (n = 4). Finally, 4 articles related to the use of DPP-4is and COVID-19 were included in this meta-analysis. The basic characteristics of the studies are shown in Table 1. Among the 4 studies included in this analysis, 1 was performed in France and 3 were performed in Italy (Table 1). All 4 included studies were published in 2020.
Quality assessment
The NOS mainly consists of three parts: sample selection, comparability of cases and controls, and exposure. All four included studies had NOS scores higher than 8, indicating no risk of bias in our analysis. Details of the risk of bias assessment are described in Table 2.
DPP-4i use and COVID-19 mortality
The results of the meta-analysis of the use of DPP-4is and the mortality of COVID-19 are shown in Fig 2. In general, the use of DPP-4is was associated with decreased mortality due to COVID-19 (OR = 0.58; 95% CI, 0.34-0.99). No significant heterogeneity was observed (I² = 51.1%, p = 0.105) (Fig 2). The results of Egger's and Begg's tests (p>0.05) and an inspection of the funnel plots showed that publication bias did not exist among the studies (Fig 3). A sensitivity analysis was conducted by omitting one study at a time and showed that the results were stable (Fig 4).
Discussion
Because the effects of DPP-4is on COVID-19 are inconsistent and vague, we conducted this meta-analysis to determine whether DPP-4is could be a treatment for patients with COVID-19 and diabetes. Our meta-analysis may support the hypothesis that the use of DPP-4is may exert a protective effect on COVID-19. Our findings showed that the use of DPP-4is is associated with decreased mortality due to COVID-19 or the risk of progression to mortality (Fig 2).
Mechanism underlying the relationship between DPP-4is and COVID-19
The mechanisms underlying the effect of DPP-4is on the outcomes of COVID-19 are not clear, but several mechanisms may provide some insights. First, DPP-4is, including sitagliptin, alogliptin, vildagliptin, saxagliptin and linagliptin, are drugs that are widely used to treat diabetes and approved by the Food and Drug Administration [17,18]. Compared with insulin alone, DPP-4is combined with insulin effectively control blood glucose levels, and their effectiveness and safety are guaranteed [19,20]; additionally, good glucose control can improve the prognosis and outcome of COVID-19 [21]. Therefore, the use of DPP-4is may also exert a beneficial effect on controlling glucose homeostasis in patients with COVID-19 and diabetes.
DPP-4 inhibitors and COVID-19 mortality
Second, DPP-4 has been suggested to be involved in the process of various inflammatory diseases [22,23]. DPP-4 itself directly promotes T cell proliferation, CD86 expression, activation of the NF-κB signaling pathway and excessive production of inflammatory cytokines, leading to an inflammatory imbalance [9,24], while severe cases of COVID-19 are characterized by excessive inflammation and the substantial production of pro-inflammatory factors [25,26]. Moreover, sitagliptin appears to have efficacy against acute respiratory distress syndrome (ARDS), a common cause of COVID-19-related death, because this drug inhibits IL-6, IL-1, and TNF in individuals with lung injury [27]. In addition, DPP-4is also exert a direct anti-inflammatory effect on the lungs [27,28]. Therefore, we postulate that DPP-4is themselves play a role in the treatment of COVID-19.
Additionally, glucagon-like peptide 1 (GLP-1), a gut-derived incretin, is secreted after a meal to promote insulin secretion and inhibit glucagon secretion, while GLP-1 not only plays a role in glucose control but also possesses anti-inflammatory properties [29,30]. Since DPP-4 in the circulation degrades GLP-1 rapidly to maintain glucose homeostasis, the use of DPP-4is may promote the anti-inflammatory effect of GLP-1 and indirectly achieve the purpose of inhibiting inflammation in patients with COVID-19. Although DPP-4 degrades GLP-1 and GLP-1 exerts an anti-inflammatory effect, researchers have not determined whether DPP-4is also inhibit the degradation of GLP-1 in patients with COVID-19. The anti-inflammatory effect is only our conjecture at present, and further proof is needed.
Last but not least, DPP-4 levels are significantly increased in the blood of patients with obesity and obesity-induced metabolic syndrome [31][32][33], and these comorbidities can aggravate the outcome of patients with COVID-19 [34,35]. Therefore, studies aiming to determine whether DPP-4is are useful medicines to treat COVID-19 are important.
Theoretical and practical significance
DPP-4is are hypoglycemic agents commonly used to treat diabetes, but researchers have not clearly determined whether DPP-4is can continue to be used after a patient contracts COVID-19. For the first time, our study systematically analyzed the effect of DPP-4i use on the mortality of COVID-19 in patients with both T2DM and COVID-19 and found that the use of DPP-4is may exert a protective effect against death due to COVID-19. Our research may provide guidance for the treatment of patients with T2DM during the COVID-19 pandemic. Moreover, the relationship between DPP-4 and COVID-19 requires further research.
Limitations of the study
First, because few randomized controlled studies, case-control studies, and cohort studies on the relationship between DPP-4is and COVID-19 have been performed, the sample size in this meta-analysis was too small and the conclusions are not convincing. Second, as many types of DPP-4is are available, more data are needed to confirm the relationship between the use of DPP-4is and the outcomes of COVID-19; more clinical studies need to be performed.
Conclusions
In summary, the use of DPP-4is may ameliorate the progression of COVID-19 or the mortality due to COVID-19; this information may help guide the treatment of patients with T2DM and COVID-19, but more well-designed research is urgently needed to support or refute our results. | 2,749.6 | 2021-05-20T00:00:00.000 | [
"Medicine",
"Biology"
] |
Modified adaptive support weight for stereo matching
Stereo matching using local algorithms has become very popular in recent years. Adaptive support weight algorithms can give high-accuracy results comparable to global methods. This paper proposes a support aggregation approach for stereo matching that computes support weights over a sparse support window mask. The improvement over the previous work is that the new support weight reduces computation time. The results show that the computation time decreases to approximately half to a quarter of the earlier work's time, without a significant difference in bad-pixel percentage, and helps to reach the optimum correspondence. This means that the sparse support weight affects the computation time needed in stereo matching and optimizes the disparity. This support weight is used to carry out the stereo matching evaluation with this method. The proposed approach is more reliable than the previous approach in real implementations.
Introduction
Research in stereo vision has been an active topic for several years and has again become very popular in recent years. In stereo vision, the stereo estimation problem is the task of computing a disparity map from two or more images of a scene. Numerous algorithms have been produced to estimate disparity, and a classification and evaluation of these algorithms is presented in [1], organized by matching cost, disparity optimization, and disparity refinement phases. The main characteristic of a method is typically its optimization approach, i.e., the second step, so methods are classified as local, semi-global, global, and cooperative [2].
In local approaches [1][2][3][4], a "winner-take-all" optimization is used to obtain the disparity map by evaluating each candidate separately. The matching cost is aggregated through averaging or summation over a support area, and the disparity giving the smallest cost is assigned to the corresponding pixel. Local optimizations are simpler than global optimization algorithms such as graph cuts and belief propagation [5,6], which are more complicated and more accurate; these global methods are formulated as an energy optimization over the whole image, with the aim of minimizing the estimated stereo correspondence energy. Semi-global algorithms [1][2][3], such as dynamic programming optimization, attempt to reduce the computational complexity of global methods, since the optimization is conducted for every scan line (row) individually in polynomial time.
Other researchers have combined the benefits of global and local methods for handling occlusions and object boundaries, where un-textured areas and depth discontinuities are observed, using cooperative methods [7,8]. These methods, like area-based global methods, depend on the assumption that scenes consist of non-overlapping planar pieces, each of which corresponds to a cluster of color-wise similar pixels. The primary requirements of stereo matching algorithms are fast evaluation time, low memory demand, and a robust disparity map, particularly in consumer electronics devices. Furthermore, several applications require stereo estimation methods with low computational complexity, low precision loss, and low memory use for the extracted 3D models, for example in robotic applications, surveillance systems, and future-generation 3D TVs [14][15]. In that regard, the strongest candidates for fast disparity evaluation are local window-based approaches [5,[15][16][17][18]. Local methods normally do not involve iterative work, which gives fast and simple implementations; they do not consume the full cost volume and, compared to other approaches, they need less memory. Therefore, these methods are suitable for real-time applications under appropriate conditions. Yoon introduced adaptive support weight stereo matching (ADSW) in 2006 [3], with results that outperform traditional local methods. Because of these advantages, many stereo matching algorithms have been developed based on ADSW [19][20].
This work proposes a fast computational stereo estimation method based on ADSW aggregation. The aim is to optimize the support weight computation in adaptive support-weight methods. The support weight is evaluated faster using a sparse support window mask, and the proposed method provides a further reduction in evaluation time. This work contributes to the improvement of stereo matching, particularly in speeding up the computation.
Overview of Algorithm
The method consists of several main steps: determining the distance weighting, evaluating the cost aggregation, selecting the disparity for the left-to-right and right-to-left directions, and cross-checking both results with a right-to-left check.
A Gaussian function of the color distance between the midpoint pixel c of the window and a pixel i in the support window defines the original color weight. The pixel i has a color weight w_ic with respect to pixel c, computed from the color distance between the two pixels and a constant. This weighting favors pixels with a color similar to the midpoint pixel, so they have more influence on the final matching cost. In this paper, the HSV color space is used instead of the Lab color space assigned in ADSW, and the color distance is calculated using the Manhattan rather than the Euclidean distance to minimize calculation. The mask of the support window used for weighting is kept as large and as symmetric as possible, but it uses only alternate pixels in each row and each column of the support window, as shown in Fig. 1 for a 7 × 7 mask. The aggregated dissimilarity E(x, x_d) between pixel x and its correspondence x_d is expressed as

$$E(x, x_d) = \frac{\sum_{y \in W(x)} w(x, y)\, w(x_d, y_d)\, e(y, y_d)}{\sum_{y \in W(x)} w(x, y)\, w(x_d, y_d)} \quad (2)$$

where x and y in the reference image have a disparity value d, and x_d and y_d are the corresponding pixels in the target image.
Here e(y, y_d) expresses the raw matching cost based on the colors of pixels y and y_d. When the truncated AD (absolute difference) is used, it is expressed as

$$e(y, y_d) = \min\left(\sum_{c} \left| I_c(y) - I_c(y_d) \right|,\; T\right)$$

where I_c is the intensity of color channel c and T is the truncation limit that controls the matching cost. After the cost computation, the disparity of every pixel is simply selected using the Winner-Takes-All (WTA) method, without any global calculation, as

$$d_x = \arg\min_{d \in S_d} E(x, x_d)$$

where S_d = {d_min, …, d_max} is the set of all possible disparities.
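To make the weighting, aggregation, and WTA steps concrete, the sketch below implements the described scheme in Python/NumPy. It is only an illustration, not the authors' implementation: the window size, the weighting constant gamma, the truncation limit T, and the disparity range are placeholder values, and the inputs are assumed to be images already converted to HSV arrays of shape (H, W, 3).

```python
import numpy as np
from itertools import product

def sparse_offsets(win=7):
    """Offsets of the sparse mask: only alternate rows/columns of the window."""
    r = win // 2
    idx = range(-r, r + 1, 2)            # alternate pixels, centre included
    return [(dy, dx) for dy, dx in product(idx, idx)]

def color_weight(center, pixel, gamma=10.0):
    """Gaussian weight of the Manhattan (L1) HSV colour distance."""
    d = np.abs(center.astype(float) - pixel.astype(float)).sum()
    return np.exp(-d / gamma)

def truncated_ad(p, q, T=40.0):
    """Truncated absolute-difference matching cost between two HSV pixels."""
    return min(np.abs(p.astype(float) - q.astype(float)).sum(), T)

def wta_disparity(left_hsv, right_hsv, d_max=15, win=7, gamma=10.0, T=40.0):
    """Winner-takes-all disparity map with the left image as reference."""
    h, w, _ = left_hsv.shape
    r = win // 2
    offsets = sparse_offsets(win)
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + d_max, w - r):
            best_cost, best_d = np.inf, 0
            for d in range(d_max + 1):
                num = den = 0.0
                for dy, dx in offsets:
                    yl, xl = y + dy, x + dx              # support pixel (reference)
                    xr = xl - d                          # corresponding pixel (target)
                    wl = color_weight(left_hsv[y, x], left_hsv[yl, xl], gamma)
                    wr = color_weight(right_hsv[y, x - d], right_hsv[yl, xr], gamma)
                    num += wl * wr * truncated_ad(left_hsv[yl, xl], right_hsv[yl, xr], T)
                    den += wl * wr
                cost = num / den
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```

Note that for a 7 × 7 window the sparse mask keeps only 16 of the 49 pixels, which is where the computational saving comes from.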
Results and Discussion
This paper used the Tsukuba, Teddy, and Venus image pairs as test images. The full support weight requires more calculation than a sparse window because it evaluates all pixels in the support window. Support weight computation for an n × n full support window requires evaluating n × n pixels, whereas the sparse support window requires only [(n-1)/2 + 1] × [(n-1)/2 + 1] pixels. According to the weight formula, this saves many calculations because not every pixel is evaluated. Computation time also decreases because the number of evaluated pixels is likewise reduced in the weight computation. Both considerations indicate that the whole stereo matching process using the sparse window mask is faster than the previous method proposed in [5]. Furthermore, the matching process is performed twice: first with the left image kept as the reference; then the process is reversed, and the right image of the stereo pair constitutes the reference image. The two disparity maps are then verified by cross-checking, also called left-right checking (LRC), as sketched below. Fig. 5 shows the disparity results using 33 × 33 sparse support windows: the LR, RL, and RLC disparity maps. The disparity estimation performance was evaluated by comparison with the ground-truth disparity. Table 1 shows the execution time and bad pixels for disparity computation for several window sizes and support types; the execution time and bad-pixel percentage can be used to select the optimum window size.
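The following minimal sketch illustrates the cross-checking (LRC) step described above: pixels whose left-to-right and right-to-left disparities disagree are marked invalid. The tolerance of one disparity level and the invalid-pixel marker are assumptions for illustration; the paper does not specify them.

```python
import numpy as np

def left_right_check(disp_lr, disp_rl, tol=1, invalid=-1):
    """Mark pixels whose L->R and R->L disparities disagree as invalid."""
    h, w = disp_lr.shape
    checked = disp_lr.copy()
    for y in range(h):
        for x in range(w):
            d = int(disp_lr[y, x])
            xr = x - d                       # matching pixel in the right image
            if xr < 0 or abs(d - int(disp_rl[y, xr])) > tol:
                checked[y, x] = invalid      # occluded or mismatched pixel
    return checked
```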
The table shows the execution time and error for disparity computation using 25 × 25, 29 × 29, 33 × 33, and 37 × 37 window sizes and support types. The 25 × 25 and 29 × 29 full support windows give 8081.5 and 9404.2 seconds of execution time and 30.86% and 29.56% bad pixels, respectively. The 25 × 25 and 29 × 29 sparse support windows give 2428.2 and 3750.3 seconds of execution time and 34.14% and 19.52% bad pixels, in that order. For the 25 × 25 size, the full support window takes 8081.5 seconds while the sparse support window takes 2428.2 seconds. The ratio of processing times is approximately 3.3:1, while the error ratio is 1:1.33; thus, the approach cuts processing time by roughly a factor of three without significantly reducing quality.
The computer could not perform the stereo matching evaluation with the 33 and 37 window sizes using the full support window method because of memory limitations, so those results are not presented in Table 1. With the sparse support window, the calculation can still be completed and even yields better results, with lower bad-pixel percentages. The 33 × 33 sparse support window gives 4850.5 seconds of execution time and 18.56% bad pixels; the 37 × 37 sparse support window gives 6587.1 seconds and 17.88% bad pixels. This highlights the limitation of the full support window method, which consumes considerable memory and computation. A disparity estimation process that requires a very large number of calculations results in long, slow executions. Hence, the sparse support window can perform the stereo matching with lower error.
The results still show a high bad-pixel percentage because occlusion handling and refinement, the usual post-processing steps, were not performed in this paper. The aim of this work is to contribute an efficient computation for local stereo matching based on support weight aggregation schemes. The disparity estimation has not yet reached the expected quality; thus, further reduction of execution time and improvement of quality are needed.
Conclusion
This paper has proposed a new support aggregation approach for stereo matching that computes support weights over a sparse support window mask. The improvement over previous work is that the new support weight mask reduces computation time while maintaining performance. In other words, the sparse support weight improves on the full-mask support weight. Occlusion handling and post-processing filters can be explored in future work to bring the disparity map closer to the ground truth and further reduce bad pixels.
Fig. 1 .
Fig. 1. Support weight window mask. The shaded squares are the pixels used in the support weight calculation in the sparse and full masks. This paper shows that the sparse mask can reduce processing time and obtain results similar to the full mask.
Fig. 2 .
Fig. 2. Tsukuba, Teddy and Venus stereo images: Left image, right Image and ground truth
Fig. 2
Fig. 2 shows the images: (a) left image, (b) right image, and (c) ground truth. Fig. 3 shows the disparity results for Tsukuba using both full support weight windows (a) and sparse support weight windows. Fig. 4 shows, for Tsukuba with T = 40: (a) 25 × 25 full support weight windows computed from left to right, (b) 25 × 25 full support weight windows computed from right to left, (c) 25 × 25 sparse support weight windows computed from left to right, and (d) 25 × 25 sparse support weight windows computed from right to left.
Fig. 3 .
Fig. 3. Dense disparity of Tsukuba using: (a) 25 x 25 full support weight windows, computed from left to right, (b) 25 x 25 full support weight windows, computed from right to left, (c) 25 x 25 sparse support weight windows, computed from left to right, (d) 25 x 25 sparse support weight windows, computed from right to left.
Fig. 5 .
Fig. 5. Disparity map left to right (L to R), right to left (R to L) and disparity after right to left check (RLC) of Tsukuba, Teddy and Venus
Table 1 .
Execution time and error for disparity computation | 2,627.8 | 2017-06-01T00:00:00.000 | [
"Computer Science",
"Physics"
] |
Impedimetric Detection of Ammonia and Low Molecular Weight Amines in the Gas Phase with Covalent Organic Frameworks
Two Covalent Organic Frameworks (COFs), named TFP-BZ and TFP-DMBZ, were synthesized through the imine condensation of 1,3,5-triformylphloroglucinol (TFP) with benzidine (BZ) or 3,3-dimethylbenzidine (DMBZ). These materials were deposited as films over interdigitated electrodes (IDEs) by chemical bath deposition, giving rise to the TFP-BZ-IDE and TFP-DMBZ-IDE systems. The synthesized COF powders were characterized by Powder X-Ray Diffraction (PXRD), Fourier Transform Infrared spectroscopy (FT-IR), solid-state Nuclear Magnetic Resonance (ssNMR), nitrogen adsorption isotherms, Scanning Electron Microscopy (SEM), and Raman spectroscopy, while the films were characterized by SEM and Raman. Ammonia and low molecular weight amine sensing was performed with the COF film systems using electrochemical impedance spectroscopy (EIS). The results showed that the TFP-BZ-IDE and TFP-DMBZ-IDE systems detect low molecular weight amines selectively by impedimetric analysis, remarkably with no significant interference from other atmospheric gases such as nitrogen, carbon dioxide, and methane. Additionally, both COF films were sensitive to low amine concentrations, below two ppm, at room temperature.
Introduction
Our ecosystem and human activities release a great variety of gaseous compounds into the atmosphere, including greenhouse gases (carbon dioxide, methane, and ozone, among others). Moreover, harmful gases such as nitrogen oxides and ammonia, and even some low molecular weight amines, can be dangerous for human health above specific low concentrations [1][2][3]. Notably, the ammonia concentration in the atmosphere has increased because this compound is used indiscriminately as an essential raw material in multiple chemical industries; as a result, the global emission level of NH3 has doubled in the last 40 years (50 million tons in recent years) [4]. High levels of ammonia present a severe threat to the sustainability of our ecosystem and to human health [5]. Therefore, detecting ammonia is vital for controlling the over-release/production of NH3 in environmental sectors as well as in medical diagnostics [6].
It is well known that detecting ammonia sensitively and selectively is a challenge. Up to now, several techniques have been used in commercial ammonia detectors, among them metal-oxide gas sensors [7][8][9][10][11], conducting polymer analyzers, and optical detection methods [12][13][14][15][16][17]. However, these strategies exhibit disadvantages, such as the need for high temperatures, irreversible reactions causing poisoning, and a lack of selectivity or sensitivity. Nonetheless, to improve gas sensing properties, some researchers have developed new approaches that increase the specific surface area or use composite films [18]. In this context, chemosensors synthesized on interdigitated electrodes (IDEs) have been used frequently. IDEs increase the contact area between the sensing material and the electrodes [19]. In addition, IDEs facilitate the measurement of multiple electrochemical properties, such as capacitance, conductivity/resistance, and impedance, without sophisticated electronics [19,20]. Impedimetric gas sensors have lately received considerable interest owing to the increasing demand for monitoring harmful gases and because of their good selectivity, fast detection, high sensitivity, and facile sample pretreatment [21]. The impedimetric detection of ammonia has been previously reported [22,23] using polypyrrole-coated zinc oxide and ion-exchanged Y zeolites.
Recent studies have shown that interdigitated electrodes can improve the conductive properties of materials generally referred to as poor conductors, such as Covalent Organic Frameworks (COFs) [24]. COFs are highly porous materials formed by organic molecules joined by covalent bonds (linkages) to other organic molecules (linkers), forming an infinite, repetitive arrangement in two-dimensional (2D) or three-dimensional (3D) networks [25][26][27]. These materials have a high surface area, and due to the strength of the covalent bond, they exhibit excellent stability in aqueous solution and acidic media, and also at high temperatures and in corrosive environments [28]. These properties make COFs suitable for gas storage, energy storage, water treatment, cancer treatment, fuel cells, catalysts, and explosives sensors [29][30][31][32][33][34][35][36]. The synthesis of COFs can be carried out by several methods, including solvothermal, dropwise, microwave, sonochemical, and mechanochemical routes [37][38][39][40]. These methods, together with the large number of organic reactions that can be used in COF synthesis, make the structural and topological diversity of these materials abundant [37]. Regarding the electrical properties of COFs, reports in the literature show that COFs have poor electrical conductivity, which can become a problem in the development of an electrochemical sensor. For this reason, some strategies have been developed to increase conductivity and improve electrical communication throughout the entire material [30], for example by synthesizing a conductive polymer such as PEDOT (poly(3,4-ethylenedioxythiophene)) inside the pores of the COF.
In this work, two COFs were synthesized through the condensation of primary amines with aldehydes by the dropwise/solvothermal method. These COFs were deposited as films over gold interdigitated electrodes. Using these COF-electrode systems, it was then possible to detect ammonia and low molecular weight amines with an impedimetric technique. Notably, there have been no previous reports using COFs and electrical impedance for the detection of ammonia.
Synthesis of COFs
The COF synthesis followed the thin-layer protocol previously reported [42]. A 0.071 mmol amount of the corresponding amine (benzidine (BZ) or 3,3-dimethylbenzidine (DMBZ)) was dissolved in 3.2 mL of N,N-dimethylformamide (DMF) in a glass vial. A gold interdigitated electrode and a magnetic bar were added to this vial, and the vial was then capped and heated at 90 °C with gentle stirring. After the temperature was reached, 1 mL of a 10 mg/mL TFP solution (0.048 mmol) was added slowly over 1 h. Subsequently, the reaction was carried out for 3 h (Scheme 1). Finally, the vial was opened, and the modified electrodes with the COF film on their surface and the powder residues were separated. The COF films deposited over the electrodes, labeled TFP-BZ-IDE and TFP-DMBZ-IDE, were washed three times with DMF, acetone, and distilled water.
Characterization
The COF powder residues from the synthesis over the electrodes were characterized by X-ray diffraction on an X'Pert Panalytical diffractometer with a copper anode (λ = 0.1506 nm), a data acquisition time of 1 s, a 2θ range of 2-35 degrees, and a step size of 0.1 degrees. FT-IR was carried out on a Shimadzu IRPrestige-21 with an attenuated total reflection (ATR) accessory in the range from 750 cm−1 to 4000 cm−1, with 50 scans and 4 cm−1 resolution. The ssNMR experiments were recorded on a 400 MHz Bruker Avance III spectrometer with a 4 mm channel for solids (15 MHz). The 13C data acquisition used a combination of cross-polarization and magic angle spinning (CP/MAS) in a 7 mm rotor at room temperature. The N2 adsorption measurements were performed on a Micromeritics ASAP 2020 instrument with nitrogen at 77 K and approximately 30 mg of sample. The COF films synthesized over the electrodes, on the other hand, were characterized by SEM and Raman analysis; the former was recorded on a JEOL JSM-6490L microscope with a secondary-electron Everhart-Thornley detector. The powders were coated with an approximately 10 nm Pt-Au film; the electron energy was 10 kV. The Raman experiments were carried out on a Thermo Scientific DXR with a laser excitation of 540 nm, 20 mW power, and 50× magnification.
Electrochemical Impedance Spectroscopy Experiments
The electrochemical impedance spectroscopy experiments were recorded on a Gamry Interface 1000 E potentiostat in potentiostatic mode at open circuit potential in the frequency range 1 Hz to 100 kHz with 10 mV amplitude of the test signal. The data were analyzed with the Gamry instrument version 7.05 package software.
Low Molecular Weight Amines Detection
The applicability of the COFs as gas sensors for ammonia, methylamine, ethylamine, dimethylamine, and triethylamine detection was evaluated using the TFP-BZ-IDE and TFP-DMBZ-IDE systems. Low molecular weight amine detection was carried out on a modified system previously reported [43], using a closed chamber equipped with a fan and an electrode holder. The TFP-BZ-IDE and TFP-DMBZ-IDE systems were introduced inside the chamber, and amine vapors were injected through a septum in an injection port, modifying the chamber atmosphere.
To evaluate the gas response, the COF system's impedance was measured in air and the presence of amines. The impedance response data reported for each concentration corresponds to the value at which the measurement was stabilized after three air/amine vapor injections cycles.
Powder Characterization
The COF synthesis on the IDEs followed the chemical bath deposition technique, a methodology that also produces a supernatant powder, here named the residual COF powder. Characterization of the thin films was difficult, and after several attempts it was decided to extensively characterize this residual powder and use it as a reference for the characterization of the COF-electrode systems.
Powder X-ray Diffraction
The COFs showed the typical crystallinity for this kind of material, exhibiting a significant peak below 5° 2θ, at exactly 3.6° 2θ for both COFs (TFP-BZ and TFP-DMBZ), corresponding to the (100) plane, and additional peaks at 6.2° 2θ and 26.6° 2θ corresponding to the (110) and (001) planes, respectively. The diffraction patterns are practically the same for both materials (Figure 1), since the chemical structures of the two COFs are very similar, as reported previously [44].
Fourier Transform Infrared Spectroscopy
The FT-IR spectra of the COFs present the typical broad and intense bands corresponding to the characteristic C=C stretch of aromatics in the region of 1570 cm−1 (1582 cm−1 for TFP-BZ and 1576 cm−1 for TFP-DMBZ) and to the C-N stretch of amines near 1260 cm−1 (1289 cm−1 and 1275 cm−1 for TFP-BZ and TFP-DMBZ, respectively). There are also weak bands close to 1650 cm−1 corresponding to the C=O bonds of the keto form in the keto-enamine tautomerization [28]. The disappearance of the N-H bands (approximately 3400 cm−1) and C=O bands (1730 cm−1) shows that there are no precursor remnants (Figure 2a).
Solid-State Nuclear Magnetic Resonance Spectroscopy
The ssNMR results demonstrated that the keto form is predominant in the tautomerization keto-enamine form. The signals at 184.4 ppm for TFP-BZ and 184.1 ppm for TFP-DMBZ correspond to the carbonyl of the keto group (Figure 2b). Other characteristic signals of the keto form are those corresponding to sp 2 carbon bonding with the nitrogen of amines at 107.0 ppm and 107.9 ppm for TFP-BZ and TFP-DMBZ, respectively (Figure 2b).
N 2 Adsorption Isotherms
The nitrogen adsorption isotherms were studied to determine the COF surface areas. The Brunauer-Emmett-Teller (BET) method was used to calculate the surface areas together with nonlocal density functional theory (NLDFT) analysis. The BET surface area was found to be 256.95 ± 8.39 m2 g−1 and 256.78 ± 9.83 m2 g−1 for TFP-BZ (Figure 3a) and TFP-DMBZ (Figure 3b), respectively. The pore size distributions of both COFs showed maxima at 9.92 Å for TFP-BZ and 10.84 Å for TFP-DMBZ (Figure 3c), with isotherms showing substantial adsorption in the microporous region (low relative pressures). The pore size distribution, with most pore widths below 20 Å, confirms the microporosity of the materials.
Films Characterization
As described in the synthesis section (Section 2.2), the COF films were deposited on Au IDEs. Figure 6a is a magnification of the interdigitated region of the pristine electrode, which allows a visual comparison with the COF film (Figure 6b). The COF film covers the entire electrode surface, with some material accumulation visible as red dots. The SEM images show that there is a continuous COF film on the electrode when comparing the pristine ceramic region (Figure 6c) and TFP-BZ-IDE (Figure 6d). The COF film SEM image shows an intercrossing of wires forming a network morphology, similar to the COF powder characterized previously (Figure 4). In Figure 6, the TFP-BZ film is shown as an example. The TFP-BZ-IDE and TFP-DMBZ-IDE materials were characterized by Raman spectroscopy. Again, these materials presented signal interference, making baseline corrections necessary. After that, the TFP-BZ-IDE and TFP-DMBZ-IDE materials showed bands similar to those of the COF powders shown previously (Figure 5). The typical C=C stretch from aromatic organic compounds was found at 1606 cm−1 for TFP-BZ-IDE (Figure 7, black) and 1605 cm−1 for TFP-DMBZ-IDE (Figure 7, red). These results confirm the successful deposition of COFs on the Au IDEs. Unfortunately, TFP-BZ-IDE and TFP-DMBZ-IDE did not show diffraction peaks even when a grazing incidence diffraction method was employed, probably because the COF layer deposited on the electrode is very thin or lacks crystallinity. However, the SEM images, Raman spectra, and impedance characterization (Section 3.3) show the presence of organic material on the electrodes, indicating that the deposition of the polymers on the electrodes was successful.
Low Molecular Weight Amines Detection
As mentioned in Section 2.4, a potentiostat was employed for the impedance experiments. The impedance measurements showed high values, in the GOhm range, so it was necessary to employ a potentiostat with a high impedance range and to perform the corresponding calibrations and blanks. Impedance measurements with two-contact systems have been reported many times in the literature [45][46][47]. Generally, these experiments are less complex, practical, and fast to complete. A two-contact impedance setup can also be advantageous because it simplifies the construction and interpretation of the circuit model.
The electrochemical characterization of the films was carried out by fitting the experimental data with a circuit simulation, which showed a good fit between experimental and simulated data (Figure 8a) for TFP-BZ-IDE (black) and TFP-DMBZ-IDE (red). The circuit used to simulate the process was a typical CPE circuit formed by three components: Rm is the material resistance, Rct is the charge transfer resistance, and the constant phase element (CPE) represents a complex electrical double layer (Figure 8b). In all experiments, the magnitude of the impedance was in the GOhm (GΩ) range. The simulations showed that the most significant contribution to these impedances comes from Rct, while Rm and the CPE contribute less to the total impedance value. These results suggest that charge transfer between surfaces limits the electrical conduction process.
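As a rough illustration of the three-element fit described above, the sketch below computes the impedance spectrum of a circuit in which Rm is in series with Rct in parallel with the CPE; this series/parallel arrangement and all parameter values are assumptions for illustration, not the fitted values reported here.

```python
import numpy as np

def circuit_impedance(freq_hz, Rm, Rct, Q, n):
    """Z(f) = Rm + [1/Rct + Q*(j*2*pi*f)^n]^-1 for an Rm + (Rct || CPE) circuit."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_parallel = 1.0 / (1.0 / Rct + Q * (1j * w) ** n)
    return Rm + z_parallel

# Frequency range used in the experiments (1 Hz to 100 kHz)
f = np.logspace(0, 5, 60)
Z = circuit_impedance(f, Rm=1e6, Rct=5e9, Q=1e-10, n=0.9)   # GOhm-scale Rct (assumed)
magnitude = np.abs(Z)                   # Bode modulus
phase_deg = np.degrees(np.angle(Z))     # Bode phase
```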
For low molecular weight amine detection, the impedance was measured under different amine concentrations and the changes in the Bode and Nyquist plots were observed. The Bode plots reveal the frequency region where the amines generate an impedance change. As an example, the Bode plots of TFP-BZ-IDE exposed to 0.00 ppm (black), 1.10 ppm (red), and 2.21 ppm (blue) of ammonia are presented in Figure 9a. Since significant impedance changes occurred at frequencies below 10 Hz when the ammonia concentration was varied (green dashed line in Figure 9a), 1.58 Hz was chosen as the frequency for the sensing experiments with the low molecular weight amines. Initially, the impedance response was studied for each amine, recording 10 points from 0.00 ppm to 44.15 ppm. For the selectivity plot (Figure 9b), an amine concentration of 8.83 ppm was selected for the impedance measurements at 1.58 Hz, because it is important to understand the behavior of the sensor at low concentrations. The four amines mentioned above were evaluated, along with three possible interfering gases (nitrogen, carbon dioxide, and methane). The impedance responses of the TFP-BZ-IDE and TFP-DMBZ-IDE materials were around five times greater than those found for the interfering gases, showing that the TFP-BZ-IDE and TFP-DMBZ-IDE systems are significantly selective toward amines. Although there is no clear trend according to the chemical nature of the amine, all responses had high values of around 70%. These results show that amine vapors affect the TFP-BZ-IDE and TFP-DMBZ-IDE impedance. No trend was seen in the amine behavior, perhaps because the critical factor for the COF-IDE system impedance is the charge transfer resistance (Rct), and these resistances are very similar for the COFs synthesized in this work. The impedance experiments in this work were carried out under static conditions; each measurement was made after a stabilization time (20 s), and no changes in the response were observed for repeated measurements at the same concentration. However, to employ these COF films as sensors in future work, the measurements should be made under dynamic conditions; this would make it possible to evaluate the reproducibility of the sensor and the adsorption and desorption behavior of the amine vapors.
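For reference, a simple way to express the sensor response at the fixed 1.58 Hz frequency is the relative impedance change with respect to air, as sketched below; defining the response this way, and the numerical values used, are our assumptions for illustration, not measured data.

```python
def response_percent(z_air, z_analyte):
    """Relative impedance change (%) between air and analyte exposure."""
    return 100.0 * abs(z_analyte - z_air) / abs(z_air)

z_air = 3.2e9        # |Z| in air at 1.58 Hz (placeholder, GOhm range)
z_amine = 5.5e9      # |Z| under 8.83 ppm amine (placeholder)
print(f"Response: {response_percent(z_air, z_amine):.1f} %")
```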
Additionally, the impedance changes at low amine concentrations, from 0.00 ppm to 2.21 ppm, were studied. Generally, at very low concentrations (under 0.88 ppm), the impedance responses showed a linear trend; however, above 0.88 ppm the impedance changes decrease, indicating saturated electrode behavior. TFP-BZ-IDE (Figure 10a) showed lower impedance changes for ammonia and ethylamine, while the impedance changes for methylamine and dimethylamine were significant. On the other hand, TFP-DMBZ-IDE (Figure 10b) showed similar impedance responses for ammonia, dimethylamine, and ethylamine, whereas the change for methylamine was notable. More pronounced saturation behavior was observed for TFP-DMBZ-IDE, suggesting that this COF film has high sensitivity and could be used at concentrations below approximately 1 ppm. Nevertheless, the linear regions achieved in this work are very promising for developing a detection and quantification system for these amines. In future work, we plan to evaluate the COF impedance changes under mixtures of amines and interfering gases; in a real application, the sensor may be exposed to an environment with different gases at the same time, which could generate signal interference, as reported in the literature [48,49].
Conclusions
The impedimetric experiments with the COF systems TFP-BZ-IDE and TFP-DMBZ-IDE showed that the most significant impedance contribution came from the films' charge transfer resistance. Selective detection of low molecular weight amines was observed, with impedance changes of around 70%. Amine vapors were detected at low concentrations, below two ppm, with a linear relationship between amine concentration and impedance response. Therefore, these COF films are promising materials for developing a chemosensor for the detection of low molecular weight amines.
Conflicts of Interest:
The authors declare no conflicts of interest. | 4,137 | 2020-03-01T00:00:00.000 | [
"Chemistry"
] |
Induction of Axonal Outgrowth in Mouse Hippocampal Neurons via Bacterial Magnetosomes
Magnetosomes are membrane-enclosed iron oxide crystals biosynthesized by magnetotactic bacteria. As the biomineralization of bacterial magnetosomes can be genetically controlled, they have become promising nanomaterials for bionanotechnological applications. In the present paper, we explore a novel application of magnetosomes as a nanotool for manipulating axonal outgrowth via stretch-growth (SG). SG refers to the stimulation of axonal outgrowth through the application of mechanical forces. Thanks to their superior magnetic properties, magnetosomes were used to magnetize mouse hippocampal neurons in order to stretch axons under an applied magnetic field. We found that magnetosomes are avidly internalized by cells: they adhere to the cell membrane, are quickly internalized, and degrade slowly over a few days after internalization. Our data show that bacterial magnetosomes are more efficient than synthetic iron oxide nanoparticles in stimulating axonal outgrowth via SG.
Introduction
The mechanosensitivity of cells determines a specific response to mechanical stimulation. Mechanical stretch can modulate several different cellular functions, such as the electrical activity of cardiac muscle [1], osteogenesis [2], and the myogenic response of small arteries [3]. Although mechanosensitivity is essential to all cells, studies have mainly been devoted to clarifying signal mechanotransduction in cells that play a fundamentally mechanical role. However, in the last decades, the pivotal role of mechanical force in neuron development has been clarified and has attracted much attention. Neurons are mechanosensitive cells over three distinct ranges of force magnitude [4]. They are even more mechanosensitive than other non-neuronal cell types, sensing, probing, and responding to pico-Newton (pN) forces [5]. Recently, our team demonstrated that the generation of pN forces modulates neurite elongation, sprouting, and neuron maturation [5,6]. We developed a new methodology for stretching the axon shaft from the "inside" by developing a force of about 10 pN, which is similar to or lower than the forces endogenously generated by the growth cone, indicating that the generation of such extremely low mechanical forces is an endogenous mechanism of axonal outgrowth [5,6]. This methodology is based on labelling the whole axon with magnetic nanoparticles (MNPs) and subsequently applying an external magnetic field gradient to generate a force that stretches the whole axon. Mouse hippocampal neurons were found to elongate by stretch-growth at a rate of about 6.6 ± 0.2 µm·h−1 without thinning or axon disconnection [5,6]. Interestingly, we found that stretch-growth occurs to a similar extent using iron oxide MNPs with different
Magnetosome Cell Interactions
The cytotoxicity/biocompatibility of bacterial magnetosomes has been tested on various mammalian cell lines, including primary cells as well as cancer cells [14,37,[43][44][45]. Although the tested cell lines differed in their sensitivity to magnetosome treatment, viability values suggested biocompatibility for a broad concentration range. Thus, even for relatively high particle amounts (up to 400 µg Fe mL −1 ), cell viability was only slightly affected, even for more sensitive primary cells [14,37].
In this study, magnetosomes from M. gryphiswaldense were isolated and purified with their intact biological membrane (Figure 1), exhibiting an overall particle size of 39.7 ± 8.1 nm, a hydrodynamic diameter of 51.2 ± 7.2 nm, and high saturation magnetization (≥70 Am 2 kg −1 ) [46,47]. Magnetosomes were tested on mouse hippocampal neurons to exclude any toxicity and data confirm that the particles can be safely administered to cells to the low concentrations (<5 µg Fe mL −1 ) ( Figure 2C) required for stretch-growth.
In this study, magnetosomes from M. gryphiswaldense were isolated and purified with their intact biological membrane (Figure 1), exhibiting an overall particle size of 39.7 ± 8.1 nm, a hydrodynamic diameter of 51.2 ± 7.2 nm, and high saturation magnetization (≥70 Am 2 kg −1 ) [46,47]. Magnetosomes were tested on mouse hippocampal neurons to exclude any toxicity and data confirm that the particles can be safely administered to cells to the low concentrations (<5 µg Fe mL −1 ) ( Figure 2C) required for stretch-growth. gryphiswaldense. The latter biomineralizes up to 40 magnetosomes, arranged in a chain-like manner at midcell. (B) TEM image of a suspension of isolated/purified magnetosomes (negatively stained), containing well-dispersed particles. Magnetosomes consist of a magnetite core that is surrounded by a biological membrane (indicated by arrows). The latter consists of phospholipids and a set of magnetosome-specific proteins, and provides colloidal stability.
Next, we studied the interaction of magnetosomes with neurons, focusing on the intracellular localization and particle degradation, which is relevant for estimating the mechanical force generated in the axonal shaft. Cells treated with magnetosomes appeared morphologically similar to the control, and magnetosomes were visible as single spots in the axons after Prussian Blue staining (without magnetosomes, Figure 2A1, and with 2.5 µg mL−1 magnetosomes, Figure 2A2). In order to better characterize the intracellular localization of these particles in the neurons, we used fluorescently labelled magnetosomes (Figure 2B). Fluorescence was easily detected in the cytoplasm of both the cell soma and the neurites. By analyzing the reconstruction of the neuron volume via Z-stack re-slicing, magnetosomes were detected intracellularly as small puncta. In the neuronal soma, particles were absent from the nucleus (Figure 2B1) but present in the cytoplasm, as well as in the axon (Figure 2B2). This result is in line with previous observations that we collected with iron oxide magnetic nanoparticles in PC12 cells and hippocampal neurons [5,6]. In order to characterize the intracellular degradation dynamics, we incubated the cells with magnetosomes for 4 h, followed by an extensive washing step. We then collected cell pellets at different time points (1, 3, 7, and 10 days post-treatment, dpt). For each time point, the cell pellets were lysed, and the supernatant was collected and separated by centrifugation from the cell membrane fragments to measure the intracellular levels of Fe2+ and Fe3+, which are a mark of particle dissolution. The collected data indicate that magnetosomes are mainly intact at 1 dpt, as documented by the brown-stained pellet and the low intracellular level of Fe ions (Figure 2D).
From 3 dpt, magnetosomes start to disappear from the pellet and the intracellular iron concentration increases, consistent with the assumption that the magnetosomes inside the neurons are slowly dissolving. Starting from 7 dpt, the intracellular Fe concentration stabilizes and the particles totally disappear from the cell membrane pellets, suggesting that the process of intracellular magnetosome degradation is now sustained. These data are also consistent with observations previously collected with hippocampal neurons treated with different magnetic nanoparticles [5].
Magnetosomes Induce Stretch-Growth
Recently, we demonstrated in PC12 cell line and hippocampal neurons that neurite labelling with superparamagnetic iron oxide nanoparticles (SPIONs) enables them to be stretched through the dragging force created by an external magnetic field [5,6]. In this study, we tested the capability of magnetosomes to obtain the same effect, in terms of induction of stretch-growth.
The elongation analysis was carried out on cells labelled 4 h after plating with magnetosomes and exposed from day in vitro (DIV) 1 to DIV3 to the magnetic field (hereafter, labelled as stretch group) or a null magnetic field (labelled as control group). Figure 3 shows hippocampal neurons cultured on two distinct areas separated by a 0.5 mm cell-free gap region. A comparison between control and stretched condition reveals the dramatic effect of the stretch on the length of axons after 48 h of continuous stretching. For the quantification of this increase, we tested the two doses for which we excluded any toxicity on neurons (1.25 and 2.5 µg mL −1 ). Each experiment was repeated four times and analyzed by two different operators. In each experiment and with each dose tested, we found a statistically highly significant increase (p < 0.0001) in axonal length in comparison to the control groups ( Figure 3B,C). To demonstrate that the observed length increase was real growth and not a viscoelastic deformation, the average thickness of neurites treated with 2.5 µg mL −1 of magnetosomes was calculated ( Figure 3D). In line with our previous findings [5,6], no differences in size of neurites were found (p = 0.07).
Additionally, we compared the effect induced by biosynthesized magnetosomes with that of synthetic iron oxide nanoparticles. In previous works, we tested nanoparticles with different sizes (iron oxide core from 2 to 80 nm), magnetic properties, and surface coatings. Here, for comparison, we selected commercially available SPIONs (core size 75 ± 10 nm, saturation magnetization 59 Am2 kg−1), i.e., the most powerful inducer of stretch-growth tested so far by our team [5,6]. Figure 3E shows the elongation rate, given as the fold change of the stretch group versus the control group. Specifically, stretch-growth induced an increase in the elongation rate of 1.43, 1.58, and 2.08 µm h−1 using, respectively, 3.6 µg Fe mL−1 of SPIONs and 1.25 and 2.5 µg Fe mL−1 of magnetosomes. Surprisingly, the treatment with 2.5 µg Fe mL−1 of magnetosomes provided a statistically significant increase in the length of neurites compared to samples treated with 3.6 µg Fe mL−1 of SPIONs (p = 0.021). We wondered whether this increase could be associated with differences in the cell-particle interactions between the two groups. The study of the axonal ultrastructure shows that magnetosomes adhere to the cell membrane of developing axons and, after internalization, localize as single particles within the axon (Figure 4A). Similarly, SPIONs localize as single spots at both the cell membrane and the axoplasm of hippocampal neurons (Figure 4B), suggesting an analogous internalization pathway and localization pattern for the two groups. However, axons treated with magnetosomes show a very high density of smaller, less electron-dense spots whose size is compatible with the mean size of the magnetosome iron oxide core (Figure 4A, dashed white rectangle), which could account for a strong ability of magnetosomes to enter the axons. Considering the smaller size of the inorganic core and the lower Fe concentration used for cell labelling (Table 1), we can conclude that the superior magnetic behavior of the biosynthesized magnetosomes (Table 1), together with their strong ability to accumulate inside the axons, makes them a superior nanotool for inducing stretch-growth of hippocampal neurons compared with artificial SPIONs.
Animals
Animal procedures were performed in strict compliance with protocols approved by the Italian Ministry of Public Health (MoH) and by the local Ethical Committee of the University of Pisa, in conformity with Directive 2010/63/EU (project license approved by the MoH on 22 November 2018). C57BL/6J mice (post-natal P0-P1) were used. They were kept in a regulated environment (23 ± 1 °C, 50 ± 5% humidity) with a 12 h light-dark cycle and food and water ad libitum.
Chemically Synthesized Nanoparticles
SPIONs, commercially available (Fluid-MAG-ARA, Chemicell; Berlin, Germany), have an inorganic core of iron oxide, ~75 ± 10 nm in diameter, and a saturation magnetization of 59 Am2 kg−1, as stated by the supplier. They have an organic shell of glucuronic acid that prevents aggregation, and their hydrodynamic diameter is 100 nm.
Magnetosome Isolation and Purification
Magnetosomes were isolated and purified as previously described [13,14]. Cell pellets of M. gryphiswaldense were resuspended in 50 mM HEPES, 1 mM EDTA, at pH 7.2, and cell disruption was performed by passing the suspension 3-5 times through a microfluidizer system (M-110 L, Microfluidics Corp., Westwood, MA, USA) equipped with a H10Z interaction chamber at 124 MPa. Afterwards, the crude extract was passed through a MACS magnetic-separation column (5 mL; Miltenyi, Bergisch Gladbach, Germany) placed between two neodymium-iron-boron magnets (each 1.3 T). Thereby, the magnetosomes were retained within the column, whereas non-magnetic cellular compounds passed the column and were instantly eluted. The column was subsequently washed with 50 mL 10 mM HEPES, 1 mM EDTA, at pH 7.2, followed by 50 mL high-salt buffer (10 mM HEPES, 1 mM EDTA, and 150 mM NaCl, at pH 7.2) to remove electrostatically bound impurities, and again by 50 mL 10 mM HEPES and 1 mM EDTA, at pH 7.2. The magnets were finally removed, and the magnetosomes were eluted with ddH 2 O. As further purification step, the particle suspension was centrifuged through a 60% w/v sucrose cushion (in 10 mM HEPES and 1 mM EDTA, at pH 7.2) for 2 h at 200,000× g and 4 • C. During centrifugation, the magnetosomes formed a pellet at the bottom of the tube (due to their high density), whereas residual impurities were retained in the cushion. Lastly, the particles were resuspended in 10 mM HEPES, at pH 7.0, and stored in Hungate tubes at 4 • C under a nitrogen atmosphere.
Determination of Iron Concentrations
The iron content of suspension of isolated magnetosomes was determined by atomic absorption spectroscopy (AAS). For this, 25-50 µL of magnetosome suspension was mixed with 69% nitric acid (final volume 3 mL) and incubated for 3 h at 98 • C. Samples were subsequently filled up to a volume of 3 mL with ddH 2 O and analyzed using an Analytik Jena contrAA 300 high-resolution atomic absorption spectrometer (Analytik Jena, Jena, Germany) equipped with a 300 W xenon short-arc lamp (XBO 301, GLE, Berlin, Germany) as continuum radiation source. The equipment presented a compact high-resolution double monochromator (consisting of a prism pre-monochromator and an echelle grating monochromator) and a charge-coupled device (CCD) array detector with a resolution of about 2 pm per pixel in the far ultraviolet range. Measurements were performed at a wavelength of 248.3 nm using an oxidizing air/acetylene flame. The number of pixels of the array detector used for detection was 3 (central pixel 1). All measurements were carried out in quintuplicates (n = 5), each as a mean of three technical replicates.
Fluorescent Labelling of Isolated Magnetosomes
Magnetosomes were fluorescent-labelled by reaction with DyLight 488 Amine-Reactive Dye (NHS ester-activated derivative of high-performance DyLight 488; Thermo Scientific, Waltham, MA, USA). Thereby, the labelled particles were shown to exhibit an up to 12-fold increased fluorescence compared to EGFP magnetosomes (genetically engineered magnetosomes that display up to 200 EGFP molecules on the surface) [37].
Briefly, 1.5 µg of the fluorescent dye (stored as a 10 mg mL −1 stock solution in dimethylformamide) was added to 1 µg magnetosomes (in 50 mM NaHCO 3 , pH 9.0) and incubated in the dark for 2 h at 16 • C. Excess dye was removed by extensive washing in which the particles were several times pelleted by centrifugation (4000× g, 4 • C, 30 min) and, after discarding the supernatant, resuspended in 10 mM HEPES, at pH 7.0. Success of the labelling reaction was confirmed by fluorescence measurements (535 nm) using an Infinite M200pro plate reader (Tecan, Crailsheim, Germany).
Sterile Filtration of Magnetosome Suspensions
For application under sterile conditions, magnetosomes were sterile filtrated as described previously [39]. Suspensions of WT particles or DyLight 488-labelled magnetosomes were diluted with 10 mM HEPES, at pH 7.0, to an Fe concentration of~50 µg mL −1 and filtrated using a 0.22 µm PVDF sterile filter (Roth, Karlsruhe, Germany). The particles were collected by low-spin centrifugation (4000× g, 4 • C, 30 min) and resuspended in a small volume of ddH 2 O. After determination of the Fe content by AAS, the suspensions were diluted to a final Fe concentration of 500 µg mL −1 .
Transmission Electron Microscopy
For transmission electron microscopy (TEM) analyses of M. gryphiswaldense cells and isolated magnetosomes, specimens were deposited onto carbon-coated copper-mesh grids (Science Services, Munich, Germany). Isolated particles were additionally negatively stained with 2% uranyl acetate. TEM was performed on a JEOL 1400 (JEOL, Tokyo, Japan) with an acceleration voltage of 80 kV. Images were taken with a Gatan Erlangshen ES500W CCD camera.
For localization studies, neurons, both control and stretched, were fixed with 1.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, at pH 7.4. After rinses with the same buffer, cells were post-fixed in reduced osmium tetroxide solution (1% OsO 4 , 1% K 3 Fe(CN) 6 , and 0.1 M sodium cacodylate buffer), stained with our homemade staining solution [52], dehydrated in a growing series of ethanol, and flat-embedded in epoxy resin.
Images for morphological characterization were collected with a Zeiss LIBRA 120 Plus transmission electron microscope operating at 120 keV, equipped with an in-column omega filter.
Analytical Methods
The optical density (OD) and magnetic response (C mag ) of M. gryphiswaldense cultures were monitored by photometric measurements at 565 nm as reported previously [53]. Thereby, the OD 565 directly correlates with the cellular growth. C mag represents a lightscattering based proxy for the average magnetic orientation of bacterial cells and relates to the average magnetosome numbers in cell populations, allowing semiquantitative estimations of the particle content.
Hydrodynamic diameters of isolated magnetosomes were determined with a Malvern Zetasizer Nano-ZS (Malvern Panalytical, Malvern, UK). Measurements were performed in the automatic mode (wavelength 638 nm) at 25 • C on highly diluted particle suspensions using DTS1070 cuvettes.
Bio-Synthetized versus Artificially Synthetized Nanoparticles
The summary of the main properties of bacterial magnetosomes and a comparison with the commercial SPIONs used in this work is provided in Table 1. As for the magnetic properties, SPIONs are multi-domain superparamagnetic nanoparticles, as stated from the supplier. Recent studies on the magnetic properties of isolated magnetosomes revealed the co-existence of stable single domain (SSD) and superparamagnetic particles (SP) [33,54,55]. Although the particles exhibit a narrow size distribution, the suspensions are polydisperse to some extent [14]. Suspensions of wild-type-like sized magnetosomes also contain SP particles, however, the SSD particles dominate [33].
Magnetic Field
Experiments were conducted in 35 mm Petri dishes placed inside a Halbach-like cylinder magnetic applicator, which provided a constant magnetic field gradient of 46.5 Tm −1 in the radial centrifugal direction [5,6,56].
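As a rough, hedged estimate of the forces involved, the sketch below combines the field gradient above with the magnetosome properties reported earlier (F = m·dB/dz for a particle at magnetic saturation). The magnetite density, the assumption of saturation, and the particle count needed to reach a ~10 pN force are illustrative assumptions, not values reported in this study.

```python
import numpy as np

MS_PER_KG = 70.0          # saturation magnetization, A m^2 kg^-1 (from the text)
GRADIENT = 46.5           # magnetic field gradient, T m^-1 (from the text)
CORE_DIAMETER = 39.7e-9   # mean core size, m (from the text)
MAGNETITE_DENSITY = 5175  # kg m^-3 (assumed bulk magnetite value)

core_volume = (4.0 / 3.0) * np.pi * (CORE_DIAMETER / 2.0) ** 3
core_mass = core_volume * MAGNETITE_DENSITY
force_per_particle = MS_PER_KG * core_mass * GRADIENT   # F = m * dB/dz

print(f"Force per magnetosome: {force_per_particle * 1e15:.2f} fN")
# Number of particles needed to reach a ~10 pN stretching force along one axon
print(f"Particles for 10 pN:  {10e-12 / force_per_particle:,.0f}")
```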
Stretching Assay
For stretching assay 80,000 cells were seeded on a 14 mm glass coverslip. Alternatively, seeding of hippocampal neurons was performed in a 2-well silicone insert (80209; IBIDI, Gräfelfing, Germany), at a density of 100,000 cells per well. After letting the neurons attach for 4 h, the silicone insert was removed in order to generate a cell-free gap region of 500 µm. For both supports used, particles were added 4 h after seeding (DIV0.17). At DIV1, the culture was placed in the magnetic applicator (Stretch group) or outside (Control group). At DIV3, samples were fixed and stained for immunofluorescence.
Toxicity Test
Toxicity of magnetosomes was evaluated by performing dose-response assays. At DIV3, cells were fixed and stained for immunofluorescence and the number of viable neurons was counted.
After lysis, cell membranes were precipitated from the lysate by centrifugation (18,000 rpm, 5 min). An iron assay kit (DIFE-250, QuantiChrom) was performed on cell lysate for intracellular iron quantification, and the absorbance was measured at a wavelength of 590 nm.
Particle Localization
An iron stain kit (HT20-1KT, Sigma) was applied for particle localization. In addition, fluorescent magnetosomes were used.
Image Analysis
The analysis of the elongation was performed using image analysis software ImageJ. Neurite length l was evaluated using the plugin NeuronJ [57], and 120 non-interconnected primary axons were analyzed from 10× magnification images (randomly acquired).
For axon thickness, a population of 40 axons was analyzed from 10× magnification images (randomly acquired). For each axon, the longest path l was considered, and the thickness s was calculated as s = A/l, A being the axon area associated with that path. The area was calculated from the images after threshold normalization and binary conversion.
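The following minimal sketch illustrates the thickness estimate s = A/l on a binarized axon image; the mock mask and path length are invented for illustration, and the actual segmentation and tracing were performed with ImageJ/NeuronJ as described above.

```python
import numpy as np

def axon_thickness(binary_mask: np.ndarray, path_length_px: float) -> float:
    """Mean thickness (pixels) of an axon: segmented area divided by path length."""
    area_px = float(np.count_nonzero(binary_mask))
    return area_px / path_length_px

# Mock example: a straight 3-pixel-thick, 120-pixel-long axon
mask = np.zeros((10, 120), dtype=bool)
mask[4:7, :] = True
print(axon_thickness(mask, path_length_px=120.0))   # -> 3.0
```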
Statistical Analysis
Data were plotted and analyzed with GraphPad software, version 6.0. Significance was set at p ≤ 0.05. Statistical power analyses have been performed with G-power software.
Conclusions
Neurons are mechanosensitive cells, and the application of extremely low forces positively modulates axonal elongation, neurite branching, and neuron maturation [5,[58][59][60][61][62][63][64][65]. Many studies have demonstrated that magnetic nanoparticles are an efficient tool for magnetizing axons, and the subsequent application of a magnetic field gradient can generate such extremely low forces that drive productive axonal outgrowth [5,6,56,66,67]. The efficiency of this stretching protocol depends on the ability of the nanoparticles to label axons and on their magnetic behavior. In this study, we demonstrate that the superior magnetic properties of bacterial magnetosomes, together with their strong interactions with neurons, probably facilitated by their membranous structure, account for their extraordinary ability to induce stretch-growth. This finding expands the range of biotechnological applications of bacterial magnetosomes to the neuroscience field, which has been poorly explored to date.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Italian Minister of Health (MOH) and the local ethical committee (OPBA) (project name "meccanotrasduzione della crescita assonale", license date 22 November 2018).
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are available on request due to privacy or ethical restrictions. | 5,369 | 2021-04-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Biology"
] |
An Automated Cropland Burned-Area Detection Algorithm Based on Landsat Time Series Coupled with Optimized Outliers and Thresholds
Given the increasing severity of fires worldwide, the accurate detection of small and fragmented cropland fires has been a significant challenge. The use of medium-resolution satellite data can enhance detection accuracy; however, key challenges in this approach include accurately capturing the annual and interannual variations of burning characteristics and identifying outliers within the time series of these changes. In this study, we focus on a typical crop-straw burning area in Henan Province, located on the North China Plain. We develop an automated burned-area detection algorithm based on near-infrared and short-wave infrared data from Landsat 5 imagery. Our method integrates time-series outlier analysis using filtering and automatic iterative algorithms to determine the optimal threshold for detecting burned areas. Our results demonstrate the effectiveness of using preceding time-series and seasonal time-series analysis to differentiate fire-related changes from seasonal and non-seasonal influences on vegetation. Optimal threshold validation results reveal that the automatic threshold method is efficient and feasible, with an overall accuracy exceeding 93%. The resulting burned-area map achieves a total accuracy of 93.25%, far surpassing the 76.5% detection accuracy of the MCD64A1 fire product, thereby highlighting the efficacy of our algorithm. In conclusion, our algorithm is suitable for detecting burned areas in large-scale farmland settings and provides valuable information for the development of future detection algorithms.
Introduction
The burning of crop residues in fields is one of the most widespread global biomass-burning activities, contributing substantial amounts of pollutant particulate matter and gas [1], changing the albedo, and emitting greenhouse gases that affect the climate [2][3][4]. Accurate burned-area mapping is essential for quantifying the environmental impact of wildfires, compiling statistics, designing effective short- to medium-term impact mitigation measures, and planning post-fire rehabilitation [5,6].
Various burned-area products have been published in the last two decades based on relatively coarse spatial-resolution sensors (e.g., MODIS, AVHRR, and MERIS). However, small burned areas tend to be poorly represented in these coarse-resolution products [7]. For example, large detection errors have been found in MODIS-based (500 m spatial resolution) burned-area products in cropland areas, especially at the end of the fire season [8]. The main underlying limitation of current burned-area products derived from coarse-resolution satellite data is the underestimation of burned area in regions with small and highly fragmented fires [9][10][11][12], resulting in high omission errors [13]. Algorithms and products based on MODIS provide inadequate detection of small fires (<100 ha) [14]; however, the average area of farmland parcels in China is 1140 m² [15]. Hence, coarse spatial resolution data are likely to result in high omission errors in this region [16]. Small fire patches must thus be detected using higher spatial-resolution data to reduce such omissions and better estimate the true burned areas [12]. A previous study highlighted the benefits of Landsat-derived burned-area products, especially in terms of spatial detail [17]. It has been shown that near-infrared (NIR) and short-wave infrared (SWIR) values decrease after burning [18,19], and these two spectral bands are considered to be the preferred bands for burned-area detection [13,20]. An important factor for the successful application of Landsat data in burned-area detection is the ability of their multispectral characteristics to cover the most important spectral bands for burned-area mapping, i.e., NIR and SWIR [21,22]. Although numerous studies [23][24][25] have demonstrated the applicability of Landsat data to burned-area detection, there are few specific studies of its application in cropland burning detection.
The spectral signature of burned vegetation varies as a function of factors including fire behavior, pre-fire surface properties, and the time elapsed since burning [26]. Thus, spectrally, burned areas may be confused with other phenomena, making the mapping of burned areas using Landsat data a complex task. This problem can be overcome by exploring the temporal variation characteristics of the burning phenomenon using long time series and determining a suitable threshold to measure the outliers of this variability [18,21,27]. There are two key issues related to the successful use of this method. First, the time series must be able to accurately reflect the change characteristics of the burning phenomenon; second, and more crucially, an optimal threshold must be determined to properly define outliers in the time series. In regard to the former, a vegetation fire drastically and suddenly changes the characteristics of its local environment [18]. These changes include periodical [18] and seasonal [28] variations. Therefore, it is essential to design an algorithm that adequately expresses these variations. In regard to the latter, previous studies on burned-area detection have frequently employed a variety of techniques for threshold segmentation, including histogram-based [29], scene-statistics [30], expert-derived [31], and Otsu [32] methods. However, these methods establish thresholds between unburned and burned areas based on the difference between pre- and post-fire images, or on post-fire image statistics, which may result in suboptimal performance in the context of outlier threshold detection in time series. Concurrently, the portability of these methods across diverse satellite sensors, geographical regions, and ecosystems is limited [30,31], which jeopardizes the robustness of the algorithm [33] and its suitability for cropland burned-area mapping [34]. It is therefore necessary to develop an automated threshold optimization method, independent of ecosystems and other factors, specifically for time series. Automated threshold approaches are more readily adaptable to local conditions and enhance detection accuracy. They also make it possible to run the mapping algorithm for a series of satellite images that would otherwise require extensive time investment [35]. Moreover, an automatically calculated threshold minimizes human interference and increases objectivity in the process. This paper addresses two key issues simultaneously. Firstly, a more precise time series is constructed to reflect the temporal variation of cropland burning, encompassing changes before and after burning events and seasonal fluctuations. Secondly, an automated threshold optimization algorithm is developed to identify outliers in the time series with greater precision and determine the optimal burned-area detection threshold. The integration of these two approaches collectively enhances the accuracy of cropland burned-area detection.
In this study, we describe and validate a method for operational cropland burned-area mapping. The method uses Landsat reflectance time series together with automatic threshold optimization. It includes: (1) construction of preceding and seasonal time-series datasets based on near-infrared and short-wave infrared data, with calculation of outliers through a median-filtering algorithm; (2) determination of optimal thresholds using an iterative procedure; and (3) detection of cropland burned areas using the thresholds and outliers through logical operations.
Study Area
The North China Plain (NCP), which is known as "China's granary", provides 40% and 25% of China's wheat and corn production, respectively, although it occupies only 3.3% of the national area [36]. Geographically, the NCP stretches from the Yanshan Mountains in the north to the mainstream of the Huai River and the primary irrigation canal of northern Jiangsu in the south. It is bordered by the Taihang and Qinling Mountains in the west and lies adjacent to the Bohai and Yellow Seas in the east. It covers parts or all of Beijing, Tianjin, Hebei, Shandong, Henan, Jiangsu, and Anhui provinces. The NCP has a temperate continental monsoon climate, with an average annual temperature of 13 °C and an average annual precipitation of 710 mm. The deep and fertile soil, combined with appropriate temperature and precipitation, makes the NCP highly suitable for the cultivation of crops such as wheat, corn, and soybeans. The dominant grain planting systems in the NCP include winter wheat in the northern region and a winter wheat-summer maize rotation in the southern region. Winter wheat is sown in early or mid-October and harvested in June of the following year, while summer corn is sown in mid- or late June and harvested in September of the same year. Consequently, June is the cropland residue burning season in the northern part of the NCP, while both June and October are burning seasons in the southern part of the NCP. For this study, a typical region (highlighted by a black square in Figure 1), covering an area of 30 × 30 km, was selected in Henan Province. Crop burning was practiced twice a year in this region until the recent implementation of a local no-burning policy.
Workflow of Burned-Area Mapping
We employed a combination of the long time-series outlier method and a threshold-optimization algorithm to detect burned areas. The workflow involved the following steps: (1) Construction of preceding and seasonal time series: time-series data were organized to capture the temporal variation in reflectance. This involved constructing the preceding time series, which reflects changes before and after burning events, and the seasonal time series, which captures seasonal variations. (2) Calculation of time-series outliers: outliers in the time series were determined by calculating two reference values through the application of median filtering to each of the constructed time series. (3) Determination of the optimal threshold: an iterative procedure was implemented to determine the optimal threshold. This involved systematic iteration through various threshold combinations to identify the threshold value yielding the best results.
(4) Extraction of cropland burned pixels: the logical relationship between the outliers and the optimal threshold was used to extract cropland burned pixels.
All these operations were carried out on the Google Earth Engine (GEE) platform, which provides abundant capacity for data storage and processing. The use of GEE facilitates the efficient handling of large datasets and enables the execution of complex algorithms.
Time-Series Construction
The temporal behavior of harvesting or burning of cropland is characterized by a sudden decline and gradual recovery of spectral values, as well as periodic variations from year to year [16]. To capture these patterns, we constructed two time series: a preceding time series and a seasonal time series. The former is suited to pixels exhibiting long-term and non-seasonal trends, whereas the latter is better suited to pixels exhibiting highly periodic and seasonal trends. After a fire occurrence, the reflectance in the NIR and SWIR bands, specifically B4 (TM 4), B5 (TM 5), and B7 (TM 7) of Landsat 5, generally shows a decline [27]. Additionally, Hawbaker et al. [22] found that burn probability is negatively correlated with the short-wave middle infrared (TM 5) band and positively correlated with the long-wave middle infrared (TM 7) band. On the basis of these findings, B4 and B5 were selected for cropland burned-area detection in this study.
To construct the preceding time series, the seven images closest to the target image were chosen. Goodwin and Collett [27] found that selecting the seven nearest images strikes a good balance between characterizing unburned reflectance variation and avoiding the inclusion of pixels affected by past burn events. Moreover, to ensure consistency, these seven images must be captured within the same year to eliminate potential cross-year influences on the sequential data.
For the construction of the seasonal time series, it is crucial to ensure that images are captured under comparable growing conditions. Therefore, a progressive search approach was adopted. The specific sampling strategy was as follows: (1) The solar azimuth angle and solar altitude angle of the target image were denoted as Azimuth0 and Zenith0, respectively. (2) Images were selected based on the criteria that the solar azimuth angle must fall within the ranges [Azimuth0 − 15°, Azimuth0 − 3°] or [Azimuth0 + 3°, Azimuth0 + 15°], while the solar altitude angle must be within the ranges [Zenith0 − 15°, Zenith0 − 3°] or [Zenith0 + 3°, Zenith0 + 15°]. Additionally, care was taken to avoid selecting images that were too similar to the target image in terms of seasonal characteristics. Therefore, the selected image and the target image must be within a 60-day interval to capture the desired seasonal spectral changes. This sampling strategy aims to include as many time-series datasets as possible to reflect the variation in surface reflectance.
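A hedged Python sketch of this scene-selection step follows; the scene records, field names, and the day-of-year reading of the "60-day interval" constraint are assumptions made for illustration, not details taken from the original processing chain.

```python
# Sketch of the seasonal-series scene selection described above; all inputs are
# illustrative. The 60-day constraint is interpreted here as a day-of-year
# window around the target acquisition date, which is only one possible reading.
from datetime import date

def angle_in_band(value: float, reference: float, lo: float = 3.0, hi: float = 15.0) -> bool:
    """True if the angular difference to the reference lies within [lo, hi] degrees."""
    return lo <= abs(value - reference) <= hi

def day_of_year_gap(d1: date, d2: date) -> int:
    """Smallest circular difference between two days of year (ignoring the year)."""
    gap = abs(d1.timetuple().tm_yday - d2.timetuple().tm_yday)
    return min(gap, 365 - gap)

def select_seasonal(scenes, target, max_gap_days=60):
    """Keep scenes acquired in a comparable season, with sun geometry similar to
    but not identical to the target image."""
    selected = []
    for s in scenes:
        if day_of_year_gap(s["date"], target["date"]) > max_gap_days:
            continue  # outside the seasonal window stated in the text
        if (angle_in_band(s["azimuth"], target["azimuth"])
                and angle_in_band(s["zenith"], target["zenith"])):
            selected.append(s)
    return selected

target = {"date": date(2006, 6, 10), "azimuth": 120.0, "zenith": 65.0}
scenes = [{"date": date(2005, 6, 26), "azimuth": 114.0, "zenith": 60.0},
          {"date": date(2004, 11, 1), "azimuth": 150.0, "zenith": 35.0}]
print(select_seasonal(scenes, target))  # only the June 2005 scene is kept
```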
Time-Series Outlier Detection
Land surface reflectance can be influenced by various factors other than burn characteristics. To mitigate the impact of these factors and focus on burn-related changes, median filtering was employed to calculate the time-series outliers.
First, median filtering was performed on the time-series data (including both the preceding time series and the seasonal time series), as shown in Equations (1) and (2).
Second, the difference between the time-series data and the median value was calculated for each time series. Results greater than zero were retained, and results less than zero were discarded. Then, the average of the retained positive values was calculated. Equations (3)-(5) describe these operations for the preceding time series.
Following the same method, the seasonal time series was processed using Equations (6)-(8).
The mean values of the above two time series were taken as interim reference values.
Finally, the smaller of the above two reference values was used as the final reference value (Equation (9)). The difference between the target image and the reference image was calculated to obtain the B45 outlier image (Equation (10)).
Following the same method, the B4 outlier image was calculated using Equations (11) and (12).
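A minimal Python sketch of this outlier computation is given below. The exact forms of Equations (1)-(13) are not reproduced in this extract, so the variable roles (per-pixel median, mean positive deviation, final reference, outlier) reflect one possible reading of the verbal description, not the authors' own code.

```python
# Sketch of the time-series outlier computation following the description above.
import numpy as np

def reference_value(series: np.ndarray) -> np.ndarray:
    """Per-pixel reference from a (time, rows, cols) reflectance stack."""
    med = np.median(series, axis=0)              # median filtering, Eqs. (1)-(2)
    dev = series - med                           # deviation from the median
    pos = np.where(dev > 0, dev, np.nan)         # keep only positive deviations
    return np.nanmean(pos, axis=0)               # mean of the retained values

def outlier_image(target: np.ndarray,
                  preceding: np.ndarray,
                  seasonal: np.ndarray) -> np.ndarray:
    """Outlier image for one band (e.g., B4 or the B4+B5 sum)."""
    ref = np.minimum(reference_value(preceding),  # smaller reference, Eq. (9)
                     reference_value(seasonal))
    return target - ref                           # difference with the target, Eq. (10)

rng = np.random.default_rng(0)
preceding = rng.uniform(0.2, 0.6, size=(7, 4, 4))   # toy preceding series
seasonal = rng.uniform(0.2, 0.6, size=(10, 4, 4))   # toy seasonal series
target = preceding[0] * 0.4                          # toy post-burn image
print(outlier_image(target, preceding, seasonal).shape)
```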
By performing median filtering and calculating time-series outliers, our analysis could focus on the substantial changes associated with burn characteristics while minimizing the influence of factors such as clouds and cloud shadows. The flow chart presented in Figure 2 illustrates the sequential steps involved in this process. The cropland burned area can be obtained by comparing the outlier image obtained above with an appropriate threshold; thus, it is crucial to obtain the optimal threshold, as shown in Equation (13).
To obtain the optimal threshold, we utilized an iterative algorithm, as shown in Figure 3. The specific steps of the threshold iterative algorithm were as follows: (1) We determined the threshold interval, i.e., the upper and lower bounds of the threshold, through analysis of the time series and calculation of the value range. Specifically, the reflectance change curves (maximum and minimum values) for all B4 and B45 images within the time series were plotted, and reasonable upper and lower thresholds were set based on the distribution of these values. Changes in the minimum and maximum values were used to determine the lower and upper thresholds, respectively. (2) We initialized six parameters for the iteration, including the threshold lower bound and threshold upper bound (for both B45 and B4), the kappa coefficient (kappa), and the confidence threshold corresponding to the dark gray area in Figure 3. Each iteration required four input parameters: threshold lower bound, threshold upper bound, kappa, and confidence threshold. The output included the updated threshold lower bound, threshold upper bound, kappa, and optimal threshold. (3) The kappa and optimal threshold values obtained in each iteration were used as input parameters for the next iteration. Except for the first and second iterations, the threshold interval in the input parameters of the (2i)th iteration corresponded to the threshold interval in the output parameters of the (2i − 2)th iteration. Similarly, the threshold interval in the input parameters of the (2i − 1)th iteration corresponded to the threshold interval in the output parameters of the (2i − 3)th iteration, where i is a natural number ≥ 2. (4) Kappa was recalculated in each iteration. If the calculated kappa was larger than the reference kappa, the corresponding threshold interval was retained; this kappa and the optimal threshold were also retained as input parameters for the next iteration. The range of the threshold interval was gradually reduced as long as the current kappa exceeded the reference kappa. The iteration continued until the range of the threshold interval was <0.0005. It is important to note that the entire iteration procedure was completed with an extra B4 or B45 threshold iteration (a simplified sketch of the interval-narrowing idea is given below).
Table 1 provides the specific iteration parameters used in the threshold-optimization program.
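The following simplified, single-band Python sketch illustrates the interval-narrowing idea behind the threshold iteration; the actual program interleaves B4 and B45 iterations and uses the confidence threshold and kappa bookkeeping of Table 1, which are not reproduced here, and all variable names and toy data are illustrative assumptions.

```python
# Simplified sketch of a kappa-maximising, interval-narrowing threshold search.
import numpy as np

def cohen_kappa(pred: np.ndarray, truth: np.ndarray) -> float:
    """Cohen's kappa for two boolean arrays of the same shape."""
    p, t = pred.ravel(), truth.ravel()
    po = np.mean(p == t)                                       # observed agreement
    pe = np.mean(p) * np.mean(t) + np.mean(~p) * np.mean(~t)   # chance agreement
    return (po - pe) / (1.0 - pe) if pe < 1.0 else 0.0

def optimise_threshold(outliers, truth, lo, hi, width=5e-4, n_grid=11):
    """Shrink [lo, hi] around the kappa-maximising threshold until it is narrow."""
    best_t, best_k = lo, -1.0
    while hi - lo >= width:
        grid = np.linspace(lo, hi, n_grid)
        scores = [cohen_kappa(outliers < t, truth) for t in grid]  # burned if below t
        i = int(np.argmax(scores))
        if scores[i] > best_k:
            best_k, best_t = scores[i], float(grid[i])
        lo = float(grid[max(i - 1, 0)])                # narrow the interval around
        hi = float(grid[min(i + 1, n_grid - 1)])       # the best grid point
    return best_t, best_k

rng = np.random.default_rng(1)
outl = rng.normal(0.0, 0.1, 500)            # toy outlier values at labelled samples
truth = outl < -0.08                        # toy "burned" reference labels
print(optimise_threshold(outl, truth, lo=-0.3, hi=0.0))
```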
Time-Series Outliers
For this study, the target image was acquired on 10 June 2006. To construct the preceding time series, images from adjacent dates with cloud cover <50% were selected. Specifically, the preceding time series consisted of images acquired on 7 April, 26 June, 12 July, 1 November, and 19 December. This preceding time-series period was 257 days. Additionally, a total of 107 images that met the conditions for constructing the seasonal time series were selected.
Figure 4 shows the spectral variation characteristics of the three utilized bands (TM4, TM5, and TM7) before and after crop-straw burning in 2006. Through visual interpretation of the satellite imagery, we found that this area exhibited post-burning characteristics on 1 November 2006.
The reflectance values of TM4 (B4) and the sum of the reflectance values of TM4 and TM5 (B45) exhibited substantial changes before and after 1 November 2006. The B4 values showed a substantial decrease from 0.55 to 0.21, while B45 decreased from 1.18 to 0.61. However, with the implementation of new crop cultivation, these two values increased, reaching 0.34 and 0.69, respectively. In contrast, the reflectance of TM7 (B7) showed irregular changes owing to the burning and sowing of new crops. Therefore, using B7 or an index incorporating B7 to detect cropland burning would lead to a high error. Hence, we selected B4 and B45 as the spectral quantities to detect burned areas, rather than relying on B7.
Optimal Threshold
Threshold Interval Determination
To obtain the optimal threshold, we employed an iterative procedure, in which it is crucial to set reasonable initial values for the threshold interval. The choice of the threshold interval for B4 and B45 not only affects the number of iterations required but also impacts the accuracy of the optimal threshold result. To determine the threshold interval, statistical analysis of the reflectance values of B4 and B45 from all 107 images was conducted (as depicted in Figure 5). This analysis revealed the range of maximum and minimum reflectance values for both bands.
For B4, we observed that the maximum reflectance values exhibited a wide range, fluctuating between 0.3 and 0.9. However, the majority of these values were <0.6. Meanwhile, the minimum reflectance values ranged from 0.03 to 0.25, with most being <0.15. Based on this analysis, a reasonable threshold interval for B4 was set as [0.1, 0.6]. For B45, the maximum reflectance values fluctuated between 0.6 and 1.5, with a concentration of values between 0.6 and 1.0. Meanwhile, the minimum reflectance values ranged from 0.0 to 0.3, with most being <0.25. Therefore, a reasonable threshold interval for B45 was determined as [0.2, 1.0].
Sampling Points for Threshold Optimization
To optimize the thresholds, it is crucial to obtain a representative sample of burned pixels. Herein, a total of 800 points were selected for this purpose. The selection process involved two main steps: threshold optimization and detection accuracy evaluation.
For threshold optimization, two sets of 200 random points were chosen to determine the optimal thresholds for two specific dates, namely 10 June 2006 and 8 June 2011. These dates were selected to represent different time periods and enable the evaluation of threshold performance across different years.
For detection accuracy evaluation, another two sets of 200 random points were selected to assess the accuracy of the optimal threshold for 10 June 2006 and 8 June 2011, respectively. These points were used to verify the performance of the threshold in correctly identifying burned pixels. In both steps, the selected sampling points were visually interpreted to determine whether they corresponded to burned or unburned pixels.
Optimal Threshold Iteration
In Section 3.2.1, the threshold interval for the B4 reflectance value was defined as [0.1, 0.6], while that for the B45 reflectance value was set as [0.2, 1.0]. It is important to note that the threshold interval for B4 was smaller than that for B45. Additionally, the reflectance values of B4 were generally lower than those of B45. Considering these factors, the iterative process of threshold optimization proceeded by first conducting the iteration for the B4 threshold interval, as indicated in the flow chart in Figure 3. This sequence was implemented to address the potential uncertainties associated with the higher reflectance values of B45 when combined with the lower values of B4.
The parameters and results of the threshold values of the iterative process for the target image on 10 June 2006 are shown in Table 2. The number of threshold iterations was small, thus avoiding unnecessary calculation. In addition, the B4 threshold interval converged slowly, while the B45 threshold interval converged quickly. Crucially, the optimal threshold calculated in the previous iteration was used as the confidence threshold, and the kappa was used as the parameter for narrowing the threshold interval. Consequently, the threshold interval converged quickly. This iterative program showed high convergence speed and processing efficiency.
To verify the effectiveness of the iterative procedure, cropland burning in the study area on 8 June 2011 was also detected. The threshold iteration process and results of the optimal threshold calculation for this date are shown in Table 3. Kappa did not change in the iteration process and reached a very high value in the first iteration, but the threshold confidence changed and increased with iteration. Unlike for 10 June 2006, the iterative procedure quickly determined the optimal threshold for this date. This may be related to the different burning characteristics of the two days. In the selected image range, the burned area on 8 June 2011 was relatively small and clustered. This facilitated the iterative procedure in determining the change characteristics of the band values after burning more quickly, further providing the lower and upper bounds of the bands and the optimal threshold.
Optimal Threshold Validation
The cropland burned area was obtained by applying Equation (13). To evaluate the accuracy of the results, the sample points reserved for accuracy analysis of the optimal threshold were utilized. Table 4 presents the assessment metrics, including the overall accuracy, kappa, commission error, and omission error. The results demonstrated that the verification accuracy corresponding to the optimal threshold was high. The overall accuracy reached 93% and the kappa was 0.84. Although the commission error for burned pixels was relatively high, it was still <15%. Based on these findings, we conclude that the optimal threshold obtained through iteration exhibits high confidence and can effectively differentiate between burned and unburned pixels. The accuracy evaluation results indicate that the threshold-based approach employed herein is reliable for detecting and distinguishing burned areas in croplands.
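For reference, the sketch below shows how the reported validation metrics can be computed from a 2 × 2 confusion matrix of burned/unburned sample points; the numbers used are toy values, not the study's actual counts.

```python
# Sketch of the validation metrics reported in Table 4 (toy confusion matrix).
import numpy as np

def burned_area_metrics(cm: np.ndarray) -> dict:
    """cm[i, j]: reference class i (0=unburned, 1=burned) mapped to class j."""
    total = cm.sum()
    oa = np.trace(cm) / total                                  # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    commission = cm[0, 1] / cm[:, 1].sum()                     # unburned mapped as burned
    omission = cm[1, 0] / cm[1, :].sum()                       # burned mapped as unburned
    return {"overall_accuracy": oa, "kappa": kappa,
            "commission_error": commission, "omission_error": omission}

print(burned_area_metrics(np.array([[150, 12], [8, 130]])))
```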
Intercomparison of Burned Area and MCD64A1
Figure 6 displays the burned-area maps for the construction date of 10 June 2006 and the validation date of 8 June 2011. To assess the accuracy of cropland burned-area detection, an additional round of uniform sampling was conducted, resulting in a total of 400 sampling points. A comparison between the detection results of this study and those of MCD64A1 is presented in Table 5. Noticeable commission and omission errors occurred in the latter owing to its lower spatial resolution; MCD64A1 misclassified a substantial number of burned pixels as unburned, leading to a high omission error. In terms of overall accuracy and kappa, the algorithm used herein outperformed MCD64A1 in accurately identifying burned areas. Notably, the higher burned-area overall accuracy achieved by our algorithm was particularly advantageous in distinguishing burned pixels from unburned pixels.
Compared with the results of our algorithm, MCD64A1 detected more burned areas, especially on 8 June 2011. Owing to the limitation in spatial resolution of the input sensor of the MCD64A1 product, some mixed pixels (consisting of burned and unburned pixels) may be classified as burned pixels. The algorithm used herein showed strong detection capabilities for small burned areas. The results on 10 June 2006 demonstrate that many small burned patches were detected herein, but were not detected by MCD64A1. Meanwhile, compared with MCD64A1, the burned areas detected herein had finer boundaries. These findings underscore the superiority of the algorithm employed herein when compared with MCD64A1, highlighting its potential for more accurate and reliable detection of burned areas in croplands.
Advantages of the Algorithm
Combining time-series change detection and threshold optimization has proved to be an effective means of detecting cropland burned areas. Through accuracy verification and comparative validation against MCD64A1, we consider that our algorithm is suitable for the detection of cropland burned areas. The detection algorithm uses Landsat data, taking advantage of its preferable spatial resolution and spectral information, and overcoming the limitations of MODIS data, whose accuracy is limited by its low spatial resolution [14]. Furthermore, traditional detection algorithms (whether contextual algorithms based on relative or absolute thresholds, or detection algorithms based on spectral indices) cannot overcome the influence of the spatial heterogeneity of large areas. Our algorithm uses two different sampling strategies to construct time series; this overcomes the influence of spatial heterogeneity and reduces the probability of misjudgment caused by seasonal or other factors. The overall accuracy of our algorithm reached 93.25%, which is better than current general burned-area products. Therefore, this algorithm is suitable for farmland burned-area detection.
The reasons for the high accuracy of our algorithm can be summarized as follows.
(1) Studies have shown that reflectance values drop most evidently in the NIR and SWIR bands after burning [37]. Herein, good results were achieved using the NIR and SWIR bands of Landsat imagery for burning detection [27]. We have shown that using the B4 and B45 reflectance values for burned-area detection gives better results than using B4 and B7 values when calculating time-series outliers. Therefore, selecting the NIR and SWIR1 bands of Landsat images as two important parameters for outlier calculation ensures detection accuracy. (2) The seasonal time series adopted herein fully considers the seasonal repetitive characteristics of crop-straw burning, reduces the possibility of low-reflectance pixels caused by non-burning factors being misclassified as burned pixels, and ensures a low commission error for unburned pixels. (3) The Landsat 5 surface reflectance product available on the GEE platform was used herein. This product uses the CFMASK algorithm to detect clouds, achieving good results [38], which greatly reduces the impact of clouds and cloud shadows on fire detection. The accuracy of the product is high, with reflectance values accurate to 0.001, which makes the optimal threshold more accurate and improves the detection accuracy.
In addition, the automation of threshold setting is an important problem in threshold-based methods. Herein, an iterative program was used to realize automatic optimization of the threshold. This avoids time-consuming manual adjustment and also improves the accuracy of cropland burned-area detection. Therefore, this automatic threshold optimization could provide a foundation for future applications of other remote sensing or spectral band data to extract cropland burned areas in larger regions.
Limitations of the Algorithm
Although good performance was achieved in this study, there are some shortcomings that require further improvement. First, in the construction of the preceding time series, the selection of appropriate reference values can make the calculated outliers more conducive to distinguishing burned and unburned areas, and the size of the outliers further affects threshold optimization. Figure 7 shows the reflectance values and their medians for the preceding time series. It can be seen that the reflectance values of B4 and B45 on four dates (red circles in Figure 7) are abnormal because cloud cover in the images for these days exceeded 50%. A large number of clouds and cloud shadows can result in unusually high (e.g., 17 November 2006) or unusually low (e.g., 3 December 2006) spectral values, respectively. If there are abundant pixels in the time-series data with abnormally low reflectance caused by cloud shadows or other non-burning factors, the median reflectance of this time series will be low, and the outliers calculated for actual burned pixels will be high, causing burned pixels to be misjudged as unburned pixels. Consequently, if the median reflectance value of time-series data under such a sampling strategy is selected as the reference value, the probability of misjudgment will increase. The removal of cloud-contaminated images when constructing the preceding time series resulted in inadequate representation of the median values and the consequential outliers. This limitation of Landsat is offset by the advantage of MODIS, whose higher temporal resolution (i.e., daily repetition) can ensure the accuracy of the constructed preceding time series and better reflect the annual variation characteristics of cropland burning.
Second, when analyzing the misclassification of burned pixels, we found that a considerable proportion of water pixels were misclassified as burned pixels. Channel TM5 shows different spectral characteristics for recent and aged fire scars, and recent scars in this channel were easily mistaken for water bodies [39]. The pixel_qa band in the surface reflectance data of the GEE platform has a low detection accuracy for water. Because water strongly absorbs in the NIR and SWIR bands, its reflectance value is very low and the outliers calculated from these bands are also low, resulting in misclassification as burned pixels. To further explore the reasons for the commission errors of water bodies, 232 water-body sampling points (Figure 8) were collected, among which 228 water pixels were mistakenly detected as burned pixels.
Figure 9 shows the B4 and B45 reflectance values (Figure 9a) and the B4 outlier and B45 outlier values (Figure 9b) of water-body sampling points and burned pixels. The B4 and B45 reflectance values of the water-body sampling points misclassified as burned pixels were lower than those of true burned pixels. Similarly, the B4 outlier and B45 outlier values of the water-body sampling points misclassified as burned pixels were lower than those of true burned pixels. The water-body sampling points were more dispersed than the burned pixels, both in terms of their reflectance values and their outliers. In general, the outliers of the water-body sampling points and burned pixels were more dispersed than the corresponding reflectance values. However, the water-body areas were small and difficult to detect when visually interpreting the image, so this algorithm does not further detect water bodies.
Future Directions and Recommendations
The use of time-series analysis to characterize temporal trends in reflectance facilitated the automation of our algorithm by improving the discrimination of fire-related changes from seasonal and non-seasonal influences on vegetation. When constructing the preceding time series, it is necessary to collect seven images adjacent to the detection date, and the cloud cover of these images is required to meet a given standard. However, the number of available images is often insufficient owing to the relatively low temporal resolution of Landsat data. Furthermore, in contrast to wildfires in rangeland and forests, crop-straw burning changes more frequently within a year, and fires in cropland are commonly of short duration [16]. For example, crop-straw burning occurred twice in the year studied herein; vegetation recovery is usually quite quick after a fire, and the temporal gaps usually result in a high omission error. Consequently, on the one hand, it is essential that denser time series are gathered for cropland burned-area detection, but on the other hand, it is difficult to construct such time series using Landsat data alone. This problem can be addressed in two ways: (1) further experiments should be carried out to determine the characteristics of the annual spectral variation of cropland burning, and a more appropriate number of images should be selected to construct the preceding time series; (2) imagery with higher temporal resolution should be chosen, or the synergy of multiple data sources should be used to obtain denser time series.
To improve the accuracy of this algorithm further, it could be used in combination with other burned-area detection algorithms. For instance, one reported approach combines an improved spectral index with the exclusion of false positives using the SWIR and thermal infrared bands of Landsat 8 for fire detection [40]. It is known that the temperature of water bodies is lower than that of land after burning [27]. Hence, in future studies, the thermal infrared band could be used to remove false burned pixels. Likewise, the contextual fire-point detection algorithm could be used to exclude most false detection points located in water bodies and residential areas to improve detection accuracy. Moreover, the JRC Monthly Water History dataset provided by the GEE platform contains accurate maps of the location and temporal distribution of surface water. If this dataset were used for mask processing, the detection accuracy could also be improved in the future.
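A hedged sketch of the suggested post-processing step follows; here the water mask is simply assumed to be a boolean array co-registered with the burned-area map (for example, one derived from the JRC dataset mentioned above), which is an assumption for illustration.

```python
# Sketch of masking out permanent-water false alarms from a burned-area map.
import numpy as np

def remove_water_false_alarms(burned: np.ndarray, water_mask: np.ndarray) -> np.ndarray:
    """Keep only burned pixels that are not flagged as water."""
    return np.logical_and(burned, np.logical_not(water_mask))

burned = np.array([[True, True], [False, True]])
water = np.array([[False, True], [False, False]])
print(remove_water_false_alarms(burned, water))  # the water pixel is removed
```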
In addition to the aforementioned issues, it is important to address another concern. This paper exclusively focused on evaluating the algorithm's effectiveness in a representative farmland area in the NCP region of China, where farmland burning is common. However, for a more comprehensive assessment, it would be essential to include diverse experimental sites covering various crop types. Future studies could incorporate additional test areas spanning different crop types to facilitate a more comprehensive verification of the algorithm.
Conclusions
Existing burned-area detection methods based on band values and spectral indices are limited by their subjectivity and low efficiency in determining outliers and thresholds. In this paper, we proposed an automatic algorithm combined with a threshold-optimization method based on two time series, utilizing the GEE platform. We took an area of Henan Province on the NCP as our study area to examine the applicability of the proposed method. In contrast to previous coarse-resolution burned-area products, the burned-area map was derived from all available Landsat images, and its commission and omission errors were 18.9% and 7.53%, respectively. This high detection accuracy is attributed to the use of two time series that together fully consider the annual and seasonal variations in cropland burning. Meanwhile, the automatic threshold-optimization algorithm improves the detection efficiency and more accurately expresses the time-series outliers of burning characteristics. In summary, this study could provide several routes for the future application of other remote-sensing or spectral-band data to extract cropland burned areas in larger regions.
Figure 1 .
Figure 1. The study area location. The left figure depicts the boundaries of China; the right figure represents the North China Plain (NCP) boundary and the provinces included. The black area in the right figure represents the algorithm evaluation area selected in this paper.
Figure 2 .
Figure 2. The workflow of time-series outlier detection. The gray panels represent raw data, the other colored panels represent intermediate steps in the outlier calculation, and the red panel represents the final outlier.
Figure 3 .
Figure 3. A flow chart of the optimal threshold iterative program.
Figure 4 .
Figure 4. The reflectance change of the pixel (33.26° N, 114.5584° E) in the preceding time series of 2006.
Figure 8 .
Figure 8. Water-body sampling points and their misclassification as burned areas.
Figure 9 .
Figure 9. (a) B4 and B45 reflectance values, and (b) B4 outlier and B45 outlier values of water-body sampling points and burned pixels.
Table 1 .
The parameters of the threshold-optimization iteration program. Note that the parameters outline the details of the iteration; those in standard font indicate an initial value of the program, those underlined indicate an intermediate output of the program, and those double underlined indicate the optimal thresholds calculated by the program.
Table 2 .
The iterative process and results of threshold value calculation for the target image on 10 June 2006. Note that values in standard font indicate an initial value of the program, those underlined indicate an intermediate output of the program, those double underlined indicate the optimal thresholds calculated by the program, and those in italics indicate the lower and upper threshold bounds given by the iteration.
Table 3 .
The iterative process and results of threshold value calculation for the target image on 8 June 2011. Note that values in standard font indicate an initial value of the program, those underlined indicate an intermediate output of the program, those double underlined indicate the optimal thresholds calculated by the program, and those in italics indicate the lower and upper threshold bounds given by the iteration.
Table 4 .
The validation results for the accuracy of the optimal threshold with different sampling methods.
Table 5 .
The validation results of the burned area detected herein and that of MCD64A1. | 9,791.6 | 2024-07-18T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Adaptive resilient control for cyber-physical systems against unknown injection attacks in sensor networks
An adaptive resilient control scheme is developed for a class of cyber-physical systems (CPSs) with strict-feedback nonlinear dynamics in the presence of stealthy false data injection attacks in sensor networks. As the sensors are attacked by ill-disposed hackers, the exact measured state information is unavailable for state feedback control. After theoretical derivation, the original issue of false data injection attacks is transformed into nonlinear uncertainty dynamics and an unknown control direction at the last step. At each step of the recursive backstepping control, extended state observers (ESOs) from active disturbance rejection control (ADRC) are investigated to approximate the lumped system uncertainties. In particular, a Nussbaum function is introduced at the last step of the adaptive control. All closed-loop signals are proved to be semi-globally uniformly ultimately bounded by Lyapunov theory. Finally, numerical simulations verify that the proposed control can afford favorable stabilization performance and counter false data injection attacks.
According to different attack targets, cyber-attacks can be classified into denial-of-service (DoS) attacks and deception attacks, which may degrade system performance and even cause instability. DoS attacks attempt to congest data transmission and block the availability of accurate information [16][17][18][19]. Deception attacks aim to undermine the integrity of transmitted information, such as replay attacks [20][21][22] and false data injection attacks [23,24]. Both measurement faults and adversarial attacks by hackers may contribute to unexpected time-varying injected data in the sensor network. Therefore, resilient control should be studied to defend against unknown false data injection attacks and achieve secure operation of CPSs.
To mitigate the effects generated by false data injection attacks, different schemes have been proposed for various theoretical CPS models, such as strict-feedback systems, multi-agent systems, and networked systems. In strict-feedback CPSs, a new type of coordinate transformation is proposed to convert the sensor attacks into multiple time-varying state feedback gains, and a novel Nussbaum function is developed in the adaptive backstepping control [25]. For nonlinear strict-feedback systems under false injection data and unknown nonlinearities, injection data compensators via neural networks are designed and an adaptive approximation recursive event-triggered control is investigated [26]. In time-varying multi-agent systems, random stealthy false data injection attacks on sensor and actuator channels are modeled as Bernoulli processes. By stochastic analysis and recursive linear matrix inequalities, the prescribed H∞ criteria are guaranteed and event-triggered security consensus is achieved [27]. Considering multi-agent systems with false data injection attacks and noises, graph theory and Kalman filtering are adopted to construct distributed state estimators. Then, mean square consensus is studied to accomplish the security mechanism [28]. In networked systems, for a linear stochastic discrete-time form, a Kalman filter-based detection is designed to compensate for the feedback/forward channel delays and a predictive control scheme is formulated [29]. For networked switched T-S fuzzy systems, delay system transformation and the average dwell time technique are both utilized to develop an event-triggering mechanism and an adaptive controller [30].
In practical CPSs, automatic voltage control or load frequency control of a power system has been investigated in the presence of false data injection attacks. For automatic voltage control, the optimal stealthy attack strategy is modeled as a partially observable Markov decision process. Then, a Q-learning algorithm with nearest sequence memory is developed to learn and imitate the attack online. Finally, kernel density estimation is conducted to detect the bad data and offset the disruptive attack impacts [31]. In smart grids, Petri nets are introduced to model the cyber-attacks, and event-triggered control is proposed to suppress the voltage fluctuation in the power system [32]. In automatic generation control, which keeps the grid frequency fixed at a nominal value, an optimal attack is modeled and analyzed to guide the protection of sensor data links, and efficient algorithms are provided to estimate and mitigate the attack impact [33]. A novel resilient load frequency control is also proposed, where an artificial neural network and a Luenberger observer are combined to detect anomalies online and eliminate the adverse attack effects [34].
Adversarial attackers covertly monitor the measured state information and launch hostile deception attacks or false injection attacks that depend on the state variables, which makes the exact measured state variables unavailable. As only the corrupted system states are available for state feedback control design, the corrupted system dynamics are first deduced. After theoretical derivation, the injected false attack information in the original CPSs generates nonlinear system uncertainties and an unknown control direction in the corrupted system dynamics, which is the motivation for this manuscript.
In the celebrated active disturbance rejection control (ADRC) framework, extended state observers (ESOs) can estimate and compensate for total disturbances such as system nonlinearities, uncertainties, and external disturbances, to improve performance and robustness. Inspired by the above analysis, ESOs are introduced to implement the recursive backstepping control for strict-feedback CPSs in the presence of false injection attacks in sensor networks. Specifically, an ESO is formulated to estimate the nonlinear system uncertainty term induced by false injection attacks at each step. At the last step, an extra ESO is explored owing to the unknown control direction, while a Nussbaum function is exploited to achieve resilient control.
The innovation and novelty of the paper can be summarized as follows: (i) the false injection attacks in the original strict-feedback CPSs are ingeniously modeled as nonlinear system uncertainties and an unknown control direction in the corrupted system dynamics for the first time; (ii) in the resilient backstepping control, ESOs are formulated to compensate for the nonlinear uncertainties induced by false injection attacks at each step, while an extra ESO and a Nussbaum function are combined to cope with the issue of unknown control direction at the last step.
The paper is organized as follows. Section 2 elaborates the problem formulation and preliminaries. An adaptive recursive control is constructed in Section 3. Section 4 provides the stability analysis of the closed-loop system. Simulation examples are included in Section 5 to illustrate the effectiveness of the proposed control schemes. Finally, Section 6 states concluding remarks.
Problem formulation and preliminaries
Consider the strict-feedback CPSs under false data injection attacks in the sensor network, described by the strict-feedback dynamics (1), where i = 2, 3, ..., n − 1, k = 1, 2, ..., n, x_i(t) = [x_1(t), x_2(t), ..., x_i(t)]^T ∈ R^i is the system state vector, u(t) ∈ R is the control law to be specified later, x_k^c(t) ∈ R is the available corrupted system state, and g_k(x_k(t), t) : R^k × R → R is the unknown false data injection attack term in the kth sensor. The C^1 nonlinear function f_k(x_k(t)) : R^k → R is assumed to be unknown, and g_k is a known positive system parameter. Assumption 1 [25]: The false data injection attack g_k(x_k(t), t) can be parameterized as g_k(x_k(t), t) = k(t) x_k(t), where k(t) is an unknown time-varying gain. Assumption 2 [35,36]: The unknown gain k(t), its first derivative k̇(t), and its second derivative k̈(t) are bounded by unknown positive constants. Assumption 3 [25]: The unknown gain k(t) satisfies 1 + k(t) ≠ 0; subsequently, a general assumption is made that 1 + k(t) > 0.
The real system state, which is unavailable owing to the false data injection attack, can be denoted as x_k(t) = j(t) x_k^c(t) (Eq. (2)), where j(t) = 1/(1 + k(t)). From Assumptions 2 and 3, the term j(t), its first derivative j̇(t), and its second derivative j̈(t) are bounded by unknown positive constants. Assumption 4 [37]: The nonlinear real-valued continuous function f_k(·) and its first derivative ḟ_k(·) are available and bounded. Definition 1 [38]: A continuous function N(s) is a Nussbaum function if lim sup_{s→∞} (1/s) ∫_0^s N(τ) dτ = +∞ and lim inf_{s→∞} (1/s) ∫_0^s N(τ) dτ = −∞. Lemma 1 [39]: Let V_N(·) and v(·) be two smooth functions on [0, t_f) with V_N(t) ≥ 0 for all t ∈ [0, t_f), let N(s) be a Nussbaum-type function, and let h_0 be a nonzero constant. If inequality (4) holds, with l_0 a proper constant and l_1 > 0, then V_N(t), v(t), and ∫_0^t (h_0 N(v(s)) − 1) v̇(s) ds are bounded on [0, t_f).
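Since the specific Nussbaum function N(s) = s^2 cos(s) is adopted later in the design, a small numerical illustration of the oscillating running mean in Definition 1 may help; this is only a visual check of the property, not part of the controller, and the discretization settings are arbitrary.

```python
# Numerical illustration of the Nussbaum property for N(s) = s^2 * cos(s):
# the running mean (1/s) * integral_0^s N(tau) dtau swings to ever larger
# positive and negative values as s grows.
import numpy as np

s = np.linspace(1e-3, 60.0, 200_000)
N = s**2 * np.cos(s)
running_mean = np.cumsum(N) * (s[1] - s[0]) / s   # (1/s) * cumulative integral

print("max of running mean:", running_mean.max())  # large and positive
print("min of running mean:", running_mean.min())  # large and negative
```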
The adaptive resilient control problem tolerating sensor attacks is to design an adaptive control strategy such that the strict-feedback CPSs (1) can be stabilized and all the signals in the resulting closed-loop system remain bounded.
Notations. ℓ(·) denotes the set of continuously differentiable functions, and ‖·‖ denotes the Euclidean norm on R^n.
Adaptive recursive control
Considering the false data injection attack model (2), the strict-feedback CPSs (1) can be rewritten accordingly. The resilient control problem of the CPSs (1) in the presence of false data injection attacks can thus be converted into the stabilization problem of (1) subject to lumped system uncertainties for i = 1, ..., n-1 and an unknown control direction (1/j) g_n, since the attack term j(t) and the nonlinear functions f_i(x_i) are unknown a priori.
In this section, backstepping and ADRC are combined to implement the recursive control using the corrupted state information. At each step of the recursive design, the ADRC scheme is introduced and an extended state observer (ESO) is designed to approximate the lumped system uncertainty. At the last step, a Nussbaum function is introduced to tackle the unknown control direction.
The coordinate transformation is defined as in (6), where z_i(t) is the tracking error and u^f_i(t) is the filtered signal of the smooth virtual control u_i(t) passed through a first-order filter, where s is the filter gain with 0 < s < min{2/(1 + g_i), 1}. The filter error is defined as in (8). To simplify the notation, the argument (t) is omitted in the sequel.
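The explicit filter equation did not survive extraction. As a rough illustration only, the sketch below integrates a conventional dynamic-surface-style low-pass filter, s·u̇^f = u - u^f, over a sampled virtual-control signal; the function name and this particular placement of the gain s are assumptions, not the paper's exact filter.

```python
import numpy as np

def first_order_filter(u, s, dt, uf0=None):
    """Filter a sampled virtual-control signal u through s * duf/dt = u - uf.

    This is a hypothetical realization of the first-order filter; the paper's
    exact parameterization of the filter gain s may differ.
    """
    u = np.asarray(u, dtype=float)
    uf = np.empty_like(u)
    uf[0] = u[0] if uf0 is None else uf0
    for k in range(1, len(u)):
        # forward-Euler step of the filter ODE
        uf[k] = uf[k - 1] + dt * (u[k - 1] - uf[k - 1]) / s
    return uf
```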
Step 1: The dynamics of z_1 are given by (9). An ESO (10) is constructed to estimate the system uncertainty term, where x̂^c_1 and f̂_1 are the estimates of the corrupted system state x^c_1 and the lumped uncertainty f_1, w_11(·), w_12(·) ∈ '(R, R) are pertinently chosen functions, and e is a small positive constant.
The notations x̃^c_1 = x^c_1 - x̂^c_1 and f̃_1 = f_1 - f̂_1 denote the estimation errors of the system state and the lumped uncertainty. From Assumptions 2-4, there is a positive constant f^d_1M such that the first derivative of the lumped uncertainty satisfies |ḟ_1| ≤ f^d_1M. Similar conditions hold for the remaining n - 1 subsystems.
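Example 1 later instantiates the observers (10) and (15) with the linear choice w_11(s) = w_12(s) = s, so a discrete-time sketch of one such linear ESO is given below. The 1/e and 1/e² gain scaling, and the split of the dynamics into a known part plus a lumped uncertainty, are assumptions in the spirit of standard high-gain ESO designs rather than the paper's exact equations.

```python
import numpy as np

def simulate_linear_eso(t, xc, known_input, e=0.05, w1=lambda s: s, w2=lambda s: s):
    """Run a linear extended state observer along a measured trajectory.

    t           : array of sample times
    xc          : measured (corrupted) state x^c_i at those times
    known_input : known part of the subsystem dynamics, e.g. g_i * x^c_{i+1}
    e           : small observer constant ('e' in the paper)
    Returns estimates (xc_hat, f_hat) of the state and the lumped uncertainty.
    """
    xc = np.asarray(xc, dtype=float)
    xc_hat = np.zeros_like(xc)
    f_hat = np.zeros_like(xc)
    xc_hat[0] = xc[0]
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        err = xc[k] - xc_hat[k]                      # innovation
        xc_hat[k + 1] = xc_hat[k] + dt * (known_input[k] + f_hat[k] + w1(err) / e)
        f_hat[k + 1] = f_hat[k] + dt * (w2(err) / e ** 2)
    return xc_hat, f_hat
```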
To stabilize the z_1-dynamics, the virtual control u_1 is defined accordingly, and Eq. (9) then yields the resulting error dynamics. Step i (2 ≤ i ≤ n - 1): The dynamics of z_i involve the lumped system uncertainty f_i. The ESO (15) is constructed, where the functions w_i1(·), w_i2(·) ∈ '(R, R), and x̂^c_i, f̂_i are the estimates of x^c_i and f_i. To stabilize the z_i-dynamics, the i-th virtual control u_i is defined with k_i the positive controller gain, and Eq. (14) can then be deduced accordingly. Define the scaled estimation error vector of the system state and lumped uncertainty as g_i = [g_i1, g_i2]^T ∈ R^2, i = 1, 2, ..., n - 1, where g_i1 and g_i2 are the scaled estimation errors of the state and the uncertainty, respectively. The g_i-dynamics satisfy the following Assumption 5.
Assumption 5: There exist positive definite, continuously differentiable functions V_i(g_i), where r_i1, r_i2, r_i3, r_i4 and #_i are positive constants.
Step n: The dynamics of x^c_n involve the lumped system uncertainty f_n, defined as (1/j) f_n(j x^c_n) - (j̇/j) x^c_n. Two ESOs are proposed to approximate x^c_n and f_n, where i_{i,j}, i, j = 1, 2, are the states of the ESOs. Define the estimation error vector as p = [p_1, p_2]^T with p_1 = x^c_n - (i_{1,1} + (1/j) i_{2,1}) and p_2 = f_n - (i_{1,2} + (1/j) i_{2,2}). The p-dynamics can be expressed in vector form with a constant system matrix A_0. As long as |ḟ_n| ≤ f^d_nM and the matrix A_0 is Hurwitz, p converges asymptotically to a neighborhood of the origin of size f^d_nM / λ_min(A_0). An estimate of the p-dynamics can then be defined, from which the n-th corrupted system state and the lumped system uncertainty are reconstructed. The dynamics of z_n follow, where U is the reconstructed control input, U ≜ g_n u + i_{2,2}.
Owing to the false data injection attack, the control direction 1/j is unknown. A Nussbaum function is therefore utilized to construct the control input U, where k_n is the positive controller gain and N(·) is a Nussbaum-type function, chosen here as N(s) = s² cos(s).
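Only the Nussbaum function N(s) = s² cos(s) is recoverable from the text; the update law for its argument is not. The sketch below therefore pairs it with a commonly used construction, U = N(v)·a_bar with v̇ = z_n·a_bar, purely to illustrate how such a gain is driven. The variable names and the compensation term are hypothetical, not the paper's control law.

```python
import numpy as np

def nussbaum(s):
    """Nussbaum-type function used in the paper: N(s) = s^2 * cos(s)."""
    return s ** 2 * np.cos(s)

def nussbaum_control_step(z_n, v, k_n, comp, dt):
    """One integration step of an assumed Nussbaum-gain control law.

    z_n  : last tracking error
    v    : current argument of the Nussbaum function (adaptation state)
    k_n  : positive controller gain
    comp : compensation term, e.g. ESO estimates and filter terms (assumed)
    """
    a_bar = k_n * z_n + comp          # stabilizing term before the unknown direction
    U = nussbaum(v) * a_bar           # reconstructed control input
    v_next = v + dt * z_n * a_bar     # Nussbaum argument update
    return U, v_next
```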
Stability analysis
The main results of the paper are summarized in the following theorem.
Proof. First, the convergence of the extended state observers (10) and (15) in the first n - 1 subsystems is discussed. By Assumption 5, the derivative of the Lyapunov function V_i(g_i) along the trajectory (19) can be evaluated; considering Assumption 5 and recalling the definitions of g_i1 and g_i2, it follows that the estimation errors converge uniformly in t ∈ [t_0, ∞) as e → 0. From Eq. (33), Theorem 1(ii) holds. Next, the boundedness of the filter error (8) is discussed. The Lyapunov function candidate is chosen as V_e = (1/2) Σ_{i=1}^{n-1} e_i², and its derivative is evaluated under the general assumption that |u̇_i| ≤ u^d_iM. Finally, the boundedness of the extended state observers (22) in the n-th subsystem and of the tracking error (6) is discussed.
The proof is completed. □
Simulation examples
Two simulation examples are studied to illustrate the effectiveness of the proposed control scheme in resisting false data injection in the full state measurements.
Example 1: A nonlinear second-order system is considered. The initial conditions are set to x_1(0) = 1.5, x_2(0) = -1. Linear ESOs are used in Eqs. (10) and (15), with w_11(s) = w_12(s) = s. The controller, ESO and filter gains are chosen accordingly. Comparative simulations are carried out between the proposed resilient control and the approximation-based adaptive control of Ref. [26]. Figures 1 and 2 depict the stabilization results and the state trajectories of x_1, x_2, which converge to the origin after a transient response.
As plotted in Fig. 3, the estimate x̂^c_1 tracks the corrupted state x^c_1 within about 2 s. The ESO states i_{i,j}, i, j = 1, 2, shown in Fig. 4 are smooth and converge to their corresponding steady-state values within about 2 s.
Compared with the approximation-based adaptive control of Ref. [26], the proposed resilient control provides better stabilization performance while avoiding the design of NN-based function approximators and an injection-data compensator.
Example 2: An IEEE 6-bus power system with three generators in Kron-reduced form is considered, where x_1, x_2 ∈ R^3 are the rotor angle vector and the rotor frequency vector of the generators, u denotes the equivalent power input, and the generator inertia and damping matrices are M_g = diag{0.125, 0.034, 0.016} ∈ R^{3×3} and D_g = diag{0.125, 0.068, 0.48} ∈ R^{3×3}; the remaining physical parameters are specified accordingly. The corrupted system state dynamics are partitioned in the same way. The initial conditions are x(0) = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]^T. In Example 2, linear ESOs similar to those of Example 1 are adopted, and the controller, ESO and filter gains are selected accordingly. The corrupted states and their estimates x̂^c_1, x̂^c_2 are plotted in Fig. 7. The ESO state vectors i_{i,j}, i, j = 1, 2, are plotted in Figs. 8 and 9.
The simulation results demonstrate that the proposed ESOs can estimate the lumped system uncertainty terms induced by the false injection attacks, and that the resilient control achieves satisfactory stabilization performance even though only the corrupted system states can be measured. The results of both the numerical Example 1 and the practical simulation of Example 2 validate the superiority of the proposed adaptive recursive control scheme, which stabilizes the strict-feedback CPSs while accommodating unknown injection attacks in the sensor networks. For a class of nonlinear strict-feedback cyber-physical systems, backstepping and ADRC have been combined to develop a nonlinear resilient control framework that circumvents malicious attacks through sensor networks. ESOs are constructed to compensate for the effects of the false injection data and the unknown nonlinear dynamics using the corrupted states. Semi-global uniform ultimate boundedness of all closed-loop signals is established by Lyapunov analysis. Finally, numerical examples are provided to illustrate the effectiveness of the proposed control. Further research may extend the approach to the resilient control of CPSs against DoS attacks and replay attacks. Data Availability Statement: The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. | 4,022.2 | 2021-03-01T00:00:00.000 | [
"Mathematics"
] |
Metrics for comparing neuronal tree shapes based on persistent homology
As more and more neuroanatomical data are made available through efforts such as NeuroMorpho.Org and FlyCircuit.org, the need to develop computational tools to facilitate automatic knowledge discovery from such large datasets becomes more urgent. One fundamental question is how best to compare neuron structures, for instance to organize and classify large collections of neurons. We aim to develop a flexible yet powerful framework to support comparison and classification of large collections of neuron structures efficiently. Specifically, we propose to use a topological persistence-based feature vectorization framework. Existing methods to vectorize a neuron (i.e., convert a neuron to a feature vector so as to support efficient comparison and/or searching) typically rely on statistics or summaries of morphometric information, such as the average or maximum local torque angle or partition asymmetry. These simple summaries have limited power in encoding global tree structures. Based on the concept of topological persistence recently developed in the field of computational topology, we vectorize each neuron structure into a simple yet informative summary. In particular, each type of information of interest can be represented as a descriptor function defined on the neuron tree, which is then mapped to a simple persistence-signature. Our framework can encode both local and global tree structure, as well as other information of interest (electrophysiological or dynamical measures), by considering multiple descriptor functions on the neuron. The resulting persistence-based signature is potentially more informative than simple statistical summaries (such as average/mean/max) of morphometric quantities. Indeed, we show that using a certain descriptor function gives a persistence-based signature containing strictly more information than the classical Sholl analysis. At the same time, our framework retains the efficiency associated with treating neurons as points in a simple Euclidean feature space, which is important for constructing efficient searching or indexing structures over them. We present preliminary experimental results to demonstrate the effectiveness of our persistence-based neuronal feature vectorization framework.
where ‖u − v‖_∞ = max{|u.x − v.x|, |u.y − v.y|} denotes the L_∞ distance between two points, and γ ranges over all bijections between D_1 and D_2. It is known that the bottleneck distance d_B(D_1, D_2) can be computed in O(m^1.5 log m) time, where m is the total number of points in D_1 ∪ D_2.
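To make the definition concrete, the following brute-force sketch evaluates the bottleneck distance for very small diagrams by augmenting each diagram with the diagonal projections of the other diagram's points and enumerating all bijections. Practical implementations use the O(m^1.5 log m) matching algorithm mentioned above rather than this factorial-time search; the function name is ours.

```python
import itertools
import numpy as np

def bottleneck_small(D1, D2):
    """Brute-force bottleneck distance between two small persistence diagrams.

    D1, D2 : lists of (birth, death) pairs.
    Each diagram is augmented with the diagonal projections of the other
    diagram's points, so a bijection can also send points to the diagonal.
    Exponential in the number of points: illustration only.
    """
    diag = lambda p: ((p[0] + p[1]) / 2.0, (p[0] + p[1]) / 2.0)  # nearest diagonal point
    A = list(D1) + [diag(p) for p in D2]
    B = list(D2) + [diag(p) for p in D1]
    linf = lambda u, v: max(abs(u[0] - v[0]), abs(u[1] - v[1]))
    best = np.inf
    for perm in itertools.permutations(range(len(B))):
        cost = max(linf(A[i], B[perm[i]]) for i in range(len(A)))
        best = min(best, cost)
    return best

# Example: two nearly identical one-point diagrams
print(bottleneck_small([(0.0, 1.0)], [(0.0, 1.1)]))  # -> 0.1 (approximately)
```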
Given two functions f, g : |T| → R, suppose that g is a perturbation of f with bounded distance in the L_∞-norm; that is, ‖f − g‖_∞ := max_{x ∈ |T|} |f(x) − g(x)| measures the amount of perturbation of g from f. The Stability Theorem [1] states that for a function f and its perturbation g, the bottleneck distance between their persistence diagram summaries is bounded from above by the size of the perturbation; that is, d_B(Dg f, Dg g) ≤ ‖f − g‖_∞. This result was later generalized to more general persistence modules, showing that the bottleneck distance between two persistence diagrams is bounded by the so-called interleaving distance between the persistence modules that generate them [2,3].
In our setting, given two neuron trees T_1 and T_2 with descriptor functions f_1 : |T_1| → R and f_2 : |T_2| → R, we cannot directly compare these two descriptor functions, since they are defined on different domains (T_1 and T_2, respectively). We instead use the so-called functional distortion distance [4], d_FD(f_1, f_2), to measure how different the functions f_1 and f_2 are. Intuitively, d_FD considers all pairs of mappings between T_1 and T_2 as a way to align them, say φ : |T_1| → |T_2| and ψ : |T_2| → |T_1|, aligning T_1 to T_2 (via φ) as well as T_2 to T_1 (via ψ). It then compares f_1 and f_2 composed with these maps, so that they are defined on a common domain. Each such pair of maps (alignment) (φ, ψ) gives a cost measuring how well f_1 and f_2 are aligned under these two maps, and d_FD(f_1, f_2) returns the minimum cost over all possible such alignments (pairs of maps). We refer the reader to [4] for the formal definition. It follows from the results of [4] that the bottleneck distance between the resulting persistence diagram summaries is upper-bounded in terms of d_FD(f_1, f_2).
Figure 1: T_1' is a noisy version of T_1: there can be local combinatorial changes, such as the subtrees B, C and D merging at slightly different heights, and there can also be spurious noisy branches. However, such changes do not perturb the tree metric much: in this specific example, the metric distortion is bounded by ε. As a result, the corresponding persistence diagram summaries are also close, with d_B(D, D') ≤ ε.
This stability result applies to any descriptor function. For example, suppose we consider the Euclidean distance functions f_1 and f_2 on the two trees in Figure 1. Then their persistence diagram summaries are close (at most ε apart), despite the noisy branches and the combinatorial changes in the tree structure from T_1 to T_1'. Indeed, it is not hard to establish maps φ : T_1 → T_1' and ψ : T_1' → T_1 and show that the cost of the Euclidean distance function incurred by them is at most ε, which thus upper-bounds d_FD(T_1, T_1') by ε as well.
As another example, if we use the geodesic distance to the root as the descriptor function, then by the results of [5], the bottleneck distance between the resulting persistence diagrams is stable with respect to changes in the input neuron trees as measured by the Gromov-Hausdorff distance between these trees. The Gromov-Hausdorff distance is a popular way to measure the degree of near-isometry between two metric spaces [6,7]. The Gromov-Hausdorff distance between the two neuron trees in Figure 1 is at most ε, implying that, using the geodesic distance as descriptor functions f : T_1 → R and f' : T_1' → R, we have d_B(Dg f, Dg f') ≤ ε as well.
In our framework, to improve computational efficiency, we vectorize the persistence diagrams described in Step 2, and the natural L_p-distance between the resulting vectors is sum-based, instead of max-based (as in the bottleneck distance). One can extend the bottleneck distance to the so-called degree-p Wasserstein distance d_{W,p}(D_1, D_2) between two persistence diagrams D_1 and D_2, which we introduce shortly in Eqn (2). The stability of the Wasserstein distance of persistence diagrams is not as well understood as in the bottleneck distance case (which is in fact the case when p = ∞), although there are results for some special cases [8].
Stability of the persistence feature vectors. We now discuss the stability of the feature vectorization step. Specifically, suppose we are given two persistence diagrams D_1 and D_2, with corresponding feature functions ρ_1 = ρ_{D_1}, ρ_2 = ρ_{D_2} : R → R induced from D_1 and D_2 as described in Section 2.2 of the main text.
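Section 2.2 of the main text is not reproduced in this excerpt, so the exact construction of ρ_D is unavailable here; the Gaussian bound invoked later in the proof suggests a vectorization in which each diagram point contributes a 1D Gaussian. The sketch below is only an assumed stand-in of that flavor, with hypothetical choices of center, weight and width, not the paper's definition.

```python
import numpy as np

def diagram_feature_function(D, grid, sigma=0.1):
    """Map a persistence diagram to a sampled 1D feature function.

    D     : list of (birth, death) pairs
    grid  : 1D numpy array of sample locations
    sigma : Gaussian width
    Each point contributes a Gaussian centered at its birth-death midpoint,
    weighted by its persistence (death - birth). This is an assumed stand-in
    for the rho_D of Section 2.2, which is not included in this excerpt.
    """
    rho = np.zeros_like(grid, dtype=float)
    for b, d in D:
        pers = d - b
        center = 0.5 * (b + d)
        rho += pers * np.exp(-((grid - center) ** 2) / (2.0 * sigma ** 2))
    return rho
```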
First, the degree-p Wasserstein distance between D_1 and D_2, for 1 ≤ p < ∞, is defined as d_{W,p}(D_1, D_2) = ( min_γ Σ_{u ∈ D_1 ∪ L} ‖u − γ(u)‖_∞^p )^{1/p}, (2) where γ ranges over all bijections between the augmented diagrams D_1 ∪ L and D_2 ∪ L, i.e., the persistence diagrams augmented with points in the diagonal L as before.
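Assuming the convention for Eqn (2) reconstructed above (L_∞ ground metric with diagonal augmentation), a small sketch of the degree-p Wasserstein distance using SciPy's Hungarian-algorithm solver is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(D1, D2, p=1):
    """Degree-p Wasserstein distance between two persistence diagrams.

    Each diagram is augmented with the diagonal projections of the other
    diagram's points, so unmatched points can be transported to the diagonal L.
    Matching augmented diagonal points to each other costs nothing.
    """
    diag = lambda q: ((q[0] + q[1]) / 2.0, (q[0] + q[1]) / 2.0)
    A = list(D1) + [diag(q) for q in D2]      # D1 augmented
    B = list(D2) + [diag(q) for q in D1]      # D2 augmented
    nA, nB = len(D1), len(D2)
    cost = np.zeros((len(A), len(B)))
    for i, u in enumerate(A):
        for j, v in enumerate(B):
            if i >= nA and j >= nB:
                cost[i, j] = 0.0              # diagonal-to-diagonal match
            else:
                cost[i, j] = max(abs(u[0] - v[0]), abs(u[1] - v[1])) ** p
    row, col = linear_sum_assignment(cost)    # optimal bijection (Hungarian algorithm)
    return float(cost[row, col].sum() ** (1.0 / p))

# Example: d_{W,1} between two one-point diagrams
print(wasserstein_p([(0.0, 1.0)], [(0.0, 1.1)], p=1))   # ~0.1
```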
Theorem 0.1: The L_1-distance between the feature functions ρ_1 and ρ_2 is stable with respect to the 1-Wasserstein distance between the diagrams D_1 and D_2 generating them. Let ∆ denote the largest persistence value of any point in D_1; that is, ∆ = max_{u ∈ D_1} |u.y − u.x|. We now prove this theorem. First, we need the following result bounding the distance between two 1-dimensional Gaussians [9]. | 1,909.6 | 2016-11-14T00:00:00.000 | [
"Computer Science"
] |
Identifying the Influence of the Polymer Matrix Type on the Structure Formation of Micro-Composites When They Are Filled With Copper Particles
Studies were carried out to establish the mechanisms of structure formation during crystallization of polymer composites based on polyethylene, polypropylene or polycarbonate filled with copper microparticles. The research was executed using a technique whose first stage consisted in the experimental determination of the crystallization exotherms of the composites, and whose second stage consisted in the theoretical analysis, based on the obtained exotherms, of the structure formation characteristics. A complex of studies on the determination of crystallization exotherms for the investigated microcomposites was carried out. The regularities of the influence of the cooling rate of the composites, the method of their production and the mass fraction of filler on the temperature level of the beginning and end of crystallization, the maximum value of the reduced heat flux, etc. were established. It is shown that, for the applied methods of obtaining the composites, an increase of the cooling rate causes a decrease of the indicated temperatures and heat flux. It is established that the value of the mass fraction of the filler has a less significant effect on the characteristics of the crystallization process. The regularities of structure formation of the polymer composites at the initial stage of crystallization were investigated with the involvement of data on the crystallization exotherms and nucleation equations. The presence of planar and three-dimensional mechanisms of structure formation at this stage has been established. It is shown that the ratio of these mechanisms is influenced by the type of polymer matrix and the method of obtaining the composites. For the second stage of crystallization, which occurs in the entire volume of the composite, the results of the experiments on the crystallization exotherms were analyzed on the basis of the Kolmogorov-Avrami equation. It is shown that the structure formation of polyethylene-based composites occurs by the three-dimensional mechanism, and that of composites based on polypropylene and polycarbonate by the mechanism of the stressed matrix.
Keywords: polymer composites, copper microparticles, mechanisms of structure formation, polyethylene, polypropylene, polycarbonate
The development of polymer composite materials with the necessary complex of properties involves conducting systematic researches on the choice of polymer matrix and type of filler, analysis of the patterns of structure formation of composites and more. Promising uses of polymer microcomposites are associated with the application of their highly heat conductive modifications. Thus, the latter can be used to replace metal elements in electric motors and generators for the manufacture of heat exchange surfaces for various purposes and so on. For example, the high efficiency of using these composite materials to create heat exchangers is associated with such characteristics as the required thermophysical and anti-corrosion properties, relatively low specific gravity, and so on.
Analysis of literature data and problem statement
The beginning of systematic studies of various aspects of the kinetics of non-isothermal crystallization of polymer micro- and nanocomposites filled with metals dates back to about 2000. Works [1][2][3][4][5][6][7][8][9][10][11][12][13][14] made a significant contribution to the development of the scientific and technical bases for the creation of these materials. Thus, the article [1] presents the results of studying the kinetics of isothermal crystallization of composites based on nylon 6 and aerosil of different series. The use of the Kolmogorov-Avrami mathematical model for obtaining the thermodynamic parameters of crystallization is proposed. It is shown that this mathematical model reliably describes the kinetics of crystallization for the pure polymer matrix and at low aerosil contents. In [2] it is found that the proposed model reliably describes the crystallization processes for monomers and metals; the limiting cases in which it is possible to use this model to describe the kinetics of isothermal crystallization of a thermoplastic polymer matrix are presented. The article [3] considers the results of studies of the kinetics of isothermal crystallization of a metal alloy and notes that the Kolmogorov-Avrami equation adequately describes the process of crystallization from the melt. Articles [1][2][3] are mainly of methodological importance for establishing the adequacy of the Kolmogorov-Avrami model and do not contain the results of extensive parametric studies. In [4], the influence of molecular weight on the crystallization temperature and the crystallite growth velocity is studied using commercial polyethylene glycol as an example. The article [5] presents the results of studies of the kinetics of non-isothermal crystallization of polypropylene reinforced with multilayer carbon nanotubes and short glass fibers. An increase of the crystallization velocity and a decrease of the crystallization half-time of polypropylene in the presence of micro- and nanofillers have been established. It is shown that the Kolmogorov-Avrami model reliably describes the behavior of polypropylene only if the degree of its crystallization does not exceed 70 %. Beyond this level, there is a significant discrepancy between the experimental results and theoretical calculations. Article [5], like the works above, is largely methodological in nature; in particular, it sets the limits of applicability of the Kolmogorov-Avrami equation for the studied composites. The results of studies of the kinetics of non-isothermal crystallization of pure polyvinyl alcohol and its nanocomposite based on nanocrystals of NaX zeolite are presented in [6]. Based on the results of studies of the crystallization kinetics, which rely on data from differential scanning calorimetry, it was shown that the NaX nanocrystals play the role of nucleation centers during crystallization. This was taken into account in the theoretical model, which significantly improved the mathematical description of the experimental results. That is, in [6] the understanding of the mechanisms of crystallization of polymer composites was developed; however, there are practically no studies of the influence of the determining factors on the main characteristics of this process. The work [7] is devoted to the study of the influence of the amount of reinforcement and the cooling rate on the kinetics of non-isothermal crystallization of nanocomposites based on polyamide 6. Two methods of nanocomposite preparation are used.
It is shown that the structure formation and thermophysical properties of such composites depend significantly on the method of preparation. Work [7], however, is limited to studies of polymer composites based on a single polymer matrix. In [8], the regularities of the influence of carbon nanotubes (CNTs) on the kinetics of non-isothermal crystallization of polymer composites are investigated using the isoconversion method with differential scanning calorimetry and the method of optical microscopy in polarized light. In this study, the range of fillers used is somewhat expanded, although it remains limited; namely, the peculiarities of crystallization of composites are considered for several variants of CNT functionalization. The article [9] is devoted to the analysis of the peculiarities of crystallization of polymer composites filled with copper nanoparticles at different mass fractions of filler (0.5; 1.0; 2.0 and 4.0) %. The work addresses a number of issues related to the crystallization process of the polymer composites under consideration. However, it contains virtually no studies of the influence of different determining factors on the structure characteristics of the composite materials (except for the mass fraction of filler). Much attention is paid to the comparative analysis of crystallization models in the presence and absence of CNT functionalization. The study [10] considered only a few combinations of polymer matrix and filler. In [11], the results of studies of the structure formation mechanisms during crystallization of polymer composites based on polycarbonate filled with aluminum microparticles or carbon nanotubes are presented. The study [12] is devoted to establishing the patterns of structure formation of polymer composites filled with carbon nanotubes; in particular, it presents a comparative analysis of the structure formation characteristics for different polymer matrices, namely polycarbonate and polyethylene. The peculiarities of the influence of the methods of obtaining polymer composites on the mechanisms of their structure formation are analyzed in studies [13,14], which consider composites based on polyethylene filled with aluminum and CNTs, respectively. In [11][12][13][14], detailed studies were performed of the influence of such factors as the mass fraction of filler and the cooling rate from the melt on the structure formation characteristics of the composites. However, all of them lack a comprehensive investigation covering many combinations of polymers and fillers, different methods of obtaining composites, and so on.
The available studies of the patterns of structure formation of polymer microcomposites therefore do not cover the wide range of issues whose solution is necessary for the development of such composites with given characteristics. In particular, further study is needed of the patterns of structure formation of polymer microcomposites for different combinations of polymer matrices and fillers, using different methods of obtaining these composites, and the like. For these conditions it is also important to study the influence of factors such as the cooling rate from the melt and the mass fraction of filler on the features of the structure formation of polymer microcomposites.
The aim and objectives of research
The aim of the research is to establish the regularities of structure formation of polymer microcomposites based on polyethylene, polypropylene and polycarbonate filled with copper particles, while varying the main defining parameters over a wide range. This will make it possible to obtain the information needed to control the properties of these materials by changing their structure when creating composites with predetermined characteristics for different areas of application.
To achieve this aim, the following objectives are addressed:
- to perform a complex of experimental studies to determine the crystallization exotherms of the composites during their cooling from the melt, in a relatively wide range of the mass fraction of filler and the cooling rate, for different methods of obtaining the composites;
- to determine the characteristics of the structure formation of the studied polymer microcomposites.
Materials and methods of research of structure formation mechanisms of polymeric microcomposites
The experimental-theoretical technique of establishing the mechanisms of structure formation of polymer composites, which included two stages, was used in the work. The first of them was to experimentally determine the crystallization exotherms of the composite when it was cooled from the melt at a given constant rate. At the second stage of the technique, the characteristics of the structure formation of composites were theoretically determined using the obtained experimental data.
With regard to the experimental preparation of polymer microcomposites, two methods were used in the work, based on mixing the components, respectively, in dry form and in the polymer melt.
1. Experimental methods for obtaining microcomposites
The first of the applied methods (method I) for obtaining microcomposites is based on mixing the components, which are in dry form, using a magnetic stirrer and an ultrasonic dispersant. A measuring beaker with a suitable granular polymer and filler microparticles is placed in a magnetic stirrer (Velp scientifica F2052). In the beaker inside the mixture of components are installed two anchors that rotate in a magnetic field around their axes. In addition, the glass contains the rod of the ultrasonic dispersant (UZDN-A150). Depending on the type of polymer, the mixing time (9-23 min) and the rotation velocity of the armatures in the magnetic field (29 r/s), as well as the power (30-45 W) and the operating time of the ultrasonic disperser (7-21 min) are set. The second method (method II) is based on mixing the components in the polymer melt using a disk extruder. The mixture of the composite material components in the mold of the extruder is compacted using a hydraulic press. Next, the mixture is heated in the mold to a temperature that exceeds the melting point (glass transition) of the polymer by 20-70 °C, depending on the type of polymer. The rotation of the metal piston, which gradually descends into the region of the polymer melt, provides mixing of the composite components. At a certain time, the composite material passes through a hole in the lower part of the mold.
The finishing operation of both methods is hot pressing of the resulting composition. The latter is carried out in a special installation, in which the composite is heated at normal pressure to a temperature that is 20 °C higher than the melting (glass transition) temperature of the polymer. After holding at the specified temperature, which lasts 15-20 minutes, the samples of the composite material are pressed while giving them the desired shape.
The copper microparticles used as a filler were obtained from copper sawdust by grinding it in a ball mill to form particles with a size of 0.5…1.0 μm. The geometric characteristics of the particles were determined by optical microscopy. The particles had spherical and flattened shapes.
2. Experimental determination of crystallization exotherms of polymer microcomposites
The experimental crystallization exotherms of a composite during its cooling from the melt at a given constant rate were constructed as follows. The sample placed in the cell was heated to a temperature exceeding the melting point of the polymer by 50 K, held at this temperature for 180 s, and then cooled at a fixed cooling rate (V_t = 0.5…20.0 K/min). The specific heat flow Q_s removed from the composite was determined in an atmosphere of dry nitrogen by differential scanning calorimetry using a Perkin-Elmer DSC-2 instrument with modified software from IFA GmbH (Ulm).
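Before the exotherms can be analyzed with the nucleation and Kolmogorov-Avrami relations of the next subsection, the recorded heat flow is normally converted into a relative degree of crystallinity by partial integration of the exotherm. The text does not spell this step out, so the sketch below is only the usual preprocessing convention, with hypothetical function and variable names.

```python
import numpy as np

def relative_crystallinity(T, Qs):
    """Convert a DSC crystallization exotherm to relative crystallinity alpha(T).

    T  : temperatures recorded during cooling (monotonically decreasing)
    Qs : specific heat flow released at those temperatures
    alpha(T) is the partial area of the exotherm divided by its total area,
    the usual preprocessing step before an Avrami-type analysis.
    """
    T = np.asarray(T, dtype=float)
    Qs = np.asarray(Qs, dtype=float)
    dT = np.abs(np.diff(T))
    increments = 0.5 * (Qs[:-1] + Qs[1:]) * dT         # trapezoidal slices
    partial = np.concatenate(([0.0], np.cumsum(increments)))
    return partial / partial[-1]
```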
3. Theoretical determination of the structure characteristics of polymer microcomposites
The obtained experimental data on the kinetics of crystallization of composites, as already mentioned, were the basis for the theoretical determination of the corresponding parameters of structure formation. Two stages of structure formation were considered -the initial stage of crystallization (nucleation stage) and the stage of crystallization in the whole volume of the composite. In the first of these situations, the main characteristics of crystal formation, such as the reduced nucleation parameter a m and the reduced transport barrier K m , were to be determined.
where Z is the nucleation energy; k is the Boltzmann constant; ΔH_m^o is the melting enthalpy; m is the dimensionless form parameter; T_N is the temperature of the beginning of crystallization; ΔE is the activation energy; and A is a numerical coefficient.
In addition, it was necessary to analyze the dimension of crystal formation associated with the parameter m.
The nucleation equation [15] was used to determine the values of a_m and K_m, where V_t is the cooling rate, T_M is the melt temperature corresponding to the maximum value of the specific heat flow, and ΔT is the temperature range of crystallization. This equation was solved by the least-squares method using the values of T_N, T_M and ΔT obtained from the experimental studies. In order to analyze the dimensionality of crystal formation, equation (2) was solved for m = 1 and m = 2.
Regarding the second stage of crystallization (crystallization in the entire volume of the composite), the experimental crystallization exotherms were considered under the assumption of two mechanisms of crystal formation, the first of which is associated with the crystallization of the polymer matrix itself (realized on polymer density fluctuations), and the second with crystallization in which the filler particles play the role of crystallization centers. The results of the experiments on the crystallization kinetics were analyzed according to the modified Kolmogorov-Avrami equation (3), where f is the relative share of the crystallization mechanism associated with the crystallization of the polymer matrix itself; K_n is the effective rate constant; n is the pseudo-parameter of form; α is the relative volume fraction of the crystalline phase; τ is the reduced time, τ = V_t·t; t is time; and the superscripts «'» and «"» refer to the first and second of these mechanisms, respectively.
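Equation (3) itself did not survive extraction; the parameter list is consistent with the usual two-population Avrami form α(τ) = f·[1 − exp(−K′_n·τ^n′)] + (1 − f)·[1 − exp(−K″_n·τ^n″)], which is assumed in the fitting sketch below. The initial guesses and bounds are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami_two_mech(tau, f, k1, n1, k2, n2):
    """Assumed two-mechanism Kolmogorov-Avrami model standing in for Eq. (3):
    the first term crystallizes on density fluctuations, the second on filler particles."""
    return f * (1.0 - np.exp(-k1 * tau ** n1)) + (1.0 - f) * (1.0 - np.exp(-k2 * tau ** n2))

def fit_avrami(tau, alpha):
    """Least-squares fit of the assumed Eq. (3) to a measured relative crystallinity curve."""
    p0 = [0.5, 1e-3, 3.0, 1e-3, 3.0]                   # initial guess: equal shares, n ~ 3
    bounds = ([0.0, 0.0, 0.5, 0.0, 0.5], [1.0, np.inf, 8.0, np.inf, 8.0])
    popt, _ = curve_fit(avrami_two_mech, tau, alpha, p0=p0, bounds=bounds)
    return dict(zip(["f", "K1", "n1", "K2", "n2"], popt))
```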
5. Research results of the structure formation patterns of polymer microcomposites filled with copper particles
1. Research results to determine the crystallization exotherms of polymer microcomposites
Experimental studies of the characteristics of the crystallization process of polymer microcomposites during their cooling from the melt were performed with the mass fraction ω of copper microparticles varied from 0.2 % to 4.0 % and the cooling rate varied from 0.5 K/min to 20.0 K/min (Fig. 1). Composites obtained by the two methods described above, based on mixing the components in dry form and in the polymer melt, respectively, were subjected to this study. Table 1 shows the corresponding results for the characteristics of the crystallization process. Research data on the influence of the mass fraction of filler ω on the parameters of the crystallization process of the polymer microcomposites are given in Table 2.
The following tables present data on the main characteristics of the crystallization process, such as the temperature of the beginning T_N and the end T_K of crystallization, the temperature interval of crystallization ΔT, and the maximum value of the specific heat flow Q_s^max.
Table 1. Characteristics of the crystallization process of polymer microcomposites based on polyethylene, polypropylene and polycarbonate filled with copper particles, for different methods of obtaining them, at ω = 4.0 %.
2. The results of theoretical studies to determine the characteristics of the structure formation of polymer microcomposites
Using the obtained experimental data on crystallization exotherms at the second stage of research, as already noted, the characteristics of structure formation for the two crystallization stages of polymer composites were theoretically determined.
As an example, Table 3 shows the corresponding data for the initial stage of crystallization (the stage of nucleation of individual structurally ordered subdomains) for V_t = 2.0 K/min. The presented results were obtained using equation (2) for two values of the form parameter m (m = 1 and m = 2). The value of R in Table 3 denotes the correlation coefficient between the experimental and calculated data. Fig. 4 illustrates the results for the values of the reduced nucleation parameter a_m for the studied polymer microcomposites. Table 4 presents the research results for the second stage of structure formation, the stage of crystallization in the entire volume of the composite, for ω = 4.0 %.
Table 3. Structure formation parameters at the initial stage of crystallization of polymer microcomposites based on polyethylene, polypropylene and polycarbonate filled with copper particles, for different methods of their production, at V_t = 2.0 K/min.
Table 4 presents the data on the relative fraction f of the structure formation mechanism associated with the crystallization of the polymer matrix itself, the value of the form parameter n, and the effective rate constant K_n for the two crystallization mechanisms. The value of χ² in Table 4 denotes the variance.
The data presented in Fig. 5 illustrate the values of the parameter of form n with respect to the investigated conditions.
The results given in Table 4 and in Fig. 5 were obtained on the basis of the modified Kolmogorov-Avrami equation (3).
6. Discussion of researches results of structure formation pattern of polymer microcomposites at their filling by copper particles
1. Discussion of research results to determine the crystallization exotherms of polymer microcomposites
According to the obtained data (Table 1), for all studied composites with different polymer matrices and for both methods of obtaining them, an increase in the cooling rate V_t causes a decrease in the temperatures of the beginning T_N and end T_K of the crystallization process, as well as in the temperature T_M corresponding to the maximum specific heat flow Q_s^max. The value of Q_s^max also decreases significantly. It is also noteworthy that, other things being equal, higher values of Q_s^max correspond to composites obtained by method II (Fig. 2).
According to the research results, the above characteristics of the crystallization process at the same values of V_t differ significantly for composites based on different polymer matrices (Fig. 3). The largest values of the temperatures T_N, T_K and T_M correspond to composites based on polycarbonate, somewhat smaller values to polypropylene, and the smallest to polyethylene. For example, at ω = 4.0 % and V_t = 20.0 K/min, the temperature of the beginning of crystallization is 364.8 K for composite materials with a polyethylene matrix and 457.9 K for those with a polycarbonate matrix; that is, the difference between these temperatures is 93.1 K. The difference in the temperature of the end of crystallization under these conditions is 90.2 K.
As for the value of the maximum heat flow Q_s^max, according to the obtained data it also depends significantly on the type of polymer matrix. At the same time, there is a tendency for the values of Q_s^max to decrease in the transition from the polyethylene matrix to the polypropylene matrix and then to the polycarbonate matrix. This trend is fully realized for polymer composites obtained by method II, and with some exceptions for materials obtained by method I. Namely, at V_t = 20.0 K/min the value of Q_s^max for the composite based on polypropylene exceeds the corresponding value for polyethylene (Fig. 2).
The research results also indicate that the type of polymer matrix has a significant effect on the nature of the crystallization exotherms of the corresponding polymer composites. As can be seen from Fig. 1, a, d, when a polyethylene matrix is used, the crystallization exotherms have only a unimodal peak over the investigated range of cooling rates V_t for composites obtained by both method I and method II. For composites based on polypropylene, with increasing V_t the unimodal peak on the curve Q_s = f(T) is transformed into a bimodal one (Fig. 1, b, e). However, the values of V_t at which this transformation is observed differ significantly for the different methods of obtaining the studied polymer composites. Namely, in the case of method I the bimodal peak appears at V_t = 5.0 K/min, and in the case of method II only at V_t = 20.0 K/min. As for the polymer composites based on polycarbonate, according to the experimental results the corresponding crystallization exotherms are characterized by a bimodal peak on the curve Q_s = f(T) over the entire range of cooling rates V_t. This character of the crystallization exotherms occurs for both methods of obtaining the polymer composite materials (Fig. 1, c, f).
Table 4. Structure formation parameters at the stage of crystallization in the volume of polymer microcomposites based on polyethylene, polypropylene and polycarbonate filled with copper particles, for different methods of obtaining them, at ω = 4.0 %.
According to the obtained data, the value of the mass fraction of filler ω has a less significant effect on the characteristics of the crystallization process than the cooling rate V_t of the melted composite. As can be seen from Table 2, an increase in ω from 0.2 to 4.0 % leads to relatively insignificant changes in the temperatures of the beginning and end of crystallization for all investigated polymer microcomposites, for both methods of their production. At the same time, there is a tendency for the specific heat flow Q_s^max to decrease. Its relative reduction is largest for polycarbonate, somewhat smaller for polypropylene and smallest for polyethylene.
It is noteworthy that the value of ω affects the nature of the crystallization exotherms differently for composites based on different polymer matrices. Thus, for composites based on a polyethylene matrix, the crystallization exotherms are characterized by a unimodal peak over the entire range of ω. When matrices made of polypropylene and polycarbonate are used, for microcomposites obtained by method I the unimodal peak on the crystallization exotherm changes to a bimodal one with increasing ω. For the composites obtained by method II, this change is observed for the polycarbonate-based composites but is absent for the polypropylene-based ones (Table 2).
2. Discussion of the results of theoretical research to determine the structure characteristics of polymer microcomposites
On the basis of the crystallization exotherms obtained in the experimental studies, the structure formation parameters were determined theoretically, as already noted. According to the data given in Table 3, at the first stage of crystallization, the nucleation stage, there are two mechanisms of structure formation: two-dimensional, planar (m = 1) and three-dimensional, volumetric (m = 2). This is evidenced by the values of the coefficients R_1 and R_2, which confirm a satisfactory correlation between the results of the experiments and the calculations.
The fact that under these conditions the values of R_2 are higher than R_1 indicates the advantage of the three-dimensional mechanism over the planar one. The only exception is the crystallization of polycarbonate-based microcomposites obtained by method II. As can be seen from Table 3, in this case the values of R_1 and R_2 are comparable; that is, under these conditions the fractions of the planar and volumetric mechanisms of structure formation are similar in magnitude.
Regarding the variation of the nucleation parameter a_m, according to the obtained data its values rise with increasing mass fraction of filler for the different polymer matrices and for both methods of obtaining the composites (Table 3, Fig. 4). The differences in the values of a_m for matrices made of polyethylene and polypropylene are relatively small. The values of a_1 for polycarbonate matrices significantly exceed the corresponding values for the other matrices, whereas the values of a_2, on the contrary, are significantly lower.
It should also be noted that there is a tendency to increase the values of the reduced nucleation parameter a m for composites obtained by method II in comparison with method I. This trend is somewhat violated in the case of a polyethylene matrix relative to the value of a 2 (Table 3).
The studies have shown that the values of the reduced transport barrier K_m decrease with increasing ω for composites based on the different polymer matrices. This indicates an increase in the restrictions on the transport of matrix segments across the lamella-crystal surface. In this case, the values of K_m as a whole are slightly higher for composites obtained by method I. Exceptions are the values of K_2 for the polymer composites based on polycarbonate.
Regarding the second stage of crystallization, which occurs throughout the volume of the composite material, two mechanisms of crystal formation were studied here, as mentioned above. The first mechanism takes place on fluctuations in the density of the polymer; in the second, the crystallization centers are the filler microparticles. As can be seen from Table 4, for the polyethylene-based composites the form parameter is n ≈ 3 for both mechanisms of structure formation, for all values of ω and for both methods of obtaining these composite materials. This means that both the crystallization on polymer density fluctuations and the crystallization associated with the copper microparticles occur by a three-dimensional mechanism.
These data also indicate that for the composites based on polypropylene and polycarbonate the form parameter n varies in the range 3.8…5.2. This indicates that the crystallization associated with both the polymer matrix and the filler proceeds by the stressed-matrix mechanism. It is noteworthy that this mechanism is slightly more pronounced for matrices made of polypropylene than for matrices made of polycarbonate (Table 4, Fig. 5).
The performed work concerns only the peculiarities of the structure formation of polymer microcomposites of a certain type. It does not contain studies on the relationship between structure formation and the properties of the resulting composite materials.
Further development of this study will be to establish the dependence of the properties of the studied composites on the characteristics of their structure. Of considerable interest will also be the analysis of the possibilities of controlling the properties of polymer composites by changing their structure.
Conclusions
1. Experimental data on the crystallization exotherms of polymer composites filled with copper microparticles were obtained. The crystallization characteristics were analyzed for various polymer matrices: polyethylene, polypropylene and polycarbonate. The experiments were performed over a relatively wide range of the mass fraction of filler (from 0.2 to 4.0 %) and the cooling rate of the composite materials (from 0.5 to 20.0 K/min). The investigated composites were obtained by methods based on mixing the components in dry form (method I) and in the polymer melt (method II). The influence of the cooling rate V_t of the composites from the melt, the mass fraction of filler ω and the method of obtaining the composites on the main characteristics of the crystallization process was revealed. In particular, it is shown that within the studied range of V_t the type of polymer matrix significantly affects the nature of the crystallization exotherms in terms of the presence of a unimodal or bimodal peak and its transformation when the value of V_t is changed. It was also found that the value of the mass fraction of filler ω has a less significant effect on the characteristics of the crystallization process of the composites than the cooling rate V_t.
2. Using the results of the experimental studies of the crystallization exotherms of the polymer composites, the regularities of their structure formation were established for two stages of crystallization: a) at the initial stage of crystallization (the nucleation stage), it is shown, in accordance with the nucleation equation, that for all the composites under consideration and for both methods of their production there are two mechanisms of structure formation, planar and three-dimensional, with some advantage of the latter; b) at the second stage of crystallization, which occurs throughout the volume of the composite, two mechanisms of crystal formation were investigated using the modified Kolmogorov-Avrami equation, the first of which takes place on fluctuations in polymer density, while in the second the filler particles serve as crystallization centers. It is shown that for polyethylene-based composites, for both methods of their production, these mechanisms are three-dimensional. It is also established that when polymer matrices made of polypropylene and polycarbonate are used, crystallization occurs by the stressed-matrix mechanism, both on the fluctuations of the polymer density and when it is initiated on the filler particles. | 7,469 | 2020-10-23T00:00:00.000 | [
"Materials Science"
] |
Optimizing Gear Performance by Alloy Modification of Carburizing Steels
Both the tooth root and tooth flank load carrying capacity are characteristic parameters that decisively influence gear size, as well as gearbox design. The principal requirements towards all modern gearboxes are to comply with the steadily-increasing power density and to simultaneously offer a high reliability of their components. With increasing gear size, the load stresses at greater material depth increase. Thus, the material and particularly the strength properties also at greater material depth gain more importance. The present paper initially gives an overview of the main failure modes of case carburized gears resulting from material fatigue. Furthermore, the underlying load and stress mechanisms, under particular contemplation of the gear size, will be discussed, as these considerations principally define the required material properties. Subsequently, the principles of newly developed, as well as modified alloy concepts for optimized gear steels with high load carrying capacity are presented. In the experimental work, the load carrying capacity of the tooth root and tooth flank was determined using a pulsator, as well as an FZG back-to-back test rig. The results demonstrate the suitability of these innovative alloy concepts.
Introduction
Gears and gearboxes are used for a wide range of applications. For example, high power wind turbines usually have a gearbox transforming the low speed rotor shaft rotation into the higher rotational speed required by the generator. Approximately 85 percent of today's windmills are equipped with a gearbox. Usually, such gearboxes are designed as one- or two-stage planetary transmissions. These gearboxes have been gradually increasing in size over recent years due to the up-scaling of individual turbine sizes. In combination with this performance growth, the economic and qualitative optimizations of the entire manufacturing chain are of high importance. The gears in wind turbines are sometimes exposed to extremely high loads at the gear flanks and in the tooth root of the gear teeth, for example during sudden changes of wind speed or hard stops. Many failures and breakdowns of wind turbines, accordingly, originated in the gearbox, leading to significant outages and replacement costs. The powertrain of a windmill accounts for approximately 25 percent of the total equipment cost. In the mining industry, gears and gearboxes can be found in a variety of different applications along the entire process chain, such as conveyor drives for extraction, gearboxes for mill drive systems in the processing stage, or gearboxes for the stacker reclaimer and special trucks for the transportation process. Most of the gears in these applications also have to transmit high torque, are often subjected to demanding operating conditions and have to achieve long service life. Consequently, large-sized gears can be found in many of these products. The general requirements for high performance gear components are a hard case providing adequate fatigue strength, as well as wear resistance, and a tough core preventing brittle failure under high impact loads [1]. Accordingly, various alloy concepts, thermo-mechanical and thermo-chemical treatments have been developed to achieve this property combination. Commonly, gears are therefore case carburized. The heat treatment process of case carburizing is complex, requiring a high level of technical knowledge, as well as a profound understanding of the material characteristics. Alloy concepts for medium- and large-sized gear applications significantly vary in different markets due to historical drivers (e.g., automotive, machine building, military), practical experiences, as well as the local preference for certain alloying elements (Table 1).
DIN EN 10084 and ISO 683-11 [3,4] specify the technical delivery conditions for carburizing steel grades. Besides the classification and designation of the steel grades, the production processes, requirements (e.g., hardenability ranges), as well as the testing and inspection procedures are also specified. In addition to these general standards, many end users have issued proprietary delivery specifications, which describe particular demands (e.g., austenite grain size) in more detail. This is a result of the many possible processing routes for the production of carburized components. Depending on the component requirements, different sequences of annealing, hardening and machining are pursued (Figure 1). For instance, when a high dimensional stability of the component is needed, pre-hardening is performed before and stress relieving after machining. It is hence essential to take the entire process chain into consideration when optimizing the material. For the design of large-scale gearboxes, steel grades are commonly selected according to the requirements specified in DIN 3990/ISO 6336, Part 5 [5,6]. Figure 2 indicates as an example the anticipated tooth root endurance strength of various steel alloys and heat treatment concepts. Within the strength fields, in general, three quality levels (ML, MQ, ME according to [5,6]) can be distinguished: grade ML stands for the minimum requirement; grade MQ represents requirements that can be met by experienced manufacturers at moderate cost; grade ME represents requirements that must be aimed at when higher allowable stresses are desirable (Figure 2). It is obvious that the highest strength values are achievable for case carburized gears of quality grade ME. The diagram relates an easily measurable property like, in this case, surface hardness to a complex system property such as the tooth root endurance strength. The fact that, for a given surface hardness, a rather wide range of tooth root endurance strength levels can be obtained suggests that alloy composition, microstructure and thermo-chemical treatment have an extremely high impact on the actual gear performance. Another system property of high importance for gear durability is the resistance to gear flank failures like pitting, micropitting, as well as tooth flank fracture. High contact pressure, the status of lubrication, material properties, microstructure and chemical composition influence these system properties. Furthermore, with respect to the flank load carrying capacity, case carburized materials of the highest quality level ME typically show the highest achievable strength values.
For both vehicle and industrial transmissions, further optimization of gear steel towards better performance under demanding conditions is necessary. This is amongst others motivated by reducing fuel consumption and emissions, as well as a higher load bearing capacity
at the surface, in the near surface case, as well as at greater depths below the surface.A secondary target is to design efficient alloying concepts taking the entire processing route into consideration including modified or innovative heat treatments.A fundamental way of dealing with these demands is to adjust the chemical composition of carburizing steels.In this respect, one can principally define two approaches.An economically-driven approach aims at achieving a defined performance spectrum with a cost-reduced alloy concept, whereas a performance-driven approach targets superior properties at equal or moderately increased cost.The current work considers both approaches focusing on modified molybdenum-based alloy concepts including niobium microalloying.Thereby, innovative heat treatment conditions have also been considered.The success of either strategy is verified by using standardized tooth root fatigue tests and back-to-back running tests allowing direct benchmarking against a database of many established gear steel grades.
Gear Fatigue Failure Modes and Failure Mechanism
The gear load carrying capacity is generally limited by different failure modes. Each failure mode is decisively influenced by the gear design, the gear material characteristics, the operating conditions and the gear lubricant performance. Nevertheless, each single failure mode is dominated by different physical parameters and subject to different failure mechanisms. A profound understanding of the underlying mechanism and of the relevant load and stress conditions, with particular consideration of the gear size, is an essential requirement in order to select an appropriate gear material with optimized material properties in the entire gear volume to provide a sufficiently high load carrying capacity. Gear failures can basically be divided into material fatigue-related failures and non-fatigue failure modes, the latter being primarily due to tribological problems in the lubricated contact, such as scuffing. A further differentiation of gear failures is possible based on the failure initiation site. Regarding the location on the gear, this can be either the gear flank or the tooth root area. Independently of that, the crack initiation site can be located at the surface or at greater material depth. All these aspects can result in different requirements on the material properties in different areas.
Figure 3 shows the main gear failure modes related to material fatigue, which are targeted for optimization in the present investigation. Pitting and tooth root breakage are the typical appearances of fatigue failure in gears. Both failure types are usually initiated at the surface or close to the surface and are characterized by a crack propagating further into the material. While the pitting load capacity is strongly influenced by the Hertzian contact stresses in the gear contact, the tooth root strength is related to bending stresses in the root fillet. Differences in the nature of the contact and bending stresses may result in different requirements regarding the material properties in relevant material areas.
Additionally, the failure mode of micropitting can negatively influence the gear performance. Micropitting is most often observed at the surface of the loaded gear flank, typically occurring under unfavorable lubricating conditions. Micropitting can also be considered as a fatigue failure, yet with a crack propagation limited to the near-surface zone. Consequently, micropitting is controlled by the material characteristics in the near-surface zone. Furthermore, the contact load at the flank surface also induces stresses deeper in the material. If these stresses exceed the prevailing local strength of the material, subsequent failure with crack initiation below the surface may arise. In the literature, such failure types are referred to as tooth interior fatigue fracture (TIFF), tooth flank fracture or sub-surface fatigue. As the load-induced stresses at greater material depths increase with increasing gear size, the strength properties of the material at a greater material depth consequently gain in importance for large gears.
The stress condition in a gear tooth is basically related to the tooth normal force acting in the gear contact, which in turn depends on the applied torque. This tooth normal force causes contact stresses at the gear flank and bending stresses especially in the root fillet, as schematically indicated in Figure 4. Further influences on the actual stress distribution arise from the gear geometry, the operating conditions and the manufacturing process (residual stress).
Basically, increasing the gear size allows transmitting a higher torque. However, load-induced stresses at greater material depth also become larger with increasing gear size, even if the maximum relevant stress value is comparable. Figure 5 exemplarily demonstrates the distribution of the relevant stresses over material depth for different gear sizes. It is obvious that with increasing gear size, expressed by the radius of curvature ρC for the gear flank or the gear module mn for the tooth root, respectively, an adjustment of the hardness profile becomes necessary. This is to keep the allowable stress at a higher level than the actual load-induced stress at any depth. Consequently, with increasing gear size, an increased case hardening depth (CHD) is required. The influence of case hardening depth on the pitting and bending strength of gears is shown in Figure 6.
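To give a feel for why the stressed depth grows with gear size even when the maximum contact pressure stays the same, the short sketch below evaluates the classical Hertzian line-contact relations, treating the flank contact as an equivalent cylinder of radius ρC; the chosen pressure, radii and elastic constants are illustrative assumptions, not values taken from the gears discussed here.

```python
import math

def hertz_line_contact(p0_mpa, rho_mm, e_star_mpa=115_000.0):
    """Hertzian line contact at fixed maximum pressure p0.
    Returns the contact half-width a, the approximate depth of the maximum
    shear stress (~0.78*a) and tau_max (~0.30*p0).
    e_star is the effective modulus 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2,
    roughly 115 GPa for steel on steel."""
    # From p0 = 2*F'/(pi*a) and a^2 = 4*F'*rho/(pi*E*) it follows a = 2*p0*rho/E*.
    a = 2.0 * p0_mpa * rho_mm / e_star_mpa       # contact half-width in mm
    z_tau_max = 0.78 * a                         # depth of the shear stress peak in mm
    tau_max = 0.30 * p0_mpa                      # peak shear stress in MPa
    return a, z_tau_max, tau_max

# same maximum pressure, two assumed equivalent flank curvature radii
for rho in (10.0, 40.0):                         # mm
    a, z, tau = hertz_line_contact(p0_mpa=1500.0, rho_mm=rho)
    print(f"rho_C = {rho:4.0f} mm: a = {a:.3f} mm, "
          f"depth of tau_max ≈ {z:.3f} mm, tau_max ≈ {tau:.0f} MPa")
```

At fixed maximum pressure, the contact half-width and hence the depth of the shear stress maximum scale linearly with the equivalent radius of curvature, which is the geometric reason why larger gears call for a larger case hardening depth.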
Because a different hardness profile also influences the residual stresses (compressive residual stresses are assumed over the complete case carburized layer), not only the material strength but also the equivalent stress curve is influenced by a different case hardening depth (Figure 7, left). Obviously, the ratio between local equivalent stress and local material strength is more critical for smaller CHD and, in this case, is most unfavorable at a depth close to the case-core interface. Consequently, CHD is not only an important influencing parameter for the pitting and bending strength of gears, but may also strongly contribute to minimizing the risk of crack initiation below the surface and thereby reducing the risk of failure due to tooth flank fracture (Figure 7, right). Furthermore, it is obvious that increasing the core strength of the gear material may also contribute to reducing the risk of a failure initiation at greater material depth.
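The competition between the local strength provided by the hardness profile and the load-induced subsurface stress can be pictured with a minimal stress-versus-local-strength comparison. The sketch below assumes a smooth tanh-shaped hardness-depth curve, a proportionality of roughly 0.4 MPa of permissible shear stress per HV (an assumption sometimes used in tooth flank fracture assessments, not a value from this paper) and the Hertzian line-contact stress from the previous sketch; compressive residual stresses in the case, which the paper assumes to be present, are deliberately not modelled.

```python
import math

def hv_profile(z_mm, hv_surf=700.0, hv_core=400.0, chd_mm=1.0):
    """Illustrative tanh-shaped hardness-depth curve (HV over depth in mm)."""
    return hv_core + (hv_surf - hv_core) * 0.5 * (
        1.0 - math.tanh((z_mm - chd_mm) / (0.35 * chd_mm)))

def hertz_shear(z_mm, p0_mpa=1000.0, a_mm=0.4):
    """Principal shear stress on the symmetry axis below a Hertzian line contact."""
    zeta = z_mm / a_mm
    sx = -p0_mpa * ((1.0 + 2.0 * zeta ** 2) / math.sqrt(1.0 + zeta ** 2) - 2.0 * zeta)
    sz = -p0_mpa / math.sqrt(1.0 + zeta ** 2)
    return abs(sz - sx) / 2.0

z0 = 0.8                          # evaluation depth in mm, well below the shear stress peak
for chd in (0.6, 1.2):            # small vs. generous case hardening depth
    strength = 0.4 * hv_profile(z0, chd_mm=chd)   # assumed 0.4 MPa per HV
    ratio = hertz_shear(z0) / strength
    print(f"CHD = {chd:.1f} mm: stress/strength at {z0} mm depth ≈ {ratio:.2f}")
```

With the assumed numbers, the ratio at 0.8 mm depth exceeds 1 for the small case hardening depth and stays below 1 for the larger one, which mirrors the statement above that the stress-to-strength ratio becomes critical at depth when the CHD is too small.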
Requirements on Material Properties for Large Gear Sizes
An increased case hardening depth required for large gear sizes is correlated with an increased carburizing time. Longer carburizing times will affect further material properties and result in special demands on the material characteristics for large-sized gears. Some major requirements for optimized materials with special regard to large gear applications are summarized in the following:
• Case hardening depth: adequate CHD is necessary to achieve the required fatigue strength at the case and core (for the effects, see Figures 6 and 7); the gear material has to be suitable for long heat treatment process times to achieve the high CHD required for large gears;
• Surface hardness: a minimum surface hardness of 660 HV or 58 HRC (Rockwell-C hardness) is required according to existing standards in order to achieve the allowable stress numbers for pitting and bending of quality levels MQ and ME; higher surface hardness values do not increase fatigue resistance, but make machinability more difficult; in contrast, wear resistance of the surface typically increases with increased surface hardness;
• Core tensile strength and toughness: increased core hardness is known to especially influence the tooth root bending strength; higher core toughness allows higher core hardness for optimized strength; furthermore, increased core strength and toughness are assumed to reduce the risk of tooth flank fracture damages; gear steels with improved hardenability are required to achieve the desired properties for large gears;
• Microstructure and grain size: fine acicular martensite in the case, as well as fine acicular martensite and bainite in the core, are required for optimized load carrying capacity; fine grain size, particularly ASTM 8 and finer (see the conversion sketch after this list), is known to positively impact gear flank and tooth root load carrying capacity; adequate alloying elements are required to ensure grain size stability and a fine microstructure even at long carburizing process times;
• Residual austenite: a certain amount of retained austenite in the case is, due to its ductility, assumed to be beneficial for the micropitting load capacity and may also contribute to an improved pitting strength; a higher amount of retained austenite may reduce case hardness and bending strength; up to 25% finely-dispersed retained austenite is allowable according to existing gear standards;
• Cleanness: non-metallic inclusions are known to act as local stress raisers; depending on the inclusion size and its chemical composition, the gear load carrying capacity, especially the risk of a crack initiation below the surface, may be diminished; as the highly stressed material volume increases with the gear size, the probability of critical inclusions located in critical material sections is increased; consequently, high demands on the cleanness of the gear material result, especially for large gears;
• Area reduction ratio, material homogeneity and intergranular oxidation depth: these are further parameters that gain special importance for large gears; requirements according to existing gear standards have to be fulfilled even for larger gear sizes; intergranular oxidation can act as a fatigue fracture initiation site and may reduce the fatigue strength of the tooth;
• Hardenability: improved hardenability of the gear material is a basic requirement to achieve several of the above described properties for large gears.
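For orientation, the ASTM grain size numbers quoted in the list above can be translated into an approximate mean grain diameter using the ASTM E112 definition (n = 2^(G-1) grains per square inch at 100× magnification); the short calculation below is only meant to express "ASTM 8 and finer" in micrometres.

```python
import math

def mean_grain_diameter_um(G):
    """Approximate mean grain diameter for ASTM grain size number G.
    ASTM E112: n = 2**(G-1) grains per square inch at 100x magnification."""
    grains_per_in2_at_1x = 2.0 ** (G - 1) * 100.0 ** 2   # rescale image area to 1x
    mean_area_mm2 = (25.4 ** 2) / grains_per_in2_at_1x   # mean grain area in mm^2
    return math.sqrt(mean_area_mm2) * 1000.0             # edge length in micrometres

for G in (5, 8, 10):
    print(f"ASTM {G:2d}  ->  ~{mean_grain_diameter_um(G):5.1f} µm")
```

ASTM 8 thus corresponds to a mean grain diameter of roughly 22 µm, and finer numbers to correspondingly smaller grains.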
Figure 8 demonstrates the influence of gear size on the tooth root bending strength. Basically, allowable stress numbers decrease with increasing gear size due to different size effects. Nevertheless, the results clearly prove that for large gear sizes, gear steel grades with better hardenability (17CrNiMo6, 17NiCrMo14) achieve significantly higher bending strength values compared to gear steel grades with lower hardenability (16MnCr5). The difference in gear strength depending on material hardenability increases with the gear size. Consequently, appropriate alloying elements achieving high material hardenability and ensuring adequate material characteristics are essential for high performance carburizing steels in order to meet the requirements of large-sized gears and to provide an adequate gear load carrying capacity.
As the performance reference for the current study, steel grade 18CrNiMo7-6 (1.6587) has been selected, since this grade is currently widely used for demanding gear applications in Europe (refer to Table 1 for alternative gear steel grades used in other geographical regions). The task was to modify the main alloying elements in a way to achieve either the same performance at lower alloy cost or better performance at similar alloy cost. The following approach is considered to be relevant in this respect:
• Improving hardenability;

A fundamental way to deal with these issues is to adjust the chemical composition of the carburizing steel. Accordingly, the chemical composition of carburizing steels can be further developed to achieve the above goals. Some previous developments of improved gear steels put the focus on high nickel additions and reduced molybdenum content (Figure 9). Although this approach provides an elevated core strength and generally high toughness, the hardness in the near-surface zone can be too low, as nickel is a very efficient austenite stabilizer. On the other hand, raising the carbon and molybdenum content, optionally in combination with microalloying elements, shifts the hardenability curve entirely upwards, thus providing a sufficient safety margin against local overloading in the critical area below the surface. This second approach may lead to lower toughness, especially when the nickel content is reduced. However, refining and homogenizing the martensitic microstructure (packet size) can regain toughness. It was shown on the example of 18CrNiMo7-6 that below an average martensite packet size of 20 µm, the impact toughness strongly increases [11]. Since the packet size strongly correlates with the prior austenite grain size [11,12], control and refinement of the latter across the entire processing chain is an appropriate means of improving toughness.
Controlling Grain Size in Carburizing Steels
Many studies have indicated that prior austenite grain size control in carburizing steel can be effectively achieved by using niobium microalloying in combination with other microalloys such as titanium, aluminum and nitrogen [13][14][15][16][17][18][19][20]. The developed concepts have been used to generally refine and homogenize the grain size under standard case carburizing conditions. Furthermore, it has been demonstrated that high temperature carburizing becomes possible without violating grain size restrictions, thus allowing a faster furnace throughput. This is particularly beneficial when a larger case depth is required, as in gears for trucks and heavy machinery. Additionally, a production concept for fine-grained carburizing steel has been developed based on an aluminum-free melt. This is to fully eliminate brittle inclusions deteriorating toughness and fatigue resistance in the steel.
The obstruction of grain coarsening is based on a pinning effect of precipitates on the austenite grain boundary. For efficient grain boundary pinning, a suitable size and distribution of precipitates is necessary, which again depends on the prior thermo-chemical treatment, as well as on the carburizing temperature. Above a certain limiting carburizing temperature, the precipitates coarsen or dissolve, and their pinning effect is lost. It appears to be most efficient to keep as much microalloy content as possible in solid solution during thermo-chemical processing, which can then precipitate as fine-sized and homogeneously-distributed particles during up-heating to the carburizing temperature. Niobium has the beneficial characteristic of low solubility in such steel, similar to titanium, providing temperature-stable precipitates. Yet, niobium has a lower affinity to nitrogen and, contrary to titanium, does not form coarse nitrides. Furthermore, its precipitation kinetics is slower, so that niobium remains longer in solution, forming finer and more dispersed precipitates. It was also found that mixed precipitates of Nb, Ti and N are more resistant against dissolution at very high austenitizing temperatures. Therefore, a microalloy combination of low Ti (sub-stoichiometric to N) and Nb in the range of 0.03-0.10% has proven to be most efficient.
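The low solubility of niobium in austenite can be quantified with a solubility product. The sketch below uses one frequently quoted relation for NbC in austenite, log10([Nb][C]) = 2.26 - 6770/T (contents in wt%, T in kelvin); both this relation and the assumed compositions are illustrative choices rather than data from the present alloys.

```python
import math

def nbc_dissolution_temperature_c(nb_wt, c_wt):
    """Temperature in °C above which NbC would be fully dissolved in austenite,
    using the solubility product log10([Nb][C]) = 2.26 - 6770/T (T in K)."""
    t_kelvin = 6770.0 / (2.26 - math.log10(nb_wt * c_wt))
    return t_kelvin - 273.15

for nb, c in ((0.03, 0.20), (0.05, 0.20), (0.10, 0.20)):
    t_diss = nbc_dissolution_temperature_c(nb, c)
    print(f"{nb:.2f}% Nb, {c:.2f}% C  ->  NbC stable up to ~{t_diss:4.0f} °C")
```

The resulting full-dissolution temperatures lie well above typical carburizing temperatures of 930 °C to 1050 °C, consistent with the grain boundary pinning by Nb-rich precipitates described above.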
Adding niobium in combination with titanium to the reference grade 18CrNiMo7-6 has a marked effect on the grain size distribution, as shown in Figure 10a. Not only is the grain size generally much finer, but the scattering range also becomes narrower. The microalloyed variant safely avoids prohibited grain sizes despite the high carburizing temperature (1030 °C) and the long holding time (25 h). Similarly good results of grain size stability have been obtained with modified variants of the 25MoCr4 and 20CrMo5 grades (Figure 10b). The Nb and Ti dual microalloyed 25MoCr4 variant reveals resistance to coarsening up to a carburizing temperature of 1050 °C, whereas the Nb-only microalloyed 20CrMo5 variant is stable up to 1000 °C. The latter alloy indicates that for very high carburizing temperatures, the addition of multiple microalloys indeed appears to increase the temperature stability of the pinning precipitates. However, at standard carburizing conditions below 1000 °C, the Nb-only microalloyed concept also exhibits a very fine austenite grain size with a rather narrow size distribution.
The martensite start temperature depends on the austenite grain size [21]. The smaller the austenite grain size, the lower is the martensite start temperature. Accordingly, in a mixed grain size structure, transformation locally occurs at different temperatures. This situation will lead to the generation of residual stresses due to the volume change when the microstructure transforms from austenite to martensite. The earlier formed martensite islands cannot plastically accommodate the transformational volume change of the later formed martensite islands. Hence, imbalanced elastic stresses cause a macroscopic distortion of the quenched component. It has been experimentally confirmed that a larger grain size scatter results in a larger scatter of distortion (Figure 11a) [22]. The distortion has to be corrected by straightening or hard machining. This is not only costly, but also reduces the thickness of the case layer when performing hard machining. Furthermore, residual stresses overlay the applied load stresses. Especially tensile residual stresses can cause premature failure, for instance under fatigue conditions.
Consequently, microalloying of case carburizing steel leading to reduced grain size scatter as demonstrated above is expected to lower quench distortions. This could indeed be verified for components manufactured from the modified variant of 25MoCr4 (320 ppm Nb, pm Ti, 160 ppm N) shown in Figure 11b. The material was continuously cast into bar. The bar was FP (ferrite-pearlite) annealed before cold extrusion and then again FP annealed. Carburization occurred at 980 °C for 195 min to a target case depth of 0.95 mm with a total furnace residence of 400 min. The components were then quenched in an oil bath (Isorapid 277) held at 60 °C. Subsequently, part distortion was characterized by roundness deviation measurements at five positions, as shown in Figure 11b. It is obvious that the microalloyed variant has a much lower roundness deviation compared to the standard alloy. At each measuring position, the deviation was reduced by approximately 50%, resulting in a similar reduction of straightening efforts. The cost savings achieved by such reduced straightening or hard machining efforts likely compensate the cost for the microalloys. If the available equipment allows high temperature carburizing, severe process time savings can be realized. For instance, for producing a target case depth of 1.5 mm, the total treatment cycle time can be reduced by 25 and 40 percent when the carburizing temperature is raised to 980 °C and 1030 °C, respectively, as compared to a standard carburizing temperature of 930 °C.
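The quoted cycle-time savings at higher carburizing temperature follow qualitatively from carbon diffusion: for a given case depth x ∝ √(Dt), the required diffusion time scales inversely with the diffusivity D = D0·exp(-Q/RT). The sketch below assumes an activation energy of about 147 kJ/mol for carbon diffusion in austenite; it refers only to the diffusion-controlled segment and therefore overestimates the savings compared to the 25 and 40 percent quoted for the complete furnace cycle, which also contains heating, equalization and quenching segments.

```python
import math

R = 8.314          # gas constant in J/(mol*K)
Q = 147_000.0      # assumed activation energy for C diffusion in austenite, J/mol

def relative_diffusion_time(t_celsius, t_ref_celsius=930.0):
    """Diffusion time needed for the same case depth, relative to the reference temperature."""
    t, t_ref = t_celsius + 273.15, t_ref_celsius + 273.15
    return math.exp(Q / R * (1.0 / t - 1.0 / t_ref))

for temp in (980.0, 1030.0):
    saving = 1.0 - relative_diffusion_time(temp)
    print(f"{temp:.0f} °C: diffusion segment shortened by ~{saving * 100:.0f}% vs. 930 °C")
```

The diffusion-only estimate predicts larger savings than the complete-cycle figures quoted above, as expected under the stated assumptions.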
Increasing Hardenability and Tempering Resistance
As outlined above, it is of high interest to avoid a steep hardness gradient in the transition zone from the case layer to the core material. Therefore, the hardenability of the alloy must be improved. Several alloying elements besides carbon contribute to hardenability, such as molybdenum, manganese, chromium and nickel, as well as boron microalloying. For reasons of alloy cost reduction, the use of higher manganese and chromium additions, eventually combined with boron microalloying, has been favored for many gear applications. However, such cost-reduced alloy concepts, although providing good hardenability, have limitations in terms of toughness and tempering resistance. Besides, the limitation of intergranular oxidation requires reduced Mn, Cr and also Si levels. At the other extreme, alloy producers have developed richly-alloyed steels for those applications where transmission failure causes high replacement and outage costs.
An example is 15NiMoCr10-4 (C: 0.15%, Si: 1.1%, Cr: 1%, Mo: 2% and Ni: 2.5%), which is used in high-end applications, e.g., in aerospace or Formula 1 gears. However, such steel requires special melting technology and is thus not widely available. Comparing this steel to another high-Ni steel (14NiCrMo13-4), the increase of the molybdenum content from 0.25% to 2.0% brings about a significant improvement of hardenability, surface hardness and also tempering resistance [23] (Figure 12a). The high tempering resistance of the material bears two important advantages. Firstly, it allows performing duplex treatments, i.e., the case-hardened surface is exposed to a second treatment, such as PVD coating or plasma nitriding (PN), for further increasing the surface hardness. These treatments are usually performed in a temperature window of 300 °C to 500 °C. It is thus a prerequisite that the hardness obtained in the underlying material after quenching from the carburizing temperature not be degraded by the second heat cycle. Secondly, many conventional case carburizing steel grades are restricted to a maximum operating temperature of 120 °C to 160 °C. A steel grade with high tempering resistance can be operated at higher temperatures without degrading. Elevated operating temperatures may occur, for instance, due to frictional heating when the transmission experiences lubrication problems.
Good tempering resistance in a typical gear steel base alloy can also be achieved with lower molybdenum additions, as indicated in Figure 12b. Already a Mo addition of 0.5% to 0.7% provides good resistance against softening for tempering parameters up to around 16. Resistance against softening under a tempering parameter of 16 means that a secondary treatment at a temperature of 450 °C for up to 10 h should be sustainable. This condition is typical for plasma nitriding.
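The "tempering parameter of around 16" can be read as a time-temperature equivalence. Assuming the widely used Hollomon-Jaffe formulation P = T·(C + log10 t)/1000 with T in kelvin, t in hours and the common constant C = 20 (the paper does not state which definition it uses, so this is an assumption), the quoted plasma nitriding condition indeed lands just below 16:

```python
import math

def hollomon_jaffe(t_celsius, t_hours, c=20.0):
    """Hollomon-Jaffe tempering parameter P = T*(C + log10 t)/1000, T in kelvin."""
    return (t_celsius + 273.15) * (c + math.log10(t_hours)) / 1000.0

print(f"450 °C / 10 h : P ≈ {hollomon_jaffe(450.0, 10.0):.1f}")   # plasma nitriding condition
print(f"180 °C /  2 h : P ≈ {hollomon_jaffe(180.0, 2.0):.1f}")    # typical low-temperature tempering
```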
Microalloying of Nb further enhances the tempering resistance, obviously as the result of a synergy effect with Mo. Molybdenum and niobium have to some extent similar metallurgical effects. Both exert strong solute drag on grain boundaries as well as dislocations [24] and also lower the activity of carbon [25]. These fundamental effects are noticed by delayed recovery or recrystallization, as well as a reduced rate of pearlite growth, thus increased hardenability. The solubility of the two elements in austenite is, however, very different. Molybdenum has a good solubility [26], whereas that of niobium is low [27]. Therefore, niobium precipitates as NbC particles at rather high temperatures. Manganese, chromium and particularly molybdenum increase the solubility of niobium in austenite [28]. Accordingly, more niobium will remain in solution after quenching from the austenitizing temperature, which is then available for fine precipitation during tempering treatment, acting against softening.
Modification and Testing of Carburizing Steels
Based on the individual and synergetic effects of alloying elements described before, the intended processing route and the desired property profile, two modified alloy concepts have been designed (Table 2) for a full-scale production trial including gear running tests. One of the developed alloy designs (Concept V1) can be considered as a modified 20MnCr5 grade. It targets higher performance than that of 18CrNiMo7-6 at similar alloy cost. The content of carbon is increased for higher maximum hardness, while Mo and Ni are added for increased hardenability and tempering resistance. The other developed alloy design (Concept V2) can be considered as a modified 20CrMo5 grade with added nickel, which has a lower total alloy cost than 18CrNiMo7-6, yet aims at similar performance. In both concepts, niobium microalloying is applied for austenite grain size control. The achieved mechanical properties of both developed case carburizing steels obtained after heat treatment indeed correspond to the postulated expectations (Figure 10 and Table 3). The hardenability behavior of Concept V1 is superior to that of 18CrNiMo7-6, whereas that of Concept V2 is within the hardenability range of the reference grade. After an austenitizing treatment at 880 °C for a duration of 2 h, followed by quenching in oil and holding at 180 °C for 2 h, Concept V1 shows clearly better tensile and fatigue strength, while Concept V2 nearly exactly matches the strength of the reference grade. The toughness of both developed steels is lower than that of 18CrNiMo7-6 due to the reduced nickel alloy content, yet remains still at a good level. The heat treatment behavior of the developed alloys has been tested by a carburizing process operated at 1030 °C to a nominal case depth range of 0.95 mm to 1.2 mm. This originated from the gear running tests to be executed with module 5 mm gears, actually requiring a 0.75 mm to 1.0 mm case depth. The additional case depth was intended to compensate for grinding losses during hard machining of the carburized gear. For determining the depth of the case layer, a limit hardness of 550 HV was defined according to ISO 6336-5. The targeted surface hardness was set to 680 HV to 700 HV. Additionally, a secondary plasma nitriding treatment was also performed at 400 °C and 440 °C, respectively. Table 4 summarizes the hardness data for the various pilot heat treatments. In the as-quenched condition after carburizing, both grades fulfill the requirements. Both alloy concepts sustain a tempering treatment at 200 °C. Concept V2, however, does not retain sufficient hardness after the plasma nitriding treatment. On the contrary, Concept V1, due to its increased tempering resistance, shows a very high surface hardness of around 1000 HV after plasma nitriding, whereas the core hardness is reduced. Nevertheless, a core hardness of more than 400 HV still represents a high value. It thus appears that Concept V1, by some further optimization, has the potential of fulfilling the case depth requirements at secondary treatment temperatures up to 440 °C.
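In line with the ISO 6336-5 definition used above, the case hardening depth is the depth at which the hardness profile falls to the 550 HV limit hardness. A minimal evaluation sketch, fed with a purely hypothetical microhardness traverse rather than data from these trials, could look as follows:

```python
def case_hardening_depth(depths_mm, hardness_hv, limit_hv=550.0):
    """Linearly interpolate the depth at which hardness falls to limit_hv."""
    points = list(zip(depths_mm, hardness_hv))
    for (d1, h1), (d2, h2) in zip(points, points[1:]):
        if h1 >= limit_hv >= h2:                        # crossing of the limit hardness found
            return d1 + (h1 - limit_hv) * (d2 - d1) / (h1 - h2)
    return None                                         # profile never crosses the limit

# hypothetical microhardness traverse: depth in mm, hardness in HV1
depths = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
hv     = [705, 690, 660, 620, 575, 530, 480, 440]
print(f"CHD(550 HV) ≈ {case_hardening_depth(depths, hv):.2f} mm")
```

For this fabricated profile the interpolation yields a CHD of about 1.0 mm, which would fall within the 0.95 mm to 1.2 mm target range stated above.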
A further increase of the molybdenum content towards 0.7% (Figure 9) and fine-tuning of the microalloy addition are thought to be the most promising ways to achieve the necessary tempering resistance. The operational performance of the developed steel grades (V1 and V2) was tested and benchmarked at FZG, Technical University of Munich, Germany. The tooth root load carrying capacity was investigated in a pulsator rig (Figure 13a). Investigations on the flank load carrying capacity were performed by running tests on a back-to-back gear test rig (Figure 13b) according to DIN ISO 14635-1 [29]. The test gears for these investigations were case hardened after gear milling. Subsequent to case carburizing, the test gears were mechanically cleaned by shot blasting. The flanks, as well as the tooth roots of the test gears for the investigations on the tooth root bending strength, were not ground. The gear flanks of the test gears for the gear running tests were finally ground to a gear quality of Q ≤ 5 and a surface roughness Ra ≤ 0.3 µm. In order to reduce the effects of premature contact, profile modifications in the form of tip relief were applied to the gears for the running tests. For the experimental investigations on the tooth root bending strength, standard pulsator test gears with a module mn = 5 mm, a number of teeth z = 24 and a face width b = 30 mm were used. For the running tests, spur gears with a module mn = 5 mm, a gear ratio of 17/18 and a face width b = 14 mm were used. Both test gear types are typical for the examination of bending strength and pitting load capacity, respectively, of case carburized gears and correspond to the specifications of ISO 6336 for reference test gears.
The tooth root load carrying capacity is one of the determining factors in gear design. Besides the strength of the material itself, the existing state of stress (load-induced stresses and residual stresses) significantly influences the tooth root load carrying capacity. The mechanical cleaning procedure by shot blasting as used in this test program introduces compressive stresses in the sub-surface zone and is beneficial to fatigue resistance (see also Figure 14a) [30]. The current tooth root bending fatigue tests were carried out under a constant pulsating load with a frequency of 90 Hz and continued until the limiting number of load cycles of 6 × 10⁶ was reached or tooth root breakage occurred. Gear standards generally consider 3 × 10⁶ as the beginning of the endurance range for bending strength. For each alloy concept, a complete S-N curve was determined based on approximately 25 test points. The endurance strength level of the S-N curve was determined using the "stair-step method" based on at least 10 data points for each alloy concept. The pulsating load at the endurance limit was estimated for a failure probability of 50 percent. The low-cycle fatigue part of the S-N curve was supported by around 10 valid tests for each variant. The conversion of the pulsating load into the resulting tooth root stress was done as described in DIN 3990 Part 3 [5]. The allowable stress numbers for bending conditions σFlim and σFE given in DIN 3990/ISO 6336 [5,6] are valid for standard reference test gears at standard test conditions in a gear running test and a failure probability of one percent. Therefore, the test results from the pulsator tests were converted to these conditions according to the state of the art, as outlined in detail by Niemann and Winter [31].
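For readers unfamiliar with the stair-step (staircase) procedure mentioned above: each specimen is tested one load increment above or below the previous one, depending on whether the previous specimen reached the limiting cycle count. The sketch below only shows this sequencing together with a deliberately simplified estimate of the endurance level (averaging the levels from the first failure onwards); the load steps, the starting level and the random survival model are pure assumptions, and the actual evaluation in the study followed the standardized stair-step procedure.

```python
import random

def staircase_sequence(levels_kn, start_idx, n_tests, survives):
    """Up-and-down test sequence: step down after a failure, up after a runout."""
    idx, history = start_idx, []
    for _ in range(n_tests):
        load = levels_kn[idx]
        failed = not survives(load)
        history.append((load, failed))
        idx = max(0, idx - 1) if failed else min(len(levels_kn) - 1, idx + 1)
    return history

def crude_endurance_estimate(history):
    """Simplified estimate: mean load of all tests from the first failure onwards."""
    first_fail = next((i for i, (_, failed) in enumerate(history) if failed), 0)
    loads = [load for load, _ in history[first_fail:]]
    return sum(loads) / len(loads)

def toy_survival(load_kn):
    """Toy random survival model: failure becomes more likely at higher load."""
    return random.random() > (load_kn - 30.0) / 10.0

random.seed(1)
levels = [30, 32, 34, 36, 38, 40]          # assumed pulsator load steps in kN
history = staircase_sequence(levels, start_idx=3, n_tests=12, survives=toy_survival)
print("sequence:", ["F" if failed else "R" for _, failed in history])
print(f"crude endurance level ≈ {crude_endurance_estimate(history):.1f} kN")
```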
The results obtained from these tests are displayed in Table 5. Case carburized alloy Concept V1 (20MnCr5 mod.) exhibits a clearly higher tooth root bending strength than the case carburized alloy Concept V2 (20CrMo5 mod.), as can be expected from the hardness characteristics. It has been established that surface-hardened gears of high load capacity containing high residual compressive stresses in the surface layer due to shot peening exhibit an increased risk of crack initiation below the surface [32]. In this respect, the cleanness of the material has a decisive influence. Furthermore, it is assumed that the microstructure and especially the ductility of the surface layer are also relevant to the cracking behavior. Alloy Concept V1 did not show sub-surface crack initiation, which may be related to sufficiently high cleanness and ductility.
Figure 14a compares the determined allowable stress numbers to the tooth root bending stress levels according to DIN 3990/ISO 6336 [5,6] and to the test results of several batches of two case hardened Western European standard steels determined under comparable test conditions [33]. A further performance benchmark of both developed concepts against established case carburizing alloys is shown in Figure 14b. In this diagram, the grey shaded area indicates the typical performance range of European state-of-the-art carburizing grades (see also Figure 14a). Additionally, some international carburizing grades that were tested by the same method are indicated. Figure 14 clearly demonstrates that alloy Concept V1 ranks at the top of quality level ME defined for established alloys according to DIN 3990 [5] and performs better than many higher alloyed steel grades, including the reference grade 18CrNiMo7-6. Alloy Concept V2 compares well to the state-of-the-art alloys, achieving quality level MQ.
Figure 14. Comparison of the tooth root bending stress numbers of newly developed case carburized steels: (a) vs. strength levels of DIN 3990 and vs. the results of reference steels determined under comparable test conditions [30]; (b) vs. selected alternative case carburizing steels as specified in Table 1.

In order to determine the pitting load capacity of the gear flank, repeated gear running tests were carried out to find the endurance limit [33]. The endurance limit for pitting strength is considered to be reached when at least 50 × 10⁶ load cycles are sustained without damage (this criterion is generally accepted by gear standards). The test rig was driven at a constant speed of 3000 rpm. All test runs were performed under oil spray lubrication (approximately 2 L/min into the tooth mesh) with FVA (Forschungsvereinigung Antriebstechnik) Oil No. 3 with 4% additive Anglamol 99 (a sulphur-phosphorous additive), a mineral oil of viscosity class ISO VG 100, at an oil temperature of 60 °C. Prior to each test run, a two-stage running-in period was performed.
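To put the 50 × 10⁶-cycle runout criterion into perspective, at the stated speed of 3000 rpm a single successful endurance run corresponds to more than eleven days of pure running time on the back-to-back rig (counting one pinion load cycle per revolution):

```python
cycles_to_runout = 50e6        # runout criterion for pitting endurance
speed_rpm = 3000.0             # constant test rig speed

# one pinion load cycle per pinion revolution assumed
hours = cycles_to_runout / speed_rpm / 60.0
print(f"{cycles_to_runout:.0e} cycles at {speed_rpm:.0f} rpm ≈ {hours:.0f} h ≈ {hours / 24.0:.1f} days")
```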
Under the described test conditions, 6-8 test runs for each variant were scheduled at different load levels in order to determine the pitting load carrying capacity. The test runs were continued until either one of the failure criteria mentioned below was reached or the specified maximum number of load cycles was exceeded without failure. The test runs were regularly interrupted after a defined interval of load cycles in order to inspect the flank condition. According to the defined failure criteria, a test run was terminated when:
• Tooth breakage occurred;
• The flank area damaged by pitting exceeded about 4% of the working flank area of a single tooth or about 2% of the total working flank area;
• The mean profile deviation due to micropitting exceeded the limiting value of 15 µm to 20 µm.
After every test run, the flank condition was evaluated and documented by means of digital photos (Figure 15). During all test runs of alloy Concept V2, micropitting was observed on the flanks of the test pinion and test gear. However, the limiting criterion of a profile deviation ffm > 20 µm due to micropitting was not reached in any of the test runs. Normally, light micropitting leads to higher load cycles until a pitting failure occurs. Test gears made from alloy Concept V1 partly showed a significantly lower sensitivity to micropitting than the test gears made from alloy Concept V2.
Tooth breakage occurred; The flank area damaged by pitting exceeded about 4% of the working flank area of a single tooth or about 2% of the total working flank area; The mean profile deviation due to micropitting exceeded the limiting value of 15 μm to 20 μm.
After every test run, the flank condition was evaluated and documented by means of digital photos (Figure 15).During all test runs of alloy Concept V2, micropitting was observed on the flanks of the test pinion and test gear.However, the limiting criterion of a profile deviation ffm > 20 μm due to micropitting was not reached in any of the test runs.Normally, light micropitting leads to higher load cycles until a pitting failure occurs.Test gears made from alloy Concept V1 partly showed a significantly lower sensitivity to micropitting than the test gears made from alloy Concept V2.
The performed gear running tests allow an approximate determination of the pinion torque at the endurance limit, as well as of the nominal contact stress number at the endurance limit for a failure probability of 50 percent. The allowable contact stress σHlim representing the pitting load capacity with a failure probability of one percent is then calculated according to DIN 3990 [5]. Table 6 summarizes the determined flank pitting load capacity limits for the two developed steel grades. A benchmark comparison of these data against the strength values for the different quality levels according to DIN 3990/ISO 6336, as well as against some reference data from the literature [29], is provided in Figure 16. A further performance benchmark of both developed concepts against established case carburizing alloys is shown in Figure 17. Obviously, alloy Concept V1 (20MnCr5 mod.) exhibits a very high pitting endurance limit and outperforms established alloys of quality level ME. The pitting endurance limit of alloy Concept V2 (20CrMo5 mod.) is situated in the upper region of the established contact stress field for case hardened steels reaching quality level ME.
The current results suggest that alloy Concept V1 has the potential of providing an economically viable solution for highly loaded gears in heavy machinery and vehicles. Its use in vehicle transmissions could enable downsizing of gear components, thereby reducing weight. In larger transmissions, such as those used in trucks and heavy machinery, its application can help to avoid unexpected failures and extend warranty periods. The results of alloy Concept V2 position it as a cost-attractive alternative to the established premium grade 18CrNiMo7-6.
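For orientation, the nominal contact stress that underlies such pitting strength numbers can be estimated from the basic DIN 3990/ISO 6336-2 relation. The sketch below is illustrative only: the formula is the standard one, but the numerical inputs and factor values are assumed placeholders, not the actual FZG test gear data.

```python
import math

def nominal_contact_stress(F_t, d_1, b, u, Z_H=2.5, Z_E=189.8, Z_eps=1.0, Z_beta=1.0):
    """Nominal contact stress at the pitch point per DIN 3990 / ISO 6336-2.

    F_t    : nominal tangential load at the reference circle [N]
    d_1    : pinion reference diameter [mm]
    b      : face width [mm]
    u      : gear ratio z2/z1 (u > 0)
    Z_H    : zone factor (about 2.5 for 20 deg pressure angle, no helix)
    Z_E    : elasticity factor [sqrt(N/mm^2)] (about 189.8 for steel/steel)
    Z_eps  : contact ratio factor (placeholder value)
    Z_beta : helix angle factor (placeholder value)
    Returns sigma_H0 in N/mm^2 (MPa).
    """
    return Z_H * Z_E * Z_eps * Z_beta * math.sqrt(F_t / (d_1 * b) * (u + 1.0) / u)

# Illustrative numbers only (not the FZG test gear data):
print(round(nominal_contact_stress(F_t=7000.0, d_1=73.5, b=14.0, u=1.5), 1))
```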
Conclusions
Many parameters such as material, heat treatment, gear design, machining and lubrication must be considered when optimizing gearbox performance. This study identified the desired material characteristics with regard to two major fatigue failure mechanisms in gears, pitting and tooth root breakage, considering the particular stress conditions in large gears. The objective was to optimize the alloy composition of carburizing steels to obtain load-bearing characteristics that provide better service performance of large gears. In the modified alloy design, the dedicated use of molybdenum and niobium as alloying elements was particularly considered.
Molybdenum alloying in case carburizing steels is established due to its pronounced hardenability effect. The current results demonstrate that molybdenum has several additional metallurgical benefits that are not provided by alternative hardenability elements:
• Molybdenum significantly increases the tempering resistance, thus opening an opportunity to perform secondary heat treatments after case carburizing. The increased tempering resistance also makes gears less vulnerable to hot running in case of lubrication problems during operation.
• It is also known that molybdenum enhances the cohesion of large-angle grain boundaries, thus obstructing intergranular crack propagation and hence retarding macroscopic damage.
• Contrary to manganese, molybdenum does not have a strong segregation tendency, and it does not form inclusions. Furthermore, its use does not increase the sensitivity to intergranular oxidation, as is the case for manganese and chromium.
• The present investigation has demonstrated that modifying standard alloys with a moderate addition of molybdenum (0.5–0.7 wt %) can lead to significantly better performance in gear running tests than state-of-the-art alloys, including several steels highly alloyed with nickel.
Microalloy addition of niobium to case carburizing steels is an increasingly applied technology offering the following advantages:
• Niobium carbide nano-precipitates obstruct prior austenite grain coarsening during case carburizing and simultaneously reduce grain size scattering. A finer and more homogeneous grain structure results in improved toughness, higher fatigue resistance and less distortion after heat treatment.
• Niobium further increases the tempering resistance provided by molybdenum due to a metallurgical synergy based on solute drag and particle pinning.
• Prior austenite grain refinement of martensitic steels, as well as the nano-carbide precipitates of niobium, results in an increased resistance against hydrogen embrittlement.
The combined alloying effects of molybdenum and niobium are particularly relevant for highly loaded, large-sized gears requiring increased case hardening depth and thus long carburization times or elevated carburizing temperatures. Experimental benchmarking demonstrated that the dedicated use of these metallurgical effects allows carburized gear steel with significantly increased performance to be produced in a cost-efficient way.
Figure 1. Typical processing routes for the manufacturing of case-hardened components.
Figure 2. Tooth root load carrying capacity; allowable bending stress numbers according to ISO 6336-5 and the indication of quality levels (ML, MQ, ME) [6].
Figure 4. Schematic distribution of stress inside a gear tooth indicating highly loaded areas (Hertzian contact stress at tooth flank, bending stress at tooth root).
Figure 5. Comparison of gear flank contact pressure (left) and tooth root stress (right) vs. the allowable stress over material depth depending on the gear size represented by the curvature ρC and module mn for a given tangential driving force Ft [7].
Figure 6. Influence of case hardening depth (Eht is identical to CHD) on the relative pitting endurance limit σHlim (left) and the relative tooth root endurance σFlim (right) for different gear sizes [8].
Figure 7. Exemplary influence of varying case hardening depth on the material strength (left), equivalent stress (left) and material exposure as a function of depth indicating the risk of sub-surface overloading (right) [7].
Figure 8. Influence of gear size on tooth root bending strength for gear materials with different hardenability; (left) examples of investigated test gears, (upper right) experimentally-determined tooth root endurance limit for material 16MnCr5 depending on gear size and (lower right) size factor for tooth root bending strength for different gear steels [9].
Figure 9. Effect of alloy modifications on hardenability as compared to standard 18CrNiMo7-6 steel.
Figure 11. Reducing quench distortion in carburizing steel: (a) influence of mean prior austenite grain size scattering in steel 16MnCr5 (1.7131) on quench distortion [22]; (b) roundness deviation of a heat-treated transmission shaft measured at five positions for standard 25MoCr4 (1.7325) and 25MoCr4 modified by Nb-microalloying.
Figure 12. Increasing tempering resistance in carburizing steel: (a) surface hardness of the carburized layer in relation to tempering temperature and the effect of increased molybdenum content; (b) effect of tempering resistance as a function of the molybdenum content and synergy effect with niobium (tempering parameter = T × (20 + log t) × 10^−3 (T in K, t in h)).
Figure 13. Equipment for gear testing used at FZG: (a) pulsator test rig for investigations of the tooth root load carrying capacity; (b) back-to-back gear test rig for investigations of the flank load carrying capacity.
Figure 14. Comparison of the tooth root bending stress numbers of newly developed case carburized steels: (a) vs. strength levels of DIN 3990 and vs. the results of reference steels determined under comparable test conditions [30]; (b) vs. selected alternative case carburizing steels as specified in Table 1.
Figure 15. Examples of the typical tooth flank condition at the end of the test runs for various nominal contact stresses and load cycles (tooth width is 14 mm).
Figure 16. Comparison of the determined pitting strength number of newly developed case carburized steels vs. strength levels of DIN 3990 and literature data of reference steels.
Figure 17. Comparison of the pitting endurance strength number of newly developed case carburized steels vs. strength levels of DIN 3990 and vs. selected alternative case carburizing steels as specified in Table 1.
Table 1. Major carburizing steel grades for medium- and large-sized gears in various geographical markets.
• Minimize intergranular oxidation → reduce Si, Mn and Cr [10];
• Prevent MnS inclusions → reduce S, limit Mn;
• Prevent TiN inclusions → control Ti/N wt % ratio close to three;
• Improve hardenability → increase Mo;
• Improve toughness → increase Ni and Mo;
• Refine and homogenize grain size → balance Nb, Ti, Al and N microalloying addition;
• Strengthen grain boundaries → reduce P and S, add Mo and Nb.
Table 2. Chemical composition of developed case carburizing steels (alloy additions in wt %).
Table 4. Hardness characteristics of steels after various heat treatments.
Table 5. Characteristics of the determined S-N-curves concerning tooth root bending strength for 50% failure probability, as well as nominal and allowable bending stress numbers.
Table 6. Experimentally-determined endurance limit for pitting. | 17,526.6 | 2017-10-06T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Morphology-Dependent Resonances in Two Concentric Spheres with Variable Refractive Index in the Outer Layer: Analytic Solutions
In many applications, constant or piecewise constant refractive index profiles are used to study the scattering of plane electromagnetic waves by a spherical object. When the structured medium has a variable refractive index, this is more of a challenge. In this paper, we investigate the morphology-dependent resonances for the scattering of electromagnetic waves from two concentric spheres when the outer shell has a variable refractive index. The resonance analysis is applied to the general solutions of the radial Debye potential for both transverse magnetic and transverse electric modes. Finally, the analytic conditions to determine the resonance locations for this system are derived in closed form for both modes. Our numerical results are provided with discussion.
Introduction
Scattering of radiation is a common process in human and animal life: almost everything we see comes to us indirectly through the scattering of light from objects around us. This has motivated wide interest in classical acoustic and electromagnetic scattering problems arising in nanotechnology, transformation optics, fiber optics, metamaterials with negative refractive indices, and related optical and mechanical problems. Much of this interest is a consequence of the position of resonances in electromagnetic scattering from different supporting structures. These resonances are referred to as morphology-dependent resonances (MDRs): the electromagnetic energy is temporarily trapped inside the particle for a certain period, oscillating back and forth many times before finally tunneling back through the classically forbidden region to the outside world.
The positions of the resonances can be used to indicate changes in many characteristic properties of the studied structures, such as density, energy, and temperature. Simulations show sharp peaks or spikes at the locations of the MDRs, as reported in many studies.
Researchers have studied the quasi-bound states, resonance tunnelling, and tunneling times generated by two types of twin symmetric potential barriers. The first is the twin rectangular barrier, and the other is the twin Gaussian-type barrier. They also evaluated the energy levels and widths for both cases. The behavior of the magnitude of wave functions of quasi-bound states as sharp peaks are compared with the regular bound states and the above-barrier state wave function [1].
The recent developments in MDR based sensors for aerospace applications have been reviewed. The concept of a sensor is based on the detection of small shifts of optical resonances, called the Whispering Gallery Mode (WGM). They investigated the MDR in the dielectric spheres. The resonance behaviors are also shown as sharp peaks [2]. Moreover, the conductance through pi-biased chaotic Josephson junctions is accumulated by many orders of magnitude in the short-wavelength regime. They found that mechanism behind this effect can be interpreted as macroscopic resonant tunneling [3].
The behaviors of transmission of the waves through the rectangular barrier when the attractive potential well is present on one or both sides have been studied. They also examined this study for a smoother barrier with a smooth adjacent Woods-Saxon Shape. This work is complementary to the resonant tunneling of objects through the two rectangular barriers. The result can be applied in the design of tunneling devices [4].
The different approach to study the relevant resonance topics is by using the method based on a direct calculation of the Jost matrix together with the Jost solutions of the Schrödinger equation to find a complete solution of one-dimensional Schrödinger equation for any complex energy and any arbitrary potential profile. The total widths of resonances are studied in the square well potential and the two-barrier potential. The resonance energies are demonstrated [5].
The work [6] investigated the group velocity of evanescent waves by using the simulations based on Maxwell equations for both TE and TM modes.
As described here, the MDRs are widely examined in many fields of study. The theory of tunneling is very useful to discuss in resonance phenomena [7].
The basic idea in studying the scattering of electromagnetic waves begins with Maxwell's equations. The process of solving the Maxwell equations in spherical coordinates and solutions has been described in the literature [8,9]. An exact theory of Maxwell's equation was derived and provided the solutions of Maxwell's equations for radially inhomogeneous media and the derivation of the scattering coefficients [10]. Then, B. S. Westcott followed a systematic search by [11] for useful refractive index profiles in spherically inhomogeneous isotropic media [12]. The wave functions are found using the method as that of [13]. He used the technique developed by [11] to derive the refractive index profiles where the wave functions can be expressed in terms of the standard transcendental functions such as hypergeometric, Whittaker or Bessel functions [13]. Several specific refractive index profiles were also studied by [14]. Some relationships of the refractive index profile, the wave number, and the angular momentum were studied by [11][12][13], who also published a summary of refractive index profiles along with TE and TM solutions corresponding to specific refractive index profiles for spherically layered media.
To determine the conditions that indicate the MDR locations, only the radial parts of the scattering wave functions need to be used. However, only certain forms of the radial parts have been used to derive these conditions; most of the wave functions were expressed in terms of standard transcendental functions such as hypergeometric, Whittaker or Bessel functions [13]. The resonance for Mie scattering from a layered sphere was examined by [15,16], who also graphed the contribution of the resonant partial wave to the angle-averaged energy density. In 1993, Johnson developed the theory of morphology-dependent resonances (MDRs) for a spherical particle using the radial Debye potential in the form of Riccati-Bessel functions and the analogy of quantum-mechanical shape resonances. Exact analytic formulas for predicting the resonances for both real and complex refractive index profiles were provided [17]. This technique is known as the MDRs technique. He also extended the study of the exact theory of electromagnetic scattering to a heterogeneous multilayer sphere in the infinite-layer limit [18,19]. The MDRs technique has become very useful in various studies.
Many authors have graphed internal quantities for transverse electric and magnetic modes using the MDRs technique for a homogeneous sphere as a function of radius. They mostly considered spherical particles with constant and piecewise-constant refractive index profiles. Cylindrical particles have also been studied.
The resonances for electric fields distribution in an infinite circular dielectric cylinder with constant refractive index 1.53 and angular momenta 1 to 5 are calculated. The results showed that the internally reflected circumferential waves are located near, but not on the surface [20].
The resonant frequencies and poles for the electromagnetic waves in a dielectric sphere with constant refractive index 1.4 with the size parameters ranging from 1 to 50 have also been studied. The real parts of the calculated poles were used to determine the location of the peaks in the resonance spectrum. The imaginary parts are related to the widths of these peaks [21].
The study of the internal and near-surface scattered field for a spherical particle at resonant conditions has been considered, and the resonances have been examined for various refractive index profiles [22]. The reason that the radial parts for constant and piecewise-constant refractive index profiles have been considered more often than the case of a variable refractive index is that the radial Debye potential equations for the electric and magnetic fields become identical and simpler to solve compared with the variable refractive index case.
Recently, the topic of morphology-dependent resonances (MDRs) has received much attention in both fundamental research and technological applications. Many researchers have investigated MDRs in layered droplets [12,16].
The study of MDRs in a homogeneous sphere is now routine, but the analysis of spherical particles with more complicated refractive index profiles can be extremely difficult and time consuming. It was found that the concentration profile of water during sorption in ultraviscous and glassy aerosol particles contains a sharp front that propagates from the surface to the particle center over time. As a result, the MDR locations for this type of concentration profile closely match those of a spherical core-shell structure [23].
The typical resonant TE mode for an incident linearly polarized plane wave in large layered spheres has been investigated by using the theory of Aden and Kerker. The resonance location can be shown as partial wave amplitudes [15].
Another way to study MDRs in a coated sphere is by calculating the volume-averaged source function obtained from Lorenz-Mie theory. The source functions for core and shell contributions can be computed and examined independently. The analytic expressions for the source functions are given [24].
To find the size and composition of core-shell particles using morphology-dependent resonances (MDRs), there are computationally intensive problems due to the large parameter space that needs to be searched during the fitting process. The issue of fitting speed can be solved by developing an algorithm that (i) reduces the multi-dimensional grid search to a one-dimensional search using a least squares method and (ii) implements a new method for calculating MDRs that is much faster than previous methods. They analyzed the best fits for core-shell MDRs across a large range of physically relevant scenarios using noise levels typical for conventional spectroscopic experiments [25].
More recent research studied the resonances of an electromagnetic plane wave in small charged particles. When electrons move freely along the surface of a small charged particle, they contribute to scattering phenomena including resonances. These resonances result from the excitation of an anti-symmetric surface plasmon at the layer interfaces, and the resonance of the radial component at the inner and outer boundaries of the shell is represented as a sharp peak [26].
Other methods to study the behavior of sharp Lorenz-Mie resonances for a spherical micrometer-sized droplet use the T-matrix [27].
The morphology-dependent optical resonance (MDR) shift of a rotating spherical resonator has been analyzed. A shift in its MDR is caused by the centrifugal force acting on the spinning resonator. The MDR shifts of a spinning polydimethylsiloxane (PDMS) microsphere are examined as examples [28].
The researchers developed a new measurement system called 'pulsed 2D-2cLIF-EET' to study temperature fields inside micro-droplets. The MDR and stimulating dye emission are accounted for by using energy transfer [29].
The general solutions of the scattering functions for two concentric spheres when the outer shell has a variable refractive index are derived. The use of these solutions can be more challenging to analyze the analytic condition to find the resonance location by using the resonance theory [30].
Another recent article [31] studied the MDR of spheres with the discrete dipole approximation (DDA) technique. The DDA simulations can capture the narrow peaks or resonances in the extinction over the size parameters.
MDRs have been studied widely in objects that are spherical in shape. The occurrence of MDRs depends on two key conditions: the shape and the refractive index of the material. Therefore, a change in the refractive index or radius of a microsphere leads to a shift in its resonances (MDRs). Many applications of MDRs follow this interpretation [2,12,23,[27][28][29][30]32]. Lately, MDR techniques have been used to determine the size and composition of core-shell particles; an algorithm for calculating and fitting MDRs quickly has been developed by reducing the multi-dimensional grid search to a one-dimensional search using a least squares method. In that study, the refractive indices are considered constants, 1.35 and 1.53, and the resonance conditions for the TE and TM modes are known [21,25]. The coexistence of high porosity and MDR modes was explored to improve the photocatalytic activity in mesoporous TiO2 spheres; the MDR modes can be calculated by using the theory of MDRs, in which the coefficients of exponential-like increasing functions have to vanish [33]. The MDR concept has also been applied to optical biosensors used to locate high photon density in micro-droplets [29,33], and it has been used to investigate the size and composition of glassy aerosol microspheres. A core-shell model was used to simplify the analysis of MDR locations during water uptake by high-viscosity aerosol particles, and the characteristic equations for the MDRs of such a core-shell particle, for a homogeneous sphere and for two concentric spheres, have been presented using the refractive indices 1.4 and 1.6 [23].
Moreover, the MDR analysis of a rotating spherical resonator has been carried out using optical quality factors. The MDR shifts of a spinning polydimethylsiloxane (PDMS) microsphere were found, and this interpretation of MDRs was used to design an angular velocity sensor in which the effect of angular velocity on the MDR shifts of spherical resonators provides the sensing element [28]. The behavior of sharp Lorenz-Mie resonances, or MDRs, in a spherical micrometer-sized droplet has been studied again by using the superposition T-matrix method; various values of the size parameters are presented using the constant refractive indices 1.31 and 1.55, and several effects of microscopic inclusions on a narrow MDR of a micrometer-sized spherical droplet are discussed [28]. There are also reviews of aerospace sensor applications based on detecting small shifts of optical resonances, referred to as MDRs or whispering gallery modes (WGM); a shift in the resonances can be caused by any perturbation of the shape, size or refractive index due to changes in the surrounding environment. The MDRs of spheres have been elucidated in a number of theoretical studies, and the phenomenon can be described by analytical techniques using Maxwell's equations and the incident fields in the medium, as well as by methods of quantum mechanics such as the potential well principle [2,10,11,17]. The manifestation of MDRs in the elements of the Stokes scattering matrix and their dependence on the refractive index have been studied using modern visualization techniques. It was found that the scattering matrix elements can change in both magnitude and sign within MDRs for a complex refractive index with a real part of 1.4 and an imaginary part increasing from zero to a very small value of 10^−5. The potential of using MDRs for optical particle characterization can be significantly improved by measuring ratios of the scattering matrix elements [34].
Although the MDR technique has been widely studied and used to develop many technological applications [2,23,[27][28][29][31][32][33][34], the study of MDRs has mostly been done for particles with a constant or piecewise constant refractive index. Both exact conditions and simulation results for finding the resonance locations have been investigated in those cases. However, many approximately spherically symmetric scatterers in real optical media have complex structures and non-constant refractive indices that are generally not continuous functions [35]. To indicate changes in specific characteristic properties of such complicated structures, it is necessary to determine exact formulas for predicting the locations of the resonances in various types of objects.
However, there is still a lack of analytical results for variable refractive index profiles. This is due to the difficulty of deriving the exact wave scattering solutions, which are required to obtain the analytic conditions for resonances in particles with variable refractive index profiles.
There are many studies of scattering with variable refractive index profiles, and most of the wave solutions can be expressed in terms of complicated transcendental functions [13]. For example, for wave scattering with the variable refractive index known as the modified Luneburg lens, the approximate lens size parameters of the morphology-dependent resonances were determined [9]; using a change of variables, the scalar radiation potential becomes a Whittaker function.
Based on previous work, exact scattering wave solutions have been found in the form of Bessel functions for both TE and TM modes [30]. Moreover, the refractive index profile considered there is of the variable type and includes the planoconvex lens, which has the refractive index profile Ar^2, where A is a constant. The MDRs technique has previously been applied to wave functions in the form of Bessel functions [17]; by using this technique, exact formulas can be obtained to locate the positions of resonances for the variable refractive index of [30].
Therefore, this work derives the exact formulas defining the locations of resonances in a two-concentric-sphere system in which the inner layer has a constant refractive index and the outer layer has a variable refractive index profile A(kr)^m, using the MDRs technique. To do this we consider the wave scattering in the two concentric spheres as described in [30], where A and m are constants and k is the wave number. The exact radial solutions of the scattering problem are found in the form of Bessel functions for both transverse electric (TE) and transverse magnetic (TM) modes, which is the form used to find MDR locations with the technique developed by [17]. The profile A(kr)^m includes the planoconvex lens, whose refractive index profile is Ar^2 with A a constant. Given the successful use of the MDRs technique with wave functions in the form of Bessel functions [17], exact formulas defining the MDR locations for the variable refractive index of [30] can be expected; since no such conditions have yet been reported, they are derived here. This work therefore provides the following results. First, the exact formulas defining the locations of resonances in the two-concentric-sphere system when the inner layer has a constant refractive index and the outer layer has the variable refractive index profile A(kr)^m, for both transverse electric (TE) and transverse magnetic (TM) modes. Second, numerical results giving the exact values of the resonance locations, obtained by solving the conditions in the exact formulas for both TE and TM modes, to confirm the existence of resonances.
The results are presented for the resonance locations of the two-layer sphere when the inner layer has a constant refractive index and the outer layer has a variable refractive index of the form A(kr)^m. These refractive index profiles can be used as a generalized form of specific refractive index profiles, such as the planoconvex lens. The numerical results are obtained by using hyperparameter optimization techniques and numerical methods to find the locations of the resonances from the proposed conditions, and the existence of resonances is indicated. The generalized conditions to find the locations of resonances for the two-layer sphere with this variable refractive index in the outer layer are supplied along with the supporting numerical results. Hence, these results can be used to interpret important characteristic properties in objects that have a similar refractive index profile.
In summary, we investigate the scattering of an electromagnetic plane wave in radially symmetric heterogeneous media. The scattering object is composed of two concentric spheres; the inner layer has a constant refractive index and the outer layer has a radially-dependent one. The theory of electromagnetic scattering and the general solutions for both modes (TM and TE) for this model are presented in Section 2. Section 3 describes the derivation of the conditions determining the resonance locations for both modes. The work concludes with a discussion in Section 4 and conclusions in Section 5.
Theory
Consider a two-layer sphere embedded in an infinite uniform medium as shown in Figure 1, with regions 1, 2, and 3 as the radial coordinate increases away from zero. The radius of the inner layer is a, the radius of the outer layer is b (which can be scaled to unity), the center is at the origin of the coordinate system, and the refractive index profile in region 2 is a function of the radial coordinate r, denoted n(r). Thus, in region 1, 0 ≤ r ≤ a, the refractive index is a constant (n1). In region 2, a ≤ r ≤ b, the refractive index n2(r) is defined as n3·A(kr)^m, where A and m are arbitrary constants. In region 3 (exterior to the sphere), r > b, the refractive index n3(r) can be any (complex) constant but is here taken to be one. The wave number is k = 2π/λ, where λ is the wavelength of the external incoming electromagnetic plane waves. We consider the case in which the sphere is nonmagnetic, and the complex time-dependence of the electric field is assumed to be harmonic. As the derivation of the equations is developed, the acronyms and variables are listed in Table A1 of Appendix A for readability.
Figure 1. Geometry for a concentric shell particle with variable refractive index. The refractive index of the core in Region 1 is n1, the refractive index of the shell in Region 2 is n2, and the refractive index of the outer shell in Region 3 is n3. The inner radius is a and the outer radius is b.
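As a minimal illustration of the geometry just described, the sketch below evaluates the piecewise refractive index profile. The function and its arguments are assumptions introduced only for illustration; the example values mirror those used later in the Discussion (n1 = 1.47, n2 = 2(kr)^−2, n3 = 1).

```python
import numpy as np

def refractive_index(r, a, b, k, n1, A, m, n3=1.0):
    """Piecewise refractive index for the two-layer sphere.

    Region 1 (0 <= r <= a): constant core index n1.
    Region 2 (a <  r <= b): variable shell index n3 * A * (k*r)**m.
    Region 3 (r > b)      : constant exterior index n3 (taken as 1 here).
    """
    r = np.asarray(r, dtype=float)
    return np.where(r <= a, n1, np.where(r <= b, n3 * A * (k * r) ** m, n3))

# Example values consistent with the Discussion section:
r = np.linspace(0.01, 1.5, 5)
print(refractive_index(r, a=0.0276, b=1.0, k=2.441156, n1=1.47, A=2.0, m=-2))
```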
It is known that the solutions of Maxwell's equations can be represented in terms of vector wave functions H describing the transverse magnetic field and a second function B for the transverse electric field [10,17], where r, θ, and φ are spherical coordinates, r is the radius vector, and Ψ and Φ are scalar functions that can be expressed as separable solutions involving the associated Legendre polynomials P_l^m(cos θ). The functions M_l and N_l are the radial Debye potentials, which satisfy second-order differential equations in the radial coordinate, where l is the angular momentum; the M_l are associated with TM fields, and the N_l functions are associated with TE fields. It is easy to see that if the refractive index n(r) is constant, Equations (5) and (6) are identical. To determine the resonance locations, we focus on the radial parts M_l and N_l [36,37].
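Equations (5) and (6) are not reproduced in this extract. For orientation, the standard form of the radial Debye potential equations, as given for example by Johnson [17], is quoted below; this is an assumption about the missing equations and is consistent with the statement that the two coincide when n(r) is constant:

\frac{d^{2}M_{l}}{dr^{2}} - \frac{2}{n(r)}\frac{dn}{dr}\frac{dM_{l}}{dr} + \left[k^{2}n^{2}(r) - \frac{l(l+1)}{r^{2}}\right]M_{l} = 0 \quad \text{(TM)},

\frac{d^{2}N_{l}}{dr^{2}} + \left[k^{2}n^{2}(r) - \frac{l(l+1)}{r^{2}}\right]N_{l} = 0 \quad \text{(TE)}.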
The Wave Solutions for Transverse Magnetic Mode
For the transverse magnetic (TM) mode of this model, we employ the corresponding solutions M_{1,l}(r), M_{2,l}(r), M_{3,l}(r) in regions 1, 2, and 3, respectively [30].
M_{1,l}(r) = n_1 a_l j_l(n_1 k r) for r < a, where a_l, c_l, A_l, B_l are constants and J_ν and J_{−ν} are Bessel functions of the first kind; the order ν is defined as in [38,39]. The spherical Bessel function of the first kind is j_l(kr) = √(π/(2kr)) J_{l+1/2}(kr), where J_{l+1/2} is a Bessel function of the first kind and half-integral order, and h_l(kr) is the spherical Hankel function, defined in terms of the regular (first) Hankel function H_{l+1/2} as h_l(kr) = √(π/(2kr)) H_{l+1/2}(kr).
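To make the structure of these solutions concrete, the sketch below evaluates the spherical Bessel function j_l used in region 1 and the fractional-order Bessel pair J_±ν used in region 2. The region-2 prefactors, the Bessel argument, and the value of ν are placeholders (their exact definitions are not reproduced above), so this is only an illustration of the functional form.

```python
import numpy as np
from scipy.special import spherical_jn, jv

def region1_solution(r, l, n1, k, a_l=1.0):
    """M_{1,l}(r) = n1 * a_l * j_l(n1 * k * r) for r < a (region-1 form)."""
    return n1 * a_l * spherical_jn(l, n1 * k * r)

def region2_basis(r, nu, k):
    """Fractional-order Bessel basis J_nu, J_-nu appearing in the region-2 solution.

    The order nu and the argument depend on A, m and l in a way not reproduced
    here, so k*r and nu=0.75 below are placeholder choices.
    """
    x = k * r
    return jv(nu, x), jv(-nu, x)

r = np.linspace(0.01, 1.0, 5)
print(region1_solution(r, l=40, n1=1.47, k=2.441156))
print(region2_basis(r, nu=0.75, k=2.441156))
```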
The Wave Solutions for Transverse Electric Mode
For the transverse electric (TE) mode, we suppose the corresponding solutions N_{1,l}(r), N_{2,l}(r), N_{3,l}(r) in regions 1, 2, and 3, respectively, where b_l, d_l, C_l, D_l are constants and J_ν and J_{−ν} are Bessel functions of the first kind. The order µ is defined as [38,39]
µ = (2l + 1) / (2(m + 1)).
The spherical Bessel function of the first kind j_l(X), the Bessel function of the first kind and half-integral order J_{l+1/2}(X), the spherical Hankel function h_l(X), and the regular Hankel function H_{l+1/2}(X) are defined as in Equations (10)–(12).
Resonance Analysis
As presented in [17], determining the locations of the resonances requires the coefficient of j_l(kr) to be zero, i.e., A_l = 0 and 1 + c_l = 0 for the TM mode, and C_l = 0 and 1 + d_l = 0 for the TE mode. Sections 3.1 and 3.2 give the formulas for all coefficients in both modes.
TM Mode
To determine the coefficients a_l, c_l, A_l, and B_l, we require that the quantities in (16) all be continuous at the boundaries where the refractive index is discontinuous. The four boundary equations at the boundaries r = a and r = b follow from this matching (the first reads n_1 a_l j_l(n_1 k a) = ...), where a prime denotes differentiation with respect to the argument X.
We finally obtain all the coefficients, where x_0 = ka and y_0 = kb are the dimensionless size parameters and the other parameters are defined in terms of the radial functions evaluated at arguments such as n_3 y_0.
TE Mode
To determine the coefficients b_l, d_l, C_l, and D_l, we require that the quantities in (25) all be continuous at the boundaries where the refractive index is discontinuous. Note the absence of the n^−2 term in these conditions in this case [17]. The four boundary equations at the boundaries r = a and r = b are given by matching these quantities; their expansions are presented in Equations (26)–(29), respectively, where a prime denotes differentiation with respect to the argument X.
Finally, we obtain all the coefficients, where again x_0 = ka and y_0 = kb are the size parameters and the other parameters are defined correspondingly.
Discussion
To demonstrate the resonance behavior using the analytic conditions obtained in the previous section, we consider the potential V_l(r). The corresponding values of n_1 and n_2 define the behavior of the potential: whether it is attractive, repulsive or some combination of the two depends on the refractive index values and the wave number. For certain values of the energy level k^2, the phenomenon of resonance will happen: the wave is temporarily trapped inside the well behind the sharp peak, oscillating back and forth many times before tunneling through the classically forbidden region to the outside world.
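The explicit expression for V_l(r) is not reproduced in this extract. A commonly used form of the MDR effective potential (cf. Johnson [17]) is quoted below as an assumption, with the energy identified with k^2; it reproduces the behavior described in the text, namely a well or barrier controlled by n(r) together with a centrifugal term:

V_{l}(r) = k^{2}\left[1 - n^{2}(r)\right] + \frac{l(l+1)}{r^{2}}, \qquad E = k^{2}.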
For various refractive index profiles, the shape of the potential function will be different. The case of a constant refractive index can be considered for both increasing profiles (n_1 < n_2) and decreasing profiles (n_1 > n_2). The potential for the increasing refractive index profile has one classically allowed region. On the other hand, there are two classically allowed regions of the potential, or a double well, for the decreasing refractive index profile.
In this work, we consider the case in which the inner layer has a constant refractive index n_1 = 1.47, the outer layer has the variable refractive index n_2 = 2(kr)^−2, the region outside the sphere has refractive index n_3 = 1, and the angular momentum is l = 40. The shape of the potential function V_40(r) is a single sharp peak, as shown in Figure 2. The classically allowed region is the small area inside the sharp peak; outside the sharp peak are the classically forbidden regions. The resonance behavior occurs in the classically allowed region, after which the wave function decays monotonically in the forbidden or barrier regions and finally vanishes as r tends to infinity. The results presented here show the behavior of the solutions M_40(r) for n_1 = 1.47, n_2 = 2(kr)^−2, n_3 = 1, and l = 40, along with the potential function V_40(r). To solve for the size parameters x_0 and y_0 from the derived analytic conditions A_l = 0 and 1 + c_l = 0 for the TM mode (C_l = 0 and 1 + d_l = 0 for the TE mode), the Gauss-Newton method is applied. There are many discrete values that satisfy these necessary conditions; however, we found that only finitely many values make the wave function M_40(r) stay in the range of the classically allowed region. Therefore, to obtain size parameter values that produce resonance phenomena, the search range is strictly determined. The numerical results shown here are obtained by using a-values in the range 0 to 1 and k-values in the range 0 to 10 as an example.
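The resonance search therefore reduces to solving the two conditions A_l(x_0, y_0) = 0 and 1 + c_l(x_0, y_0) = 0 simultaneously for the size parameters. The sketch below is a generic illustration of such a search using scipy's least-squares (Gauss-Newton-type) solver; the `residuals` function is a hypothetical placeholder, since the closed-form coefficient expressions are not reproduced here, and the bounds only mimic the ranges of a and k quoted above.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, l=40, n1=1.47, n3=1.0, A=2.0, m=-2):
    """Hypothetical placeholder: should return [A_l(x0, y0), 1 + c_l(x0, y0)]
    evaluated from the TM boundary-matching coefficients."""
    x0, y0 = params
    # ... evaluate A_l and c_l from the Bessel/Hankel expressions here ...
    return np.array([0.0, 0.0])  # placeholder so the sketch runs

# Size parameters restricted to a small window consistent with a in (0, 1) and k in (0, 10);
# the initial guess is taken near the resonance reported in the text.
sol = least_squares(residuals, x0=[0.07, 2.4], bounds=([1e-6, 1e-6], [10.0, 10.0]))
print(sol.x)
```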
The numerical results show that at the size parameters x_0 = 0.0674664 and y_0 = 2.441156, the solution M_40(r) for n_1 = 1.47, n_2 = 2(kr)^−2, n_3 = 1, and l = 40 exhibits the TM resonance, as shown in Figure 3. Only a slight change in the size parameters x_0 and y_0 makes the radial wave function lose the resonance behavior, as shown in Figures 4 and 5. Table 1 compares the size parameters x_0 and y_0 in each case. Table 1. The size parameters x_0 and y_0 for the solutions M_40(r) for n_1 = 1.47, n_2 = 2(kr)^−2, n_3 = 1, and l = 40 of the TM mode.
Behavior of the Radial Wave Function
A typical resonance function is shown in Figure 3 (with the potential function V_40(r) superimposed). The solution M_40(r) for n_1 = 1.47, n_2 = 2(kr)^−2, n_3 = 1, and l = 40 has the TM resonance at x_0 = 0.0674664 and y_0 = 2.441156. For these values of the energy level, the wave becomes temporarily trapped inside the sharp peak, oscillating back and forth many times and finally tunneling back through the classically forbidden region to the outside of the particle. Note that this corresponds to an 'almost bound' state because of the very small transmissivity of the wave function into the exterior region 3. This is another way of looking at a resonance; the electromagnetic energy can be considered to be mostly trapped for a period of time until it gradually leaks out of the potential well formed by a combination of the refractive index and the so-called 'centrifugal potential' [17]. Clearly the TM wave function shown in Figure 3 is effectively trapped close to the surface of the inner sphere. In summary, we have derived the analytic conditions under which such 'virtual bound states' can occur for both polarizations (TM and TE) of the electromagnetic waves. Figure 4 shows the case in which the values of x_0 and y_0 are slightly below the resonance, with x_0 = 0.0574664 and y_0 = 2.041156. The TM wave function dies away before entering the allowed region of the sphere and continues to the outside world without any sharp peaks. Figure 5 shows the case in which the values of x_0 and y_0 are slightly above the resonance, with x_0 = 0.0874664 and y_0 = 2.541156. The TM wave function grows in an exponential-like manner in the tunneling region and then decays as r → ∞. This case is very similar to the on-resonance case, except that the exponential-like growth in the tunneling region has a higher amplitude than in the resonance case and occurs both inside and outside of the well; the wave is not totally trapped inside the allowed region. Figure 5. Behavior of the TM wave function in the vicinity of resonance for the case that the size parameters x_0 and y_0 are slightly above resonance, x_0 = 0.0874664 and y_0 = 2.541156. The inner radius is a = 0.02763707 and the outer radius b = 1. The red line (-.) represents the wave solution M_40(r); the blue line (-) represents the potential function V_40(r).
Conclusions
We have presented the derivation of analytic conditions to find the locations of resonances for a two-layer sphere where the inner region has a constant refractive index and the outer region has a variable refractive index profile A(kr)^m, where A and m are arbitrary constants. In the resonance analysis, the coefficients of the exponential-like increasing function j_l(kr) in the radial wave functions for both TM and TE modes must approach zero as r → ∞. This yields the necessary conditions that guarantee that a resonance occurs. The derived analytic conditions determining the resonance locations are A_l = 0 and 1 + c_l = 0 for the TM mode, and C_l = 0 and 1 + d_l = 0 for the TE mode. Since we are dealing with a variable refractive index profile, this may not lead to a total bound state as in the case of constant refractive index profiles.
For future work, there are many useful variable refractive indices to which this resonance analysis can be applied. Other analytic solutions [11][12][13] still await investigation and provide many interesting cases for future study. The analysis may encounter obstacles when dealing with singular refractive index profiles; in such cases, and when analytic forms of the conditions for the resonance locations are not available, the unstructured mesh finite element method can still be applied.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
MDR Morphology-dependent resonance
WGM Whispering gallery mode
TE Transverse Electric
TM Transverse Magnetic | 7,939.4 | 2021-10-07T00:00:00.000 | [
"Physics"
] |
Learner Analytics; The Need for User-Centred Design in Learning Analytics
The interaction and interface design of Learning Analytics systems is often based upon the ability of the developer to extract information from disparate sources and not on the types of data and interpretive needs of the user. Current systems also tend to focus on the educator’s view and very rarely involve the user in the development process. From using HCI methods, we have found that learners want to be able to access an overarching view of their previous, current and future learning activity. We propose that the only way of truly creating a personalised, supportive system of education is to place the learner at the centre, giving them control of their own Learner Analytics.
Introduction
Higher Education gathers an "astonishing array of data about its 'customers'" but has traditionally been inefficient in its data usage [1]. The analysis of this data, though, has the potential to identify at-risk learners and provide intervention to assist learners in achieving success [2]. Therefore, increasingly, student data is being aggregated and presented to tutors in the form of Learning Analytics (LA) Dashboards. However, the representation and interaction methods being used are often based upon the ability of the developer to extract information from disparate sources and not on the types of data and interpretive needs of the user [3]. Making information available and transparent to tutors is only the first step however. Presenting student data back to students, using student-centric formats and metaphors, could tackle students' inability to access a composite, overarching view of their current learning activity, which can impact on a student's ability to develop creative divergent thinking skills [4]. Both conceptual and visual metaphors have historically been used as an effective teaching tool and have been proven to enhance motivation, learning and retention [5][6]. They must however have a high degree of resonance for the learner [5].
A number of projects have investigated the use of LA and information representation/visualisation, such as the Open University's Anywhere app which includes a "range of analytics that show how students engage with it" [7], the University of Bedfordshire's student engagement system [8] and London South Bank University's partnership with IBM to "use predictive analytics to gauge if they might be falling behind" [9]. However, there have been few studies that have systematically identified sources of pre-existing data and metrics that are currently used within a HE setting and considered the most appropriate way of analysing, representing and making these available to the learner and teacher. More importantly, it also seems that there has been little consideration given to how Learning Analytics can actually be integrated into Learning and Teaching activities.
Background
As part of an on-going project that is identifying factors for consideration in LA, we are conducting a literature review of work relating to LA systems, together with a review of systems in use within academic institutions. The review has currently identified 22 LA related systems, ranging from tools designed to identify 'at risk' students such as Purdue University's Course Signals [10] and the Student Success System [11], which both use traffic light representations, to tools with more specific goals such as engaging and activating students within large lecture theatres [12] or supporting group work by visualising participation [13]. The review has so far found that although there are a number of projects investigating the use of LA there appear to be few systems that are available for general use. The review has also highlighted that existing systems are primarily targeted at educators, with only 5 of the 22 systems being designed purely for use by students. Significantly, only 4 of the studies gathered the requirements for the systems directly from students during the development process and only 4 reported an evaluation of the LA tool with students.
Learning Analytics for Teachers
Popular Virtual Learning Environments (VLE) such as Moodle and Blackboard support basic versions of LA, e.g. Course Reports and Performance Dashboards. Contextual interviews that we have conducted with educators at Keele University however have shown that these aspects of the VLE are rarely used and do not support the questions that they would like to answer about their students' learning activity. The sessions have also highlighted that there is no easily accessible method for seeing a student's overall level of interaction on all modules, or a way of identifying clusters of students and their usage of specific types of resource; areas that have been identified as having significant potential in identifying students at risk of failing within a module [14].
Learner Analytics for Students
Considering the lack of work focused on the student view of LA, we have used a User Centred Design (UCD) approach with a group of 82 second year Computer Science students to identify GUI metaphors that will engage and motivate them as learners and personalise their own learning experience. Students were tasked with designing LA Dashboards that allow students to review their own progress, display relevant indicators of engagement with their learning and encourage engagement. They were instructed to consider representing data that is already being collected and suggest new sources of data that could easily be collected. As part of the UCD process, deliverables included sets of User Persona, analysed results of requirements elicitation sessions (e.g. card sorts, think aloud sessions), and annotated screen mock-ups of potential LA Dashboards (highlighting 5 key features along with objective justifications wherever possible).
A preliminary thematic analysis of the Dashboards from 22 students has suggested that their understanding of LA and their requirements for it are often formed by the limitations of the technologies and systems that they currently use within the University. Of the 117 LA Dashboard features that were proposed, 86% of the students specified features related to HCI/design, e.g. the need for accessibility options, device compatibility, suggested layout and display options. This might be due to the background of the students and the content of the module itself but it is interesting to see that the students realise the importance of appropriate interaction and interface design. The latter is something that, from anecdotal evidence, is lacking in the systems currently in place to support their learning. A similarly high percentage of features proposed by the students related to ways of representing a student's progress. These included representations of their attendance, assessment and VLE activity, often in the form of engagement scores, activity meters, progress trackers and comparisons to their colleagues. An associated set of features proposed by 73% of students related to scheduling, e.g. coursework hand-in dates and timetabling. Other features relating to resources, e.g. module information, links to recently uploaded resources, relevant book availability, were also mentioned, along with suggestions for alternative/simpler communication methods, e.g. instant messaging and the ability to contact lecturers directly about feedback from a particular piece of coursework.
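As an illustration of how such a thematic summary can be tallied, the sketch below counts how many students proposed at least one feature in each theme and reports the percentages. The coded data here is made up for illustration and is not the actual study record; theme names are assumptions.

```python
from collections import defaultdict

# Hypothetical coded data: student id -> set of themes found in their dashboard design.
coded = {
    "s01": {"hci_design", "progress", "scheduling"},
    "s02": {"hci_design", "resources"},
    "s03": {"progress", "scheduling", "communication"},
    # ... one entry per student, 22 in total in the real study ...
}

theme_counts = defaultdict(int)
for themes in coded.values():
    for theme in themes:
        theme_counts[theme] += 1

n_students = len(coded)
for theme, count in sorted(theme_counts.items()):
    print(f"{theme}: {count}/{n_students} students ({100 * count / n_students:.0f}%)")
```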
Examples of some of the LA Dashboards that students have created can be viewed online (http://bit.ly/LA_Dashboard_egs), along with the features that they represent. Figure 1 below, however, shows a common representation that was proposed relating to progress and scheduling using a timeline metaphor. The justification for why the student had included the timeline was as follows: My Timeline: Past events which have been completed successfully provide gratification to the user in the form of positive icons such as "thumbs up" or "smiley faces". The past timeline balances with the upcoming events to try and alleviate future workload stress by demonstrating positive success at the same time.
At the moment, some of the information suggested by students to be included on an LA Dashboard is available to them. However, the data is stored in disparate systems across the University or hidden within module content in different, often inaccessible, formats. It also requires the students to combine all of the information, interpret it, and create their own composite view of their previous, current and future learning activity. Without the development of higher order thinking skills such as metacognition, this would be difficult to achieve. This also requires knowledge and experience, e.g. average class marks and the time it takes to complete a new piece of coursework, that they may not have access to.
The design solution suggested by students to this problem can be summarised as follows: (i) Consider and use well known interaction and interface design principles. (ii) Group data together in one place that represents their engagement and progress. (iii) Provide information and functionality that helps them to schedule their learning. (iv) Highlight resources that will support their learning. (v) Offer alternative forms of communication that directly relate to their learning content.
Conclusions
It is clear that HCI is an important factor for consideration when designing LA systems; not just for interaction with the system itself but also when supporting a student's access to an overview of their learning. Current systems have tried to tackle some of these issues, but HCI and UCD are often ignored and users, i.e. learners, are often not included in the development process. This not only causes issues related to usability and accessibility but also means that the features that students want are often missed. It also means that assumptions are made as to how students want to interact, not only with VLEs, but with resources, educators and their own engagement and assessment. So far, Learning Analytics has tended to focus on an educator's and administrator's view of a student's learning, mainly in the form of measuring engagement.
The only way however of truly advancing the envisaged "personalized, supportive system of higher education" [15] is to place the learner at the centre, giving them control of their own Learner Analytics.
Figure 1. Timeline representation combining progress and scheduling features.
Table 1. Features (grouped during thematic analysis) proposed by students (n=22) for an LA Dashboard. | 2,227.4 | 2016-08-23T00:00:00.000 | [
"Computer Science",
"Education"
] |
The Usability of Pumice Powder as a Binding Additive in the Aspect of Selected Mechanical Parameters for Concrete Road Pavement
In this study, the usability of pumice powder and lime as a binding additive in concrete production for rigid-superstructure concrete road pavement was investigated. Following the determination of the optimum binder ratio, these new binder ratios were used in crushed limestone concrete production. The concrete thus formed was named concrete containing cement, pumice powder and lime (PPCC). The normally produced concrete, without pumice powder and lime binder, was selected as the reference concrete (RC). As a result of the study, the most appropriate binder ratio was found to be 50% cement, 30% pumice powder and 20% lime of the total binder amount. The 7- and 28-day compressive strengths of the reference concrete, cured at 20 ± 2 °C, were found to be 33.8 MPa and 38.2 MPa, and its bending strengths were 4.2 MPa and 4.7 MPa. The 7- and 28-day compressive strengths of PPCC were found to be 25.1 MPa and 28.3 MPa, and its bending strengths were 3.2 MPa and 3.5 MPa. The results of the study showed the usability of PPCC in concrete pavement.
Introduction
The term "pumice" is known as "ponce" in French, whereas stones with medium particle size are called "pumice" in English [1]. Pumice stone formations, which are formed as a result of volcanic events and have a cavernous, spongy structure, are found in many regions of the world where volcanic activities take place [2]. Pumice contains numerous pores, ranging from the macro scale to the micro scale, due to the sudden release of the gases it embodies and the sudden cooling during its formation. Since there are disconnected caverns between the pores, its permeability is low and its heat and sound insulation is quite high [3]. Today, the use of pumice is developing day by day compared with the past, and it is being used in various fields; its usage in other sectors is newly becoming widespread [4]. Pumice sources identified around the world amount to approximately 18 billion m³ [5]. Bitlis province in particular has significant pumice-bed potential due to both its volcanic area and its geological structure. The beds in question are located in the Tatvan district of Bitlis province, where 81,500,000 m³ of good-quality pumice beds are available [6,7]. As the pumice grain grows, the grain specific gravity decreases, and the pore percentage increases as grain sizes increase. Pumice is a very light, pyroclastic magmatic rock type shaped during volcanic eruption; the lava is shaped in liquid form, including gas bubbles, throughout the period in which it spurts out into the air as gas froth [8]. Pumice is especially used in the production of trass cement. When pumice stones are ground to cement fineness and then mixed with cement or lime, they acquire a binding property; these types of volcanic rocks are called pozzolana [9]. Small crystals of various minerals are found in pumices, which have an amorphous structure; the most common crystals are feldspar, augite, hornblende and zircon. Pumice powder was used as a binding additive in this study. Seventy-two types of concrete samples were formed in different mixing ratios with pumice powder. The optimum ratios of pumice powder and lime as the binding additive were determined as a result of all the experiments. Following the determination of the optimum binder ratio, these new binder ratios were used in crushed limestone concrete production. Compressive and bending strength tests of the new concrete produced were performed. The concrete thus formed was named concrete containing cement, pumice powder and lime (PPCC). The concrete produced without pumice powder and lime binder, only with cement binder, was selected as the reference concrete (RC). RC and PPCC concrete were cured with standard water curing of 7 and 28 days. Following water curing, compressive and bending strength tests were performed on all concrete samples. The results of the study showed the usability of PPCC in concrete pavement.
Materials
CEM I 42.5 R type cement, which complies with the TS EN 197-1 (EN 197-1:2011) (2012) standard, was used in all experiments [19]. Chemical properties of CEM I 42.5 R cement are given in Table 1 [20]. The cement appearance is given in Figure 1. Potable Bitlis city water was used in the experiments. The pumice powder is shown in Figure 2; its grain diameter was between 0 and 0.04 mm. The specific gravity of the slaked lime was 2.2 g/cm³ (Miner Mining Transportation Trade Limited Company) [21], and it complied with the TS EN 459-1 (2017) (EN 459-1:2015) standard [22]. The lime is shown in Figure 3. The chemical properties of pumice and lime are shown in Table 2 [23,24]. Pumice powder used as a cement additive must comply with the TS 25 (TS 25/T1) (2008) Trass Standard, prepared by TSE (Turkish Standardization Institute), which requires the SiO2 + Al2O3 + Fe2O3 total to be at least 70% [25]. As shown in Table 2, the SiO2 + Al2O3 + Fe2O3 total of the pumice powder was 86.5%. This ratio indicates that pumice powder can be used as a binder. For comparison purposes, the physical and mechanical properties of pumice, cement, and lime are shown in Table 3 [26,27].
One of the highest cost items in concrete production is the amount of cement used. In this study, in order to reduce the amount of cement, the compressive strengths of samples formed by using pumice powder (PP) and lime (L) together with cement (C) were determined. Seventy-two different mixture types were formed for the determination of the optimum binder ratio. For all the mixtures formed to determine the appropriate binder ratio, the water/binder ratio was taken as 0.60; consistency and workability were not influenced by the substitution. Three samples from each type of mixture were taken, and the average of these three values was calculated. The prepared mixtures are shown in Figure 4.
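To make the proportioning scheme above easier to follow, the short Python sketch below enumerates a Type-1-style paste series and computes the water mass for the 0.60 water/binder ratio. It is illustrative only: the 10 kg binder mass and the variable names are assumptions, and the percentages are read as a cement-replacement series (consistent with mixture 1-1 being all cement and mixture 1-6 being all pumice powder, as described in the next subsection), not values taken from the paper.

```python
# Illustrative sketch of the binder-paste proportioning described above, read as a
# substitution series: pumice powder replaces 0%, 20%, ..., 100% of the cement,
# so mixture 1-1 is all cement and mixture 1-6 is all pumice powder.
# The 10 kg total binder mass is an assumed value for illustration.

STEPS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # replacement fractions (0% ... 100%)
W_B_RATIO = 0.60                          # water/binder ratio used for all pastes

def type1_series(total_binder_kg=10.0):
    """Type-1 style pastes: cement partially replaced by pumice powder."""
    pastes = []
    for frac in STEPS:
        pumice_kg = frac * total_binder_kg
        cement_kg = total_binder_kg - pumice_kg
        water_kg = W_B_RATIO * total_binder_kg
        pastes.append((cement_kg, pumice_kg, water_kg))
    return pastes

for i, (cement, pumice, water) in enumerate(type1_series(), start=1):
    print(f"Mixture 1-{i}: cement {cement:.1f} kg, pumice {pumice:.1f} kg, water {water:.1f} kg")
```

The other mixture families (Type-2, Type-3 and the Type-4 variants) follow the same pattern, with the replaced and replacing components swapped.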
Binder Mixing Ratios
Type-1 Mixing Ratios
Type-1 mixing ratios are shown in Table 4. The mixture contained cement, pumice powder and water. There was no lime in the mixture.
Six different types of mixture were formed by taking 0%, 20%, 40%, 60%, 80% and 100% of the cement amount. As shown in Table 4 and Figure 5, the 1-1 mixture was the binder's reference mortar for optimal binder fixation; only cement was used as binder in the reference mortar. As shown in Table 4 and Figure 6, only pumice powder was used as the binder in mixture mortar 1-6.
Type-2 Mixing Ratios
Type-2 mixing ratios are shown in Table 5. The mixture contained pumice powder, lime and water. There was no cement in the mixture. Six different types of mixture were formed by taking the lime amount as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount.
As shown in Table 5 and Figure 7, only lime was used as binder in the mixture mortar 2-6.
Type-3 Mixing Ratios
Type-3 mixing ratios are shown in Table 6. The mixture contained cement, lime and water. There was no pumice powder in the mixture. Table 6. Type-3 mixing ratios.
Type-4 Mixing Ratios
Type-4 mixtures included cement, pumice powder, lime and water. The mixing ratio of each type was different. Nine different types of mixtures were produced from Type-4 mixtures.
Type-4-1 Mixing Ratios
Type-4-1 mixing ratios are shown in Table 7. In this section, the mixing ratio with the highest compressive strength was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of lime as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount.
Type-4-2 Mixing Ratios
Type-4-2 mixing ratios are shown in Table 8. In this section, the mixing ratio with the highest compressive strength in the second type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of lime as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount.
Type-4-3 Mixing Ratios
Type-4-3 mixing ratios are shown in Table 9. In this section, the mixing ratio with the highest compressive strength in the first type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of lime as 0%, 20%, 40%, 60%, 80% and 100% of the (cement + pumice powder) amount.
Type-4-4 Mixing Ratios
Type-4-4 mixing ratios are shown in Table 10. In this section, the mixing ratio with the highest compressive strength in the second type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of cement as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount.
Type-4-5 Mixing Ratios
Type-4-5 mixing ratios are given in Table 11. In this section, the mixing ratio with the highest compressive strength in the second type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of cement as 0%, 20%, 40%, 60%, 80% and 100% of the lime amount.
Type-4-6 Mixing Ratios
Type-4-6 mixing ratios are presented in Table 12. In this section, the mixing ratio with the highest compressive strength in the second type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of cement as 0%, 20%, 40%, 60%, 80% and 100% of the (pumice powder + lime) amount.
Type-4-7 Mixing Ratios
Type-4-7 mixing ratios are given in Table 13. In this section, the mixing ratio with the highest compressive strength in the third type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of pumice powder as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount.
Type-4-8 Mixing Ratios
Type-4-8 mixing ratios are shown in Table 14. In this section, the mixing ratio with the highest compressive strength in the third type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of pumice powder as 0%, 20%, 40%, 60%, 80% and 100% of the lime amount.
Type-4-9 Mixing Ratios
Type-4-9 mixing ratios are shown in Table 15. In this section, the mixing ratio with the highest compressive strength in the third type mixture was considered. In the mixture ratio with the highest compressive strength, six different types of mixture were formed by considering the amount of pumice powder as 0%, 20%, 40%, 60%, 80% and 100% of the (cement + lime) amount.
Samples for optimum binder fixation were prepared for 7-day compressive strength determination; for each mixture, three pieces with dimensions of 150 × 150 × 150 mm were prepared. Samples were cured for 7 days at 20 ± 2 °C by standard water curing. Samples taken into the curing pool are shown in Figure 8.
Reference Concrete Mixing Ratios
In reference concrete production, CEM I 42.5 R type cement, which complied with TS EN 197-1 (EN 197-1:2011) standards, crushed limestone as aggregate, and Bitlis city water qualifying as drinking water were used. The reference concrete class was taken as C30/37. In the study, reference concrete samples with dimensions of 150 × 150 × 150 mm were prepared for 7- and 28-day compressive strength determination, three for each age and six in total. Three samples were cured with a 20 ± 2 °C standard water curing of 7 days, and the other three samples were cured with a 20 ± 2 °C standard water curing of 28 days. The curing pool is shown in Figure 8. The quantities of reference concrete materials are shown in Table 16. Six samples with dimensions of 100 × 100 × 400 mm were prepared for bending strength; three of these were treated with a 20 ± 2 °C standard water curing of 7 days, and the other three with a 20 ± 2 °C standard water curing of 28 days. Compressive and bending strength tests of the samples were performed after curing. The TS EN 12390-3 (2010) standard (EN 12390-3/2001) [28] was used in the compressive strength test, and the TS EN 12390-5 (2010) standard (EN 12390-5:2000) [29] was used in the bending strength test.
Mixing Ratios of Concrete with Optimum Binding Ratio (PPCC)
In the production of PPCC, CEM I 42.5 R type cement complying with TS EN 197-1 (EN 197-1:2011) standards, 0–0.04 mm pumice powder, and lime were used as binders. Crushed limestone was used as aggregate and potable Bitlis city water as the concrete mixing water. The C30/37 concrete class was considered in PPCC production. Three samples from each type of mixture were taken, and the average of these three values was calculated. The material mixing ratios obtained at the optimum binder ratio are shown in Table 17. The water/binder ratios of the reference concrete (RC) and of the concrete prepared at the optimum binder ratio (PPCC) were taken as 0.42, as shown in Tables 16 and 17.
Sieve Analysis Method
The recommended slump value for pavement concrete is 3 cm according to Table 18 [14]. For a 3 cm slump with a water/cement (W/C) ratio of 0.42, the cement amount was taken as 450 kg/m³ and the approximate water amount, according to the w/c ratio, as 189 kg/m³. The mix-design target strength for a 0.42 w/c ratio was found to be 380 kg/cm² (38 MPa) [14]. Table 19 shows the approximate w/c ratios according to the concrete compressive strengths. Since the maximum w/c ratio in pavement concrete is desired to be between 0.40 and 0.45, for the non-air-entrained concrete with a w/c ratio of 0.42, an average target compressive strength with a 28-day compressive strength of 40 MPa was found [16,30]. In accordance with Table 19, the reference concrete (RC) and optimum binder concrete (PPCC) class was established as C30/37, considering the mean target compressive strength. C30/37 concrete properties can be seen in Table 20 [16,30]. The amount of aggregate required for sieve analysis is given in Table 21 [16,31]. Reference concrete (RC) and optimum binder concrete (PPCC) sieve analyses were performed according to the TS EN 933-1 (2012) (EN 933-1:2012) standard [31]. As stated in TS EN 933-1 (2012) (EN 933-1:2012) for sieve analysis, a 3 kg sample was taken considering the largest aggregate grain size in the concrete, which was 16 mm [31].
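As a quick check of the mix-design arithmetic above, the water demand follows directly from the quoted cement content and w/c ratio. The sketch below is illustrative only; the variable names are assumptions, while the 450 kg/m³ and 0.42 values are those quoted in the text.

```python
# Mix-design arithmetic for the pavement concrete described above.
# Inputs are the values quoted in the text; the water amount follows from them.

cement_kg_per_m3 = 450.0      # cement content assumed for a 3 cm slump
w_c_ratio = 0.42              # target water/cement ratio (0.40-0.45 allowed)

water_kg_per_m3 = cement_kg_per_m3 * w_c_ratio
print(f"Approximate mixing water: {water_kg_per_m3:.0f} kg/m3")   # -> 189 kg/m3
```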
Compressive Strength Test Results for Optimum Binding Ratio Determination
Samples prepared for compressive strength tests are shown in Figure 9. The appearance of the samples in the compressive strength tester is shown in Figure 10. The results for the 7-day period are presented in Sections 3.1.1-3.1.4. Three samples for each mixture design were taken, and the average of these three values was calculated.
Type-1 Compressive Strength Test Results
Type-1 mixture quantity, unit volume weight (BHA) and compressive strength test results are shown in Table 22. Pumice powder was taken as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount. The mixture 1-1 was the reference mortar of the binder paste. Only cement was used as binder in the reference mortar. According to the reference mortar, the mixture with the highest strength was the 1-2 mixture and its compressive strength was found to be 17.3 MPa.
Type-2 Compressive Strength Test Results
Type-2 mixture quantity, unit volume weight and compressive strength test results are shown in Table 23. The amounts of lime were taken as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount.
As can be seen in Table 23, in this section, the mixture with the highest strength was 2-3 type mixture and its compressive strength was calculated as 0.5 MPa.
Type-3 Compressive Strength Test Results
Type-3 mixture quantity, unit volume weight and compressive strength test results are shown in Table 24. The amount of lime was taken as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount. As shown in Table 24, the 3-1 mixture was the reference mortar of the binder paste. Only cement was used as binder in the reference mortar. In this section, the mixture with the highest strength, relative to the reference mortar, was 3-2, and its compressive strength was found to be 20.1 MPa.
Type-4 Compressive Strength Test Results
Compressive strength tests of the nine different types of mixture obtained from Type-4 mixtures were performed and the strengths of binder pastes were calculated.
Type-4-1 Compressive Strength Test Results
Type-4-1 mixture quantity, weight per unit of volume and compressive strength test results are shown in Table 25. In this section, the mixing ratio with the highest compressive strength in the Type-1 mixture (Table 22) was considered. Accordingly, in the mixing ratio with the highest compressive strength, the amount of lime was taken as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount. As shown in Table 25, the highest strength in this section was the 4-1-1 type mixture, with a compressive strength of 17.3 MPa.
Type-4-2 Compression Test Results
Type-4-2 mixture quantity, weight per unit of volume and compression test results are shown in Table 26. In this section, the mixing ratio having the highest compressive strength in the first type of mixture (Table 22) was considered. In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of lime were taken as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount. As shown in Table 26, the highest strength in this section was a 4-2-6 type mixture with a compressive strength of 20.1 MPa.
Type-4-3 Compression Test Results
Type-4-3 mixture quantity, weight per unit of volume and compression test results are shown in Table 27. In this section, the mixing ratio having the highest compressive strength in the first type of mixture (Table 22) was considered. In the mixing ratio with the highest compressive strength, the amounts of lime were taken as 0%, 20%, 40%, 60%, 80% and 100% of the (cement + pumice powder) amount, respectively. As shown in Table 27, the highest strength in this section was a 4-3-1 type mixture with a compressive strength of 17.3 MPa.
Type-4-4 Compression Test Results
Type-4-4 mixture quantity, weight per unit of volume and compression test results are shown in Table 28. In this section, the mixing ratio having the highest compressive strength in the second type of mixture (Table 23) was considered.
In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of cement were taken as 0%, 20%, 40%, 60%, 80% and 100% of the pumice powder amount. As shown in Table 28, the highest strength in this section was a 4-4-6 type mixture with a compressive strength of 20.2 MPa.
Type-4-5 Compression Test Results
Type-4-5 mixture quantity, weight per unit of volume and compression test results are shown in Table 29. In this section, the mixing ratio having the highest compressive strength in the second type of mixture (Table 23) was considered. In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of cement were taken as 0%, 20%, 40%, 60%, 80% and 100% of the lime amount. As seen in Table 29, the highest strength in this section was a 4-5-6 type mixture, the compressive strength of which was 7.6 MPa.
Type-4-6 Compression Test Results
Type-4-6 mixture quantity, weight per unit of volume and compression test results are shown in Table 30. In this section, the mixing ratio with the highest compressive strength in the second type of mixture (Table 23) was considered. In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of cement were taken as 0%, 20%, 40%, 60%, 80% and 100% of the (pumice powder + lime) amount. As seen in Table 30, the highest strength in this section was 4-6-6 type mixture, the compressive strength of which was 21.9 MPa.
Type-4-7 Compression Test Results
Type-4-7 mixture quantity, weight per unit of volume and compression test results are shown in Table 31. In this section, the mixing ratio, having the highest compressive strength in the third type of mixture (Table 24) was considered. In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of pumice powder were taken as 0%, 20%, 40%, 60%, 80% and 100% of the cement amount. As shown in Table 31, the highest strength in this section was a 4-7-1 type mixture with a compressive strength of 20.1 MPa.
Type-4-8 Compression Test Results
Type-4-8 mixture quantity, weight per unit of volume and compression test results are shown in Table 32. In this section, the mixing ratio having the highest compressive strength in the third type mixture (Table 24) was considered. In the mixing ratio, the compressive strength of which was found to be the highest, the amounts of pumice powder were taken as 0%, 20%, 40%, 60%, 80% and 100% of the lime amount. As shown in Table 32, the highest strength in this section was a 4-8-1 type mixture with a compressive strength of 20.1 MPa.
Type-4-9 Compression Test Results
Type-4-9 mixture quantity, weight per unit of volume and compression test results are shown in Table 33. In this section, the mixing ratio having the highest compressive strength in the third type mixture (Table 24) was considered.
In the mixing ratio with the highest compressive strength, the amounts of pumice powder were taken as 0%, 20%, 40%, 60%, 80% and 100% of the (cement + lime) amount. As shown in Table 33, the highest strength in this section was the 4-9-1 type mixture, with a compressive strength of 20.1 MPa. The maximum compressive strengths of each type of mixture are shown in Table 34, taking into account all mixing ratios. As can be seen in Table 34, regarding the fixation of the optimum binder ratio, the highest strength was the 4-6-6 type mixture, with a compressive strength of 21.9 MPa.
The compressive strength test results of all mixture types are shown in Figure 11. As shown in Figure 11, regarding the fixation of optimum binder ratio, the highest strength was a 4-6-6 type mixture, the compressive strength of which was 21.9 MPa.
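The selection of the optimum binder ratio summarized above amounts to averaging the three cube results for each mixture and taking the mixture with the highest mean. The sketch below illustrates that selection step; the individual cube values are hypothetical placeholders, and only the reported means (17.3, 20.1 and 21.9 MPa) come from the text.

```python
# Minimal sketch of the optimum-binder selection step: average the three cube
# results per mixture and pick the mixture with the highest mean strength.
# The individual cube values below are hypothetical placeholders; only the
# reported means (e.g. 21.9 MPa for mixture 4-6-6) come from the text.

from statistics import mean

cube_strengths_mpa = {
    "1-2":   [17.1, 17.3, 17.5],
    "3-2":   [19.9, 20.1, 20.3],
    "4-6-6": [21.7, 21.9, 22.1],
}

averages = {mix: mean(vals) for mix, vals in cube_strengths_mpa.items()}
best_mix = max(averages, key=averages.get)
print(f"Optimum binder mixture: {best_mix} ({averages[best_mix]:.1f} MPa)")
```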
Sieve Analysis Results
Reference concrete (RC) and optimum binder concrete (PPCC) sieve analysis is shown in Table 35.
Figure 11. The average compressive strength test results of all mixture types.
The reference concrete (RC) and optimum binder concrete (PPCC) sieve analysis graph is shown in Figure 12. As shown in Figure 12, the aggregate granulometry of the concretes conformed to the TS 802 (2016) standard. The TS 802 (2016) (ACI 211.1-91) standard emphasizes that the gradation curve for such aggregates must lie between lines A16 and B16 or between lines B16 and C16. Table 36 shows the compressive and bending strength test results of the reference concrete (RC) and the optimum binder concrete (PPCC). The results are presented for 7-day and 28-day periods. Three samples for each mixture design under each curing condition were taken, and the average of these three values was calculated.
Concrete Compressive and Bending Strength Test Results
The average compressive strength test results of reference concrete (RC) and optimum binder concrete (PPCC) are shown in Figure 13. The average bending strength test results of reference concrete (RC) and optimum binder concrete (PPCC) are shown in Figure 14.
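The comparison in Figures 13 and 14 can also be expressed as simple strength ratios. The sketch below is illustrative; it uses the 7- and 28-day averages reported for RC and PPCC elsewhere in the paper (33.8/38.2 MPa versus 25.1/28.3 MPa in compression, 4.2/4.7 MPa versus 3.2/3.5 MPa in bending) and prints the PPCC strength as a percentage of the RC strength.

```python
# Ratio of PPCC to RC strengths, using the 7- and 28-day averages reported in the study.

rc   = {"compressive": (33.8, 38.2), "bending": (4.2, 4.7)}   # (7-day, 28-day) in MPa
ppcc = {"compressive": (25.1, 28.3), "bending": (3.2, 3.5)}

for test in rc:
    for age, value_rc, value_ppcc in zip(("7-day", "28-day"), rc[test], ppcc[test]):
        print(f"{test} {age}: PPCC reaches {100 * value_ppcc / value_rc:.0f}% of RC")
```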
Conclusions
Pumice powder is a waste material that contributes to environmental pollution and waste landfills. The presence of pores in coarse pumice gives it extremely low compressive and bending strengths, and therefore such material cannot be used as aggregate in concrete production. On this basis, coarse pumice has limited applicability in the construction sector. In this study, the usability of pumice powder and lime in concrete production as a binding additive for concrete road pavement was investigated. A total of 72 types of concrete samples were composed with different mixing ratios formed from cement, pumice powder and lime mixtures. The most appropriate ratios of cement, pumice powder and lime as the binding additive were determined as a result of all the experiments. Following the determination of the optimum binder ratio, these new binder ratios were used in crushed limestone concrete production. Compressive and bending strength tests of the new concrete produced were performed. The concrete thus formed was named concrete containing cement, pumice powder and lime (PPCC). The normally produced concrete, without pumice powder and lime binder, was selected as the reference concrete (RC). The reference concrete and PPCC concrete were cured with standard water curing of 7 and 28 days. The following results were obtained in the study: • Regarding the total binder amount, the most appropriate binder ratio was found to be 50% cement, 30% pumice powder and 20% lime; • The 7- and 28-day average compressive strengths of the reference concrete, cured at 20 ± 2 °C, were found to be 33.8 MPa and 38.2 MPa, and its bending strengths were 4.2 MPa and 4.7 MPa; • The 7- and 28-day average compressive strengths of PPCC were found to be 25.1 MPa and 28.3 MPa, and its bending strengths were 3.2 MPa and 3.5 MPa. Funding: This research has not received any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. | 8,282.4 | 2019-08-27T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
On Being at Home in Ourselves and the World: Love, Sex, Gender, and Justice
Helga Varden’s Sex, Love, & Gender: A Kantian Theory (2020) is a rigorous, beautiful, and transformative book, which does vital work not only in fully developing how Kant’s complex understandings of desire, reflection, and relationality should inform our understanding of his arguments about sex and love but also in positioning these Kantian arguments as absolutely critical resources to contemporary debates about gender identity, sexual orientation, and sexual (in)justice. Rarely is a book so comprehensive, so coherent, and so grounded in a vulnerability we rarely find in philosophy; rarely does it so radically expand the resources we have for dealing with what seems like a familiar problem in such a well-read figure. The literature on Kant and sex is extensive, and yet this book absolutely revolutionizes the kinds of questions we can ask about Kant on sex, love, and gender.
Second, Varden opens with a lineage of both Kantian scholarship by women and feminist Kant scholarship, demonstrating the rich and varied ways that Kant scholarship has been transformed over the past four decades by the influx of women into the field, and revealing Kant scholarship as a site of (perhaps surprising) feminist philosophical innovation. For me, this is both resonant and comforting. I came to Kant because it was, at the time, the only seminar taught by a woman in my graduate department, and as such, was the only seminar in which I was not harassed, belittled, or silenced. Writing about Kant not only allowed me to work with a woman advisor; he provided cover for pursuing questions about love, sex, gender, and race that were not understood as "philosophical" within my graduate department, at least at the time I took them up. Varden's book articulates my own sense of Kant scholarship as a gateway into feminist philosophy, as a rare space in mainstream philosophical scholarship that passes, if you will, a kind of philosophical Bechdel test. My engagement with Varden's book is oriented through this gratitude, and through the sense of belonging that is at the center of Varden's project here: an attendance to the ways that women belong in Kant scholarship, that non-ideal questions of love, sex and gender belong in Kantian philosophy, and that the experiences, desires, and traumas of women and LGBTQIA people belong in philosophical inquiries into what it is to be human. Accordingly, I begin by tracing Varden's argument through a central theme of the book: that one way to think about problems of love, sex, and gender, from both a phenomenological and a political perspective, is to tend to the importance of being at home with oneself, in the world, and with others. I explore how this framework allows Varden to develop a distinctly and innovatively Kantian account of our sexually loving and gendered selves, and their implications both for questions of virtue and morality, and for questions of justice. I then consider the ways that Varden's analysis provides us with much needed resources to think about how inhabiting a self-defensive stance in the face of oppression may violate our duties to resist our own oppression. Finally, having traced the arguments at the heart of the book, I turn to two puzzles in Varden's account of the just state: her understanding of sexual consent, and her defense of the state's right to restrict abortion.
Our Sexually Loving and Gendered Selves: On Being "at Home" in Ourselves and with Others
At the heart of Varden's argument is a study of our sexual selves, which can be understood as a Kantian map of what Linda Alcoff has called the "making capacities" of our sexual selves (2018). Varden teases out the phenomenology of our humanity, including the moral psychology of our sexual selves, laid out across Kant's practical philosophy, in order to make clear that developing or realizing our sexual and gendered selves is a particularly important human project, one that not only integrates and develops our rational capacities, but also our predispositions to animality and humanity. For Varden, any account of our sexual selves must make space for the ways in which sexual desire, sexual identity, and gender identity are part of how we feel at home in the world and in ourselves, and so are not entirely reflective. This means tending to the tension between the "importantly unreflective" dimension of our sexual selves, and the fact that we can nevertheless be responsible for the sexual ends we choose for ourselves: despite the "givenness" of much of our desires, preferences, and limitations, our faculty of desire is reflective, allowing us to step back and consider what we want, and how it fits into our broader life projects.
Varden's key innovation is in drawing on Kant in order to attend to the structure of this self: how our animality, humanity, and personality as distinctively human (and not "embodied rational") beings allow us to both tend to the "importantly unreflective" dimension of our sexual selves, and to the fact that we can nevertheless be responsible for the sexual ends we choose for ourselves. Thus, for Varden, "sexuality concerns deeply unreflective aspects of us, namely basic ways in which we feel at one with ourselves, at home in the world as who we are, including when together with others" (129). How one orients sexually, then, is not only a question of desire, but of "tracking something true about oneself with regard to how one feels others can complete one safely in sexual and/or affectionately loving activities as an us" (125). Our sexuality is an essential dimension of how we feel safe in ourselves (our animality), of how we feel at home in relationship to others and in the social world (our humanity), and of how we express and integrate ourselves in these relationships, morally and otherwise, in ways that make us responsible to ourselves and others (our personality).
Varden's analysis emphasizes our practices of setting and pursuing ends for ourselves, while at the same time insisting that while our end-setting projects "develop, transform, and integrate" these aspects of our being, they are not simply subject to choice: they are integral to self-preservation, our social embeddedness, and our moral lives. This allows us to make space for the "givenness" of the ways we feel most at home in the world, including how we feel most at home with others (125). On Varden's account, this is a significant strength of the Kantian account over both essentialist-determinist and social construction accounts, in that it can make sense of the ways in which there is an unreflective "givenness" to our desires, our sexual orientations, and our sense of ourselves as gendered beings. For Varden, this is a particularly powerful framework in that it makes sense of trans experiences, understood as a "deeply felt need to adjust one's physical embodiment so that it fits better with one's subjective experience of oneself" (126), in both embodied and expressive ways. If there is a "givenness" to the unreflective parts of ourselves, and we are required to set and pursue ends in the world in ways that put us at odds with these unreflective parts of ourselves - if we are expected to orient ourselves sexually towards those we do not desire, or required to move through the world in a physical body that does not fit with our deeply felt subjective experience - then it becomes impossible for us to "develop, transform, and integrate" all the aspects of our being. We cannot be at home in ourselves, we cannot be at home in the world, and we especially cannot be at home in our intimate relations with others. Therefore, failing to get this right, and so creating or reproducing a world in which people are not able to feel at home in themselves and with others in these ways, is particularly harmful, as reflected by the high rates of suicide amongst LGBTQIA folk, even in our most liberal states: these are "high stakes" questions of human phenomenology, morality, and justice (113). In its attention to the kinds of suffering experienced by those who face violence, oppression, or belittlement in the project of making their sexual and gendered selves, this book makes a particularly important contribution to our ability to articulate why our gender, sex, and sexual identities are so important to us, and why our freedom to set and pursue ends that allow us to integrate, develop, transform, express, and share these identities is central to our experiences as human beings.
Varden's approach refuses Kant's framing of sex as a "cannibalistic" use of another person, and centers the question of what it means to be sexually loving, such that "being sexually attracted to someone is to want their person -and not just their body -as we want the other person to show us their aesthetic, creative playfulness and invite us to be part of their endeavor to develop themselves as who they are, an endeavor that requires us to learn to show respect for one another in this process, and so, pushes us towards morality" (120). By exploring the ways that our sexually loving selves are key to our ability to feel at home in ourselves, in the world, and with others, Varden's account has ample resources to think through sexual violation and trauma in innovative and important ways. As the propensity to good orients her account of the structure of our phenomenological selves, the propensity to evil organizes her account of violation and trauma. Violation and trauma may take the form of frailty (giving into temptation, or being wronged when another does so), impurity (acting on the wrong motives, or being wronged when another does so), and depravity (self-deceptively acting on the wrong motives while telling ourselves they are good, or being harmed in the process). Varden draws on Kant's anthropological writings -including his analysis of the ways that women "dominate" men, and vice versa -in order to show how these violations shape persistent patterns of domination and oppression, and to identify the failures of justice that arise when these patterns shape "pockets" of barbarism or depravity within the state.
Thus, the question of what it means to be "at home" in oneself, in the world, and in relationships with others is at the center of Varden's account of justice. Her analysis of the relationship between the minimally just and the robustly just state is developed through a compelling argument about why access not just to housing, but to the "private" space of a home is particularly critical to LGBTQIA ways of being. Varden argues that while ensuring "emergency" housing may be a sufficient form of poverty relief for a minimally just state, one of the first priorities of states working towards a more robustly just condition must be ensuring that LGBTQIA folk, as well as survivors of rape and domestic abuse, have access to a home in which they have the privacy to "realize [their] sexual, loving, gendered selves," to present themselves in ways that are "open and vulnerable," and to "ground the lives of human beings in their relationships to themselves and others" (308). There is a critical difference, here, between having access to "housing" and having a home which mirrors the ways that, in the first half of the book, she insists on an account of being "at home in oneself " in ways that are profoundly human, not limited to an abstract account of what rational, embodied beings might need to express themselves.
Home, in this sense, is not only housing: Varden describes the right to marry as the right to create a home together, emphasizing the ways that when same-sex and polyamorous couples are denied the right to marry, they are "not given access to laws constitutive of a rightful legal, personal, domestic "us"" (258), which is harmful because "creating a shared, legally recognized home is constitutive of creating such rightful relations with another person" (211). Being denied these rights forces one into a kind of relational "state of nature", since one cannot rely on the public authority to authorize one's rights to one another, and thus to a shared life, a shared home (290). In such cases, Varden argues, one may find oneself in a "pocket" of injustice in an otherwise just state, such as when laws exclude some people, like same-sex couples, from rights that are ostensibly open to all, or when the state creates conditions in which some parts of our lives are subject to unjust laws or barbarity (as in the case of trans people denied gender affirming surgery, or women denied abortions) (288). In these cases, oppressed people are forced into morally impossible situations, in which the duty to obey the law (e.g., not to engage in formal wrongdoing by breaking the law) is in tension with the duties one has to resist one's own oppression.
Making and Protecting Our Sexual Selves: End-Setting and the Duty to Resist Our Own Oppression
Varden's worry about the impossible positions people face when resisting their own oppression requires breaking the laws of a (minimally) just state is informed by her claim that the duty to resist our own oppression, and particularly the oppression of our gendered, sexually loving selves, is a perfect duty. Other Kantian feminists have framed this as an imperfect duty (Hay 2013; Cudd 2006), allowing us significant latitude in how we fulfill it. But for Varden, because we have "perfect duties not to treat ourselves and each other in aggressive (destructive and damaging) ways" (153), our duty to resist being treated in such a way is a perfect duty (even when it conflicts with the law, placing us in an impossible situation). This is a demanding account, but one that provides us, I think, with important resources for thinking about how resisting our own oppression can shape our sexual selves, particularly under the kinds of pervasive conditions of sexualized oppression in which LGBTQIA folk and women continue to live.
Varden's analysis of the structure of our sexual selves frames end-setting as a deep and grounding part of the project of being who we are, the mechanism through which we integrate all the parts of ourselves, including our sexually loving and gendered selves, and the ways we are at home both with ourselves and with others. Doing this in a way that is productive to each and all is, Varden argues, decidedly difficult, and so we must be attentive to both the harms that can come from being oppressed or violated in these projects, and the conditions of justice in the world that allow us to set and pursue ends that reflect the selves we develop, transform, and integrate. Both our perfect and imperfect duties play a role here: perfect duties alert us to self-and other-destructing behavior, for example by setting limits on the kinds of ends we can permissibly set (e.g. I can't set sexual ends that violate others' rights to set sexual ends of their own, like rape). Our imperfect duties, on the other hand, hold us accountable to our own happiness and development, and to our duties to assist others in theirs.
In other words, our duties to resist the oppression of our sexual and gendered selves must take the form both of resisting or refusing damaging and destructive pressures and of actively promoting our own happiness by setting and pursuing the sorts of ends that allow us to feel at home in ourselves and in the world. This provides us with a more robust model for thinking about what resisting oppression entails, and it avoids a pitfall of those accounts that take a kind of self-defensive resistance to oppression to be our core duty. For example, Carol Hay argues that setting ends to resist our own oppression is an imperfect duty to ourselves, and the examples she has in mind include taking active steps to report harassment, or engaging in internal resistance to make the wrong of emotional abuse apparent to ourselves (2013). These are indeed ends of resistance to the violations that might characterize our gendered, embodied, and sexual lives. But when we are oriented primarily through resistance, our duties to resist may come to shape our end-setting projects in ways that are transformative and limiting. I am thinking of cases where we prioritize ends of resistance, so that our sexual ends take the form of not wanting to be raped, or not wanting to be traumatized. In these instances, our (necessary) attentiveness to our own oppression can become an orientation, in a sense: we no longer know what sorts of things we may want, because we are so busy orienting ourselves in response to our oppression. We see this when, for example, women are trained to be attentive to sexual threat or violation at the expense of an attunement to their own desires, limits, preferences, or ends; when women and LGBTQIA folk are trained to perform hyper-sexuality as a mode of self-defense; when we have no answer to "what do you want?" because we have not learned to want anything other than to not be violated. Varden's framework allows us to identify such self-defensive sexual ends as a violation of both our perfect and imperfect duties to ourselves. These are violations of our perfect duties when we become complicit in being treated in damaging and destructive ways, and they are violations of our imperfect duties to develop our sexual desires, limits, and preferences, to set sexual ends of our own that align with our distinctive conceptions of happiness, and not merely with projects of self-defense.
This, in turn, allows Varden to make important interventions in Kant's own infamous account of the distinction between "natural" and "unnatural" sex, which grounds his claim that because same-sex sex and sodomy involve the pursuit of "unnatural," non-procreative ends, they ought to be legally impermissible. There is both a moral and a legal argument here. Morally speaking, if our perfect duties to ourselves set limits on the kinds of ends we can set in the ways I suggest above, then the relevant question for our sexual ends is not whether an end is "unnatural" but whether it is destructive or damaging to our projects of making -developing, integrating, and transforming -our sexual selves. This is not something that can "be alleviated by thinking about it" (129), as Varden argues: I can't just reason my way out of what I want; it's not just a choice. And so given this, and given the sorts of both perfect and imperfect duties I have to myself, I have a duty to set, and pursue, the sorts of ends that allow me to "develop, integrate, and transform" the varied ends that I have.
From a legal perspective, Varden emphasizes that Kant's conception of innate right involves a robust conception of our bodies as an integral part of our person, essential to our capacity to set and pursue ends in the world. Kant's conception of freedom does set some limits on the kinds of ends we can pursue: our pursuit of those ends cannot subject us to the arbitrary choice of another, nor subject someone else, arbitrarily, to our ends, and it cannot violate our ability to set and pursue ends as rational, embodied, human beings. Thus, we can't sell ourselves into slavery, or consent to cannibalism, or consent to sell our organs for profit. This is both because these ends are self-destructive, and to set self-destructive ends violates, as we've seen, our perfect duties to ourselves; and because pursuing those ends, as these cases make clear, violates our freedom. Likewise, setting ends destructive to others (like rape or trafficking or cannibalism) violates our perfect duties to others, and pursuing those ends violates their freedom, necessitating coercive legal enforcement.
Otherwise, however, Varden argues that we are free to set any kinds of sexual ends that we want, and to pursue them together with others through consent - and therefore, Kant must be mistaken in his assertion that we ought to be legally barred from setting and pursuing so-called "unnatural" ends. On Varden's account, sexual relations - procreative or not - are "rightful as long as they are authorized by continuous consent" (237); there's no reason to accept distinctions between the sorts of sex that are "natural" and those that are "not", nor to accept the Kantian line that sex is permissible only within marriage. Instead, "authorizing consent" is the mechanism through which we can permissibly pursue our sexual ends with one another, and as long as we have a robust conception of consent - one in which consent can be withdrawn at any point (so that access to my body cannot be authorized against my will), and which disqualifies minors, the incapacitated, impaired, coerced, or deceived from authorizing consent - then sex is permissible, regardless of the sexual ends in question ("unnatural" or otherwise).
In making this argument, Varden admits that she departs from Kant's own account, not only in that she rejects his own homophobic, cisist account of "natural" sex, but also in that she rejects his assumption that sex is particularly morally dangerous, and thus requires a special relationship - one that goes beyond consent - to authorize it (257). She has good reason to do so: as she points out, Kant's critique of sexual consent seems to hinge on the premise that sex is profoundly morally dangerous, and thus permissible only within marriage, and neither claim has withstood the test of time. Moreover, Varden's account of the central role of "authorizing consent" in sexual morality is in line with contemporary understandings of the "moral magic of consent", transforming what would otherwise be violations into permissible actions. So, I think there are good reasons for accepting Varden's move here, and taking authorizing consent to be an essential feature of Kantian sexual justice and morality. But it is nevertheless valuable to explore how Kant's own concerns about sexual consent might inform Varden's conception of our sexual selves, and of sexual justice.
Puzzles for the Project of Justice: Consent
For someone who had a good deal to say about sex, Kant has surprisingly little to say about consent, and what he does say is primarily concerned not with its power to authorize, but with mapping the ways that consent alone fails to authorize permissible sexual relations: in morganatic marriage, in prostitution, in concubinage, in same-sex relations, and in marriage itself. This is because, as Varden notes, Kant had a decidedly grim understanding of sex, and he thought that it involved using ourselves and others in ways that violate both our duties to ourselves and our innate right. Since the body and the person are an analytic unity, to use the body of another is to use their person. So, all kinds of sex -"unnatural" or not -subject us to the danger of being used like things. In this sense, as feminist Kantians have long argued, Kant's conception of sex is not radically unlike feminist concerns about objectification.
Kant solves this problem by proposing that sex is morally acceptable, and legally permissible, only within marriage. His conception of marriage, as Varden argues, is quite robust, organizing shared private lives, granting status relations that allow persons to set and pursue ends together in ways that do not violate one another's external freedom. For Varden, Kantian marriage is important because it creates conditions in which persons can have legally recognized shared lives, with legally enforceable rights to one another. Marriage produces legal equality between spouses, by granting them reciprocal rights to one another's persons, and shared ownership of their possessions. So, it checks the various inequalities that create power imbalances in other sorts of sexual relations, like morganatic marriage and concubinage. But in doing so, it gives married partners special standing with regards to one another's ends: partners share their legal standing, their possessions, and their persons. In each of these ways, they are reciprocally bound to share one another's ends.
For Kant, this is what does the magic: sex is permissible within marriage because marriage is a "shared community of ends." Neither partner can make use of one another as a mere means, since they are united in their commitments to one another's ends; by sharing ends, even the most objectifying, "cannibalistic" sex remains aligned with both partners' ends (even when those ends are "unnatural" on Kant's own account).
The central distinction between Varden's account and Kant's account is not the nature of marriage, but the nature of sex. For Kant, sex is morally dangerous and cannibalistic. Consenting to be used in a way that is cannibalizing and objectifying doesn't solve the problem: you're still being objectified and consumed, even though you consented. For Varden, sex is an essential feature of our sexually loving selves, and when we desire a person, we desire not just their body but their person in ways that can be profoundly humanizing and affirming. Consent affirms our right to pursue these sorts of activities with one another (provided we don't pursue activities like actual cannibalism). In this sense, for Varden, consenting to sex is not significantly different from consenting to other activities, like a game of squash (132). For Kant, because sex involves the direct use of the body, consenting to sex is radically different from consenting to other sorts of activities. We can see the distinction most clearly in how Varden and Kant think about the relationship between sex and slavery: Varden clarifies the permissibility of consenting to sex by contrasting it to the impermissibility of consenting to enslavement (238). But for Kant, the relation that sex was most like was enslavement, and much of his early political writings on sex focused on the difficulties of distinguishing sexual contracts from slave contracts, since both involved an impermissible use of a person's body that could not be resolved by consent alone (Kant 6:360). 2 As I said above, I think we have good reasons to side with Varden here: Kant's account of sex is unaccountably grim, and his insistence on the similarities between sexual use and enslavement should trouble us both because it hyperbolizes the moral dangers of sex, and underemphasizes the profound violations of enslavement. Sex is a central part of both our self-making and our ways of being at home with others, and consent is an essential mechanism for allowing us to engage in these ways with others. But I also think that we ought not to dismiss too easily Kant's worry that some sex is objectifying and dehumanizing (as MeToo powerfully reminded us) and that consent alone cannot resolve the kinds of violations that this sort of sex poses: if consent is necessary to ensure that sex does not subject us to the arbitrary will of another, it does not follow that consent is sufficient for ensuring that sex is consistent with our innate freedom. In Kant's account of marriage as a shared community of ends I think we have some valuable resources for thinking beyond consent in ways that are consistent with feminist, LGBTQIA, and kink concerns.
To see what I mean, let's turn to an example Varden offers to test the distinction between permissible and impermissible forms of consent. She argues that, while it might be permissible for me to donate an organ to a loved one, or even a stranger, in order to enhance their chances of survival, it would be impermissible for the law to allow me to sell that same organ for profit: in the latter case, the law is authorizing contracts that allow me to be harmed or partially destroyed in order to benefit another, thus authorizing the violation of my innate right (239). A libertarian might say: but of course you can consent to sell your organs; you can consent to anything you like, just as you can set any ends - like profit - that you like. But Kant will say both that the state cannot authorize my consent to sell my organs, and that I cannot pursue an end of selling my organs: there are limits on the kinds of ends I can pursue, and the state can regulate those ends by regulating the mechanisms through which I can pursue those ends with others in the world. Now, we might look at this example and come away thinking that selling one's organs for profit is analogous to selling one's body for profit, and that if the state can prohibit organ sales, it can prohibit prostitution. And that may be, but it is not my point. Rather, I want to pay attention to the difference between selling and donating my organs, arguing that it doesn't map to the distinction between prostitution and consensual sex, but to the distinction between sexual consent and the sharing of sexual ends. When I contract to sell my organs for profit, the contract may fulfill my ends - profit - while fulfilling the other party's ends: access to lifesaving organs. I don't care what their ends are, or what they plan to do with my organs. I care only about the profit: I've consented to the surgery required to remove my organs in order to achieve my ends. My ends, and my reasons, are what concern me. And this is characteristic of consent: I agree to someone else's proposal because doing so allows me to pursue or fulfill ends I have set for myself. I don't much care about their ends, or their reasons; indeed, I don't really need to know what their ends, or reasons, are. We authorize consent to things we don't understand all the time to fulfill our own ends: when we click "accept" on an iPhone's terms of service; when we sign waivers; when we have sex with a stranger for the first time. Consent is the mechanism through which we can pursue the ends we have set for ourselves in those cases where that pursuit involves others.
But when I donate my organs, to a loved one or to a stranger, the ends in question matter to me. Without profit to motivate me, I do not have an end of my own. Rather, I am motivated by ends I share with others involved: to save a life, to extend the life of a loved one. This doesn't mean I can control or dictate those ends: I don't get to choose who gets my organ, unless I am donating it to a specific person, and I don't get to dictate what the person with my kidney does with their life. But I am acting on ends I have chosen to share with others; I have transformed the ends I set for myself through participation in an individual or institutional community of shared ends. The shared end, of saving lives, or of saving this particular life, is what matters to me. Organ donation involves a shared value - of a particular life, or of the value of life - while organ selling involves only exchange value. Because this shared end becomes my end, my freedom is not violated, and the state can authorize my agreement to share in these ends. Its authorization may hinge on some evidence that I, in fact, understand and share in these ends, that I understand both the scope and the limits of the ends I share. This evidence operates to ensure that my actions do not violate my innate right, that I am not authorizing the use or destruction of my body as a mere means to someone else's end.
Kant's account of the analytic relation between the body and the will grounds innate right, and requires heightened legal scrutiny for any relation that authorizes the direct use of our bodies. When we share our bodies, Kant says, we must share our ends. This implies an epistemic duty to know the ends we share, as well as a duty to transform our own ends accordingly. There's an internal check here, since I can't share ends that violate my right or my capacities to set my own ends, and I can't share ends that conflict with my other ends or projects. And this is importantly different from consent, since I can consent to someone else's ends to use me as a means as long as doing so gets me to my end. Sharing ends involves developing an understanding of the ends involved, transforming our own end-setting projects accordingly, and integrating those shared ends into our own, broader end-setting projects. This doesn't mean that, to share an end, we have to share all our ends. I can donate an organ to someone without marrying them, and I can share my body with someone without marrying them, too. But when I donate an organ, or share my body, I must engage in a process of sharing the relevant ends in a context in which a determination of the relevance of the ends involved is part of the epistemic project of end-sharing. If donating my organ will change how I can eat, or what kind of exercise I can engage in, then those ends are relevant and would need to be transformed and integrated, as well. If my ends of being respected, pleasured, and remaining disease-free, unpregnant, and independent are relevant to a sexual encounter, then those ends need to be shared by my partner to the degree that they are relevant.
Kant, of course, argues that this sort of end-sharing is possible only within marriage. Marriage, as we've seen, makes end sharing possible by creating a set of background conditions that mitigate the inequalities and power imbalances that organize sexual relationships out in the world, making a shared community of ends possible. Consent, on the other hand, is designed to protect, rather than to transform, power dynamics, by treating parties as if they were equal for the purposes of consent. Kant is particularly attentive to how power dynamics operate through sexual consent, as in his discussions of prostitution, concubinage, and morganatic marriage. 3 Kant's conception of marriage, as Varden points out, deserves to be distinguished from "marriage-as-it-has-usually-existed" (123) in that its primary structural purpose is to create a legally enforceable condition of gendered and sexual equality - e.g., a condition in which ends can be shared. As Christine Korsgaard has pointed out, this has much in common with the Kantian conception of creating a "kingdom of ends", which is also characterized by a relation of reciprocity and equality in which end-sharing becomes viable (1996). One way of understanding Kant's ideal of marriage, then, is that it creates a "pocket" of "robust" justice even in a minimally just state, creating conditions for sexual justice even in a world which remains patriarchal (and heterosexist and cisist).
Thus, authorizing consent may provide a minimal step towards justice in the barbaric conditions of a patriarchal, heterosexist state, creating conditions in which sexual partners can interact as if they were equal, and meeting the minimal conditions of justice by ensuring that my pursuit of my own ends does not entail treating another as a means only. Varden makes clear that this sort of step towards justice is necessary when working to establish a rightful condition, and that an imperfect state may be entitled to enforce only minimal conditions of justice. Enforcing authorizing consent, then, is a critical step for any state on the way to justice. But this is not to say that it is sufficient for justice. Varden's argument provides a broad reframing of what a map of the route to sexual justice might look like -and a Kantian conception of just sex as a relation in which we share sexual ends in a condition of sexual and gendered equality and inclusivity is a valuable resource for that journey.
Puzzles for the Project of Justice: Abortion
In closing, I turn to another argument that hinges on this distinction between the minimal conditions of justice and a robust conception of a rightful state - a distinction which is amongst the most compelling features of Varden's book. There are a number of reasons to value Varden's analysis of abortion, particularly in the wake of the end of Roe v. Wade in the U.S., where the debate over abortion must adapt to a new reality of state-by-state abortion bans that are violating women's and pregnant persons' rights in devastating ways. 4 To begin with, Varden frames abortion as a question of innate right, which must deal with the ways that the relation between our bodies and our persons is "analytic", a necessary unity (218). Given this, Varden argues, "the reason why just states reject [strict] restrictions is that they enslave pregnant persons" (223) and create conditions in which the state denies pregnant persons equal protection under the law (229). This is not a mere question of abstract rights: given Varden's attentiveness to the deeply embodied, phenomenologically human dimensions of our experience, she has ample resources to take seriously the profound wrong of being forced to gestate against one's will, particularly in cases where the pregnancy was the result of an "ungrounding" sexual experience (e.g. assault or deception) (224). And justifying abortion only in these sorts of cases is insufficient: "what we are looking for…is a stronger argument, one that demonstrates the unjustifiability of outlawing ordinary abortions, not just exceptional ones" (226). Yet Varden's own description of the ways that forced pregnancy in exceptional cases "means forcing them to endure (for nine months!) serious, increasing physical manifestation in their own bodies of the violence and deception done to them" (224) could hold equally for the experiences of those who are forced, by unjust state law, to endure pregnancy as the physical manifestations of state violence. Once we recognize forced pregnancy and gestation as a grave violation of innate right, then it matters less how one became pregnant, and the distinction between "exceptional" and "ordinary" cases becomes less important (particularly given the ways that attentiveness to and exceptions for such "exceptional" cases are often deployed to "soften" the blow of brutal abortion bans).
Varden is clear that the right to abortion is insufficient, and emphasizes the importance of universal access to abortions, including access to clinics in all geographical areas, legally supported access for minors, and public funding for abortion access for all who need it. A just state protects pregnant persons only when it both secures ready access to abortion (231) and ensures state support for those who choose to become parents (232). Where the state fails to be just in these ways - whether in the case of contemporary abortion bans, or in the case of infanticide which Kant considers in the Doctrine of Right (6:333) - "the authority that is supposed to enable their rightful co-existence with others is radically failing in its ability to do so by permitting some spheres of interaction to remain 'barbarous'" (234). Varden's argument convincingly shows why rightful and accessible abortion is a key requirement for any state claiming to be just - and why outlawing abortion creates "pockets" of barbarity in which some citizens will face impossible choices.
However, having outlined the various ways that an imperfect state must protect rights and access to abortion, Varden argues that a robustly just state would place restrictions on abortion. Her claim is that such restrictions become rightful at the point in pregnancy at which the fetus is phenomenologically capable of "rationally unified spontaneous action," and is, therefore, deserving of legal protection. Varden acknowledges that there might be "reasonable disagreement" about when this point is (230), and she insists both that such restrictions could be rightful only given true access to abortion up to this point and that restrictive laws must include exceptions, like the health and mental health of the mother, as well as fetal abnormalities (233). Her argument, then, is that as long as abortion is truly accessible during the first trimester or so of pregnancy, then no one can be said to be pregnant against their will, and so the state is justified in restricting abortion in order to protect the fetus: at twelve weeks or so, the fetus ceases to be the sort of entity that merely "divides and multiplies" but becomes a "rationally unified spatio-temporal being" with legal standing (229). This is not full legal standing, to be sure; Varden specifies that because the fetus remains inside the pregnant person, her rights outweigh fetal rights, and only at birth does a fetus become a legal person. But nevertheless, Varden's defense of abortion takes the same line as that proposed by the state of Mississippi in the Dobbs case, which overturned Roe: a justification of abortion bans past 12 or 15 weeks. And granted, her argument includes the provision that, where the state does not meet the requirements of justice - e.g., where abortions are not truly accessible (in a state like ours) - no such restrictions should be enforced. But what follows from this is that were the state to meet the minimum requirements of justice, such restrictions would be justified.
I have several worries about this, not the least of which are the terrible harms we've seen women face in states with abortion bans, which reveal the ways that restrictions on abortion produce a ripple effect across women's and pregnant persons' access to health care. But my Kantian worry is that granting - and enforcing - a fetus' legal standing when it is inside a woman conflicts with innate right. As Varden argues, innate right entails that the relation between one's body and one's self is analytic, which means that the law cannot adjudicate between one's body and one's self (218-219). And so, as long as the fetus is inside the woman, they can together be considered "an analytic unity" (223). Viability standards seek a way around this feature of pregnancy by encouraging us to treat the fetus as if it were outside the pregnant person, which by definition it is not. The question Varden's argument poses is: is it the case that "from a legal point of view, the extent of my body is the spatial extent of my legal personhood" (219), so that "the relation between my person and my body…must be seen as one of necessary unity" (218), or is it the case that the fetus becomes a "unified spatio-temporal being" (229) which can be considered legally distinct (even if dependent) upon the "analytic unity" of the pregnant person? This is complicated by Varden's account of the scope of the law, which can "only regulate interactions between beings" (228). This is important: the law does not coercively regulate all actions, since if I were acting alone on a desert island, I would need no law to protect the rights of others. It regulates interactions, or "external freedom, which is limited to what can be rightfully hindered in space and time" (218). If the law can restrict abortions, it must be protecting external freedom and regulating interactions - but whose? If we agree with Varden that, at a certain point, a fetus is capable of "rationally unified spontaneous action" this would still get us only as far as action, not interaction. For the fetus to "interact" with the pregnant person, we would have to grant that they are two separate entities, which would seem to undermine the claim that the pregnant person's body and person are "an analytic unity." To say that the fetus is capable of action - even minimally rational action - is not to say that it has external freedom, "tracking external interaction in space and time" (219). It's difficult to characterize the interaction between fetus and pregnant person - if interaction is the right word - as an "external interaction in space and time" if we grant that the pregnant person's body - where this "interaction" takes place - is her person. And so, even if we grant that the pregnant person and fetus are separate beings who interact in these ways, it would not follow that these interactions can be "rightfully hindered in space and time" (218), since any such hindrance involves adjudicating within the analytic unity of the pregnant person's body.
Another worry is that, given Varden's emphasis on continuous authorizing consent to sexual interaction, it is difficult to see why consent to pregnancy should not be subject to the same scrutiny. If the danger of conclusive consent -in other words, a point at which I have consented and can no longer back out -is that it creates a condition in which my body is used against my will, then why would this not be the case in pregnancy, as well? No matter how thoroughly accessible abortion is in the first twelve weeks or so, there will be cases where pregnant people subsequently experience their pregnancies as a use of their body against their will. We could, I suppose, argue that these cases are addressed by Varden's insistence that later abortions "should be legally permissible when continued pregnancy would threaten the pregnant person's mental health" (233). But this would mean that, should I want to rescind my consent to access to my body, I would need to claim a mental health concern, rather than simply claim, in accordance with innate right, that it is my body, and I want it back. It creates a condition in which women would need a "really good reason" to say no at this point (in much the same way that women have been disciplined to assume they would need a really good reason to say no to sex once it has begun). If my body is my person, then surely, my reasons are my own, and any reason I give to rescind consent to have my body used against my will is a good enough reason from the perspective of law. If we are to fulfill Varden's insistence that the law recognize pregnant persons as the moral authority on their own experience of pregnancy, it seems to me that these sorts of restrictions on abortion would violate innate right.
The question I want to ask is: why should legally - coercively - restricting abortion be a feature of a just state? I take Varden's point about the distinction between the moral and legal question of abortion, and I acknowledge that claiming that the state should not restrict abortions at any stage may do little to assuage those who think it morally wrong. Once we have made the distinction between those moral worries and the legal question of justice, however, we need to consider whether restricting abortion can be consistent with justice. The evidence from (unjust) states with abortion bans suggests that there is simply no just way to restrict abortions, that any attempt to legally restrict abortions worsens the availability and quality of care for all pregnant patients, as well as for women who are not pregnant. 5 There is no map of "exceptions" to abortion restrictions that covers every case, and that solves the problems created when doctors must consult with lawyers before providing care. And so my worry is that any attempt to coercively restrict abortions will undermine the project of a robustly just state by creating the sorts of "pockets" of injustice about which Varden worries. This is not to say that a just state could not be in the business of actively seeking to reduce abortions. Many features of Varden's vision of the just state would be likely to reduce the number of abortions, from policies that protect women and LGBTQIA folk from assault and abuse, to poverty relief programs that specifically support families and survivors of domestic, sexual, and gender-based violence, and that prioritize providing permanent, safe housing; Varden's vision is likewise consistent with arguments for public investment in holistic sex education and broad, publicly supported access to contraceptives. We have much to learn from the Reproductive Justice platform about the broad range of legal interventions and social programs which could support reproductive justice and reduce the number of abortions, and most of these are consistent with Varden's vision of the robustly just, liberal republican Kantian state. 6 But in the wake of the end of Roe v Wade, I think it is worth troubling the notion that legal, coercive restrictions are an effective or just means of responding to the moral quandaries of abortion, and asking what non-coercive measures just states ought to consider, instead.
ABSTRACT: This paper reflects on the critical philosophical resources developed in Helga Varden's Love, Sex, & Gender: A Kantian Theory, focusing on a central theme of the book: that one way to think about problems of love, sex, and gender, from both a phenomenological and a political perspective, is to tend to the importance of being at home with oneself, in the world, and with others. This framework allows Varden to develop a distinctly and innovatively Kantian account of our sexually loving and gendered selves, and their implications both for questions of virtue and morality, and for questions of justice. The author then considers the ways that Varden's analysis provides much needed resources to think about how inhabiting a self-defensive stance in the face of oppression may violate duties to resist our own oppression, and then turns to two puzzles in Varden's account of the just state: her understanding of sexual consent, and her defense of the state's right to restrict abortion.
Keywords: Kant, Gender, Sexuality, Philosophy of Sex, LGBTQIA philosophy, trauma, oppression, sexual consent, abortion, feminist philosophy
"Philosophy"
] |
Design of an Efficient Binary Vedic Multiplier for High Speed Applications Using Vedic Mathematics with Bit Reduction Technique
Vedic mathematics is the system of mathematics followed in ancient India, and it is applied in various mathematical branches. The word "Vedic" represents the storehouse of all knowledge, and using Vedic Mathematics, arithmetical problems can be solved easily. Its algorithms are formed from 16 sutras and 13 up-sutras, but each sutra has some limitations. Here, two techniques, the Nikhilam sutra and the Karatsuba algorithm, are considered. In this research paper, a novel algorithm for binary multiplication based on Vedic mathematics is designed using a bit reduction technique. Though the Nikhilam sutra is used for multiplication, it is not used in all applications because it is a special-case multiplication method. The remainder is derived from this sutra by reducing the remainder bit size to N-2 bits; the number of bits of the remainder is constantly maintained as N-2 bits. The overall structure of the multiplier is designed using the Karatsuba algorithm. Unlike the conventional Karatsuba algorithm, the proposed algorithm requires only one multiplier, with N-2 bits only. The speed of the proposed algorithm is improved while balancing the area and the power. Even though the improvement is limited for lower-order bit widths, the method shows a larger difference at higher bit lengths.
Introduction
Vedic Mathematics is a technique used in Ancient India for solving arithmetical problems mentally and in an easier way. It contains 16 formulas (sutras) and 13 sub-formulas (up-sutras), which are used for solving complex computations and executing them manually. The algorithms and principles of all the sutras were given in [1]. The Urdhva Tiryakbhyam Sutra, which means "Vertically and Crosswise", is mainly used for multiplication; the multiplier based on this sutra is known as the Vedic multiplier and is based on a novel concept of array multiplication. In [2], the design and implementation of the Tiryakbhyam multiplier were presented, and its speed was compared with the Nikhilam sutra and with squaring and cubing algorithms.
In [3], the implementation of an arithmetic unit that used a Vedic mathematics algorithm for multiplication was discussed. The arithmetic unit was designed to perform multiplication, addition, subtraction and multiply-accumulate operations, and the MAC unit uses a fast multiplier built with the Vedic Mathematics algorithm. The square and cube algorithms, along with the Karatsuba algorithm, have also been discussed in [3] to reduce the number of multiplications required. In [4], a ROM-based multiplier is proposed in which one of the two inputs is fixed, so this method can be used in matrix multiplication and other applications that use constant-coefficient multiplication; this faster Vedic multiplier, however, consumes more power than conventional ones. In [5], a multiplier based on the Nikhilam sutra was presented with modifications in the 2's complement block and the multiplication block, and a 7-bit encoding technique was employed in the design of multiple-radix multipliers. A high-speed 32 × 32 bit Vedic multiplier was presented in [6], along with a binary implementation using a carry save adder and a comparison of the results.
An iterative algorithm for the Nikhilam sutra was presented in [7]. It reduces two larger numbers into two smaller numbers by neglecting zeros from the least significant side and performing bit shifting to complete the multiplication operation, but this method requires the multiplicands to be close to a power of 10. The algorithm is similar to an array multiplier, in that the final product can be derived based on an array of addresses. The Urdhva Tiryakbhyam Vedic multiplier, in which the partial products are generated at the same time, is presented in [8], where a 128 × 128 bit Vedic multiplier is implemented.
In [9], the Urdhva Vedic multiplier is compared with the Booth multiplier to analyse speed and delay. From the results, it is concluded that the Urdhva multiplier is superior in delay and power, with speed improved by about 32% compared with the Booth multiplier; the Urdhva multiplier is designed using a standard-cell approach. A higher-bit multiplier can be constructed from lower-order-bit multipliers [10]. In [11], Vedic multiplication is implemented on 8085/8086 microprocessors. In [12], a multiplication algorithm based on the Nikhilam Sutra is designed. In [13], a high-speed and low-power Vedic multiplier based on the Tridhava approach is designed using a BEC (Binary-to-Excess-one Code converter) based carry select adder: instead of two RCAs (Ripple Carry Adders), one BEC and one RCA are used, and based on the carry generation either the RCA or the BEC output is selected (if Cin = "0", the RCA output is selected; if Cin = "1", the BEC output is selected). From this, a power-optimal high-speed multiplier is designed. In [14], the Vedic multiplier is designed using a modified carry select adder, by which the area is reduced by 40% compared with [13].
In [15], a Vedic multiplier is designed using a modified square-root carry select adder (MSQRT-CSA). The performance was compared with various adders such as the CSA, the carry look-ahead adder (CLA), and the square-root CSA (SQRT-CSA); compared with the other methods, the multiplier with the MSQRT-CSA is faster. In [16], a comparison between traditional binary multipliers is made.
From the survey, the Tridhava multiplier can be used for any range of inputs, and different papers have proposed modifications in the adders. Though the Nikhilam sutra covers all ranges of inputs, it is efficient only when the multiplicands are close to a multiple of 10; when it is implemented for binary numbers, the 2's complement is normally taken, and the changes are made in the adder portion. Similarly, the Karatsuba algorithm is good for higher-order bits, but it requires three multipliers along with shift operations. In the proposed method, the Karatsuba algorithm and the Nikhilam sutra are combined: the number of multipliers required is reduced to one, and the number of bits of the multiplier is reduced by two.
Nikhilam Sutra
First, the Nikhilam Navatascharam Dashtah sutra means "all from 9 and the last from 10". The sutra is well suited to the multiplication of decimal numbers, and the steps for multiplying decimal numbers using this sutra are given below. The mathematical expression explaining this sutra is given in Equation (1). Let A and B be the two input numbers. The product P is derived as follows:

P = A × B = (A + B - x) · x + (x - A) · (x - B)    (1)

Here x acts as the base and is taken as a multiple of 10, chosen depending upon the closeness of the multiplicands. The remainders of A and B, namely x - A and x - B, are known as "a" and "b" respectively. The product "ab" denotes the Right Part (RP), and the Left Part (LP) is represented by the first term in (1).
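To make Equation (1) concrete, the following is a minimal Python sketch (our own illustration, not part of the original design) of Nikhilam-style multiplication near a chosen base; the function name and the base-100 examples are assumptions for demonstration.

```python
def nikhilam_multiply(a, b, base):
    """Multiply using the Nikhilam identity P = (a + b - base)*base + (base - a)*(base - b)."""
    left_part = a + b - base                 # LP: cross sum minus the base
    right_part = (base - a) * (base - b)     # RP: product of the remainders
    return left_part * base + right_part

# Classic example near base 100: 97 x 94 -> LP = 91, RP = 3 * 6 = 18 -> 9118
assert nikhilam_multiply(97, 94, 100) == 97 * 94
# A carry from RP into LP is handled automatically by forming LP * base + RP.
assert nikhilam_multiply(88, 89, 100) == 88 * 89
```

The same identity also holds when the multiplicands lie above the base, since the remainders then simply become negative.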
Karatsuba Algorithm
The Karatsuba algorithm is well suited for multiplying very large numbers. It is a divide-and-conquer method in which each number is divided into a most significant half and a least significant half. Some multiplication operations are replaced by addition operations, and hence the delay of the algorithm is reduced. The algorithm becomes more efficient as the width of the inputs increases and is advantageous when the width of the inputs is more than 16 bits.
The numbers are divided as follows:

A = A_l · 2^(n/2) + A_r,  B = B_l · 2^(n/2) + B_r    (2)

where A_l, B_l and A_r, B_r in Equation (2) are the most significant and least significant halves of the numbers A and B respectively, and "n" represents the number of bits. Then, the product can be written as

P = A × B = A_l · B_l · 2^n + (A_l · B_r + A_r · B_l) · 2^(n/2) + A_r · B_r    (3)

From Equation (3), four multiplications and two shift operations are needed (the Karatsuba substitution reuses the outer products to obtain the middle term from a single extra multiplication, so that only three multiplications are required in total). The number is divided equally into two parts. This method is efficient for higher bit lengths.
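For reference, here is a small recursive Karatsuba sketch in Python (an illustrative model, not the paper's VHDL design); the cut-off of 16 and the function name are arbitrary choices, and the third recursive call shows how the middle term of Equation (3) is obtained from a single extra multiplication.

```python
def karatsuba(a, b):
    """Recursive Karatsuba multiplication of non-negative integers."""
    if a < 16 or b < 16:                          # small operands: multiply directly
        return a * b
    half = max(a.bit_length(), b.bit_length()) // 2
    a_l, a_r = a >> half, a & ((1 << half) - 1)   # most / least significant halves
    b_l, b_r = b >> half, b & ((1 << half) - 1)
    p_hi = karatsuba(a_l, b_l)
    p_lo = karatsuba(a_r, b_r)
    p_mid = karatsuba(a_l + a_r, b_l + b_r) - p_hi - p_lo   # = A_l*B_r + A_r*B_l
    return (p_hi << (2 * half)) + (p_mid << half) + p_lo

assert karatsuba(0xFEDCBA98, 0x12345678) == 0xFEDCBA98 * 0x12345678
```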
Proposed Method
In the proposed method, the Nikhilam sutra and the Karatsuba algorithm are combined. Using the Nikhilam sutra, the remainder is calculated from the nearest power-of-2 base by taking the 2's complement. Afterwards, the multiplication is structured using the Karatsuba algorithm. By doing this, the number of multiplications required is reduced.
Let A and B be the N-bit numbers; following the Karatsuba-style split, they can be written as

A = m_A · 2^(N-2) + A_r,  B = m_B · 2^(N-2) + B_r    (4)

where m_A and m_B are the two MSB values that are removed and A_r, B_r are the remaining N-2 bits. The remainders are then derived using the Nikhilam sutra in such a manner that the number of bits of the remainder is always N-2. Four modules are generated for the different combinations of removed MSB values, and the weight of the remainder is reduced. For the MSB values 11, for example, the remainders are r1 = 2^N - A and r2 = 2^N - B, and the product is derived as

P = A × B = A · 2^N - r2 · 2^N + r1 · r2    (5)

While comparing (3) and (5), the proposed algorithm requires only one multiplier, and there is no change in the shift operations. The multiplier used in the proposed algorithm requires only N-2 bits. Therefore, the proposed work reduces the number of multipliers as well as the number of bits of the multiplier. The sign of the middle term is selected based on the remainder type of A and B. As an example, for the MSB values 11 the multiplier forms P1 = r1 × r2 with the N-2 × N-2 bit multiplier, shifts A by N bit positions (P2), shifts the remainder r2 by N bit positions (P3), and calculates the product as P = P2 - P3 + P1 (the step-by-step listing is given as Algorithm I). The proposed architecture for the multiplier for the MSB values 11 is given in Figure 1.
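A minimal behavioral check of Equation (5) for the MSB pattern 11 is sketched below in Python (for illustration only; the paper's implementation is in VHDL, and the operand width and test values here are assumed).

```python
def proposed_msb11(a, b, n):
    """Proposed multiplication when both N-bit operands start with the bits '11' (Equation (5))."""
    assert (a >> (n - 2)) == 0b11 and (b >> (n - 2)) == 0b11
    r1 = (1 << n) - a        # Step 1: remainders by 2's complement (at most 2**(n-2))
    r2 = (1 << n) - b
    p1 = r1 * r2             # Step 2: the single (N-2) x (N-2) multiplication
    p2 = a << n              # Step 3: A shifted left by N
    p3 = r2 << n             # Step 4: r2 shifted left by N
    return p2 - p3 + p1      # Step 5: P = P2 - P3 + P1

assert proposed_msb11(0b11110000, 0b11001100, 8) == 0b11110000 * 0b11001100
```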
Algorithm II
Input: A, B (N bits). Output: P (2N bits).
Step 1: Calculate the remainders for both multiplicands by removing the first two bits. If the removed bits are 10 (i.e., bits N-1, N-2 = 10), the remainders are determined by taking the numbers without the MSB values, so that A = 2^(N-1) + r1 and B = 2^(N-1) + r2, where r1 and r2 are A and B without their first two bits.
Step 2: Multiply both remainders using the N-2 × N-2 bit multiplier (P1 = r1 × r2).
Step 3: Shift A left by N-1 bit positions (P2 = A · 2^(N-1)).
Step 4: Shift the remainder r2 left by N-1 bit positions (P3 = r2 · 2^(N-1)).
Step 5: The product can be calculated as P = P2 + P3 + P1 = A · 2^(N-1) + r2 · 2^(N-1) + r1 · r2.
The architecture for the proposed algorithm is shown in Figure 2.
Algorithm III
Input: A, B (N bits). Output: P (2N bits).
Step 1: Calculate the remainders for both multiplicands by removing the first two bits. If the removed bits are 01 (i.e., bits N-1, N-2 = 01), the remainders are determined by taking the 2's complement of the numbers without the MSB values, so that r1 = 2^(N-1) - A and r2 = 2^(N-1) - B.
Step 2: Multiply both remainders using the N-2 × N-2 bit multiplier (P1 = r1 × r2).
Step 3: Shift A left by N-1 bit positions (P2 = A · 2^(N-1)).
Step 4: Shift r2 left by N-1 bit positions (P3 = r2 · 2^(N-1)).
Step 5: The product can be calculated as P = P2 - P3 + P1 = A · 2^(N-1) - r2 · 2^(N-1) + r1 · r2.
The architecture for the proposed algorithm is shown in Figure 3.
Algorithm IV
The fourth algorithm covers the removed bits 00 (i.e., bits N-1, N-2 = 00). The remainders are again negative, r1 = 2^(N-2) - A and r2 = 2^(N-2) - B, and they are multiplied using the N-2 × N-2 bit multiplier (P1 = r1 × r2).
Step 3: Shift A left by N-2 bit positions (P2 = A · 2^(N-2)).
Step 4: Shift r2 left by N-2 bit positions (P3 = r2 · 2^(N-2)).
Step 5: The product can be calculated as P = P2 - P3 + P1 = A · 2^(N-2) - r2 · 2^(N-2) + r1 · r2.
The architecture for this algorithm is shown in Figure 4. From the above four algorithms, the combined architecture is designed to perform multiplication for binary numbers starting with any bit pattern. The combined architecture is shown in Figure 5; a detailed explanation is given in [12]. For the "10" combination, the derived remainder is positive, and for the other combinations the remainders are negative; for negative remainders, the 2's complement is taken. The input A is shifted by N-2, N-1, or N bit positions according to the MSB bits of B, and the remainder of B is shifted by the same amount. For the remainder multiplication, a multiplier with N-2 bits is used. Finally, all the terms are added or subtracted according to the algorithm, and a simple control circuit selects the operation in the adder/subtractor module. This architecture is suitable for any range of inputs. The multiplication is required only when both inputs are nonzero; a multiplexer is used to reduce the power dissipation when the remainder is zero. In [12], various conventional multipliers are used for the multiplication of the remainders; compared with those methods, the proposed method gives higher speed. Therefore, in this work, the proposed algorithm is itself applied recursively for the remainder multiplication to increase the speed further.
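Pulling the four cases together, the sketch below is a compact Python behavioral model of the combined architecture; it is our own illustration, not the VHDL implementation, and for simplicity it assumes that both operands fall into the same MSB category, leaving mixed categories and zero operands to the control logic described above.

```python
def proposed_multiply(a, b, n):
    """Behavioral model of the combined multiplier for two N-bit operands whose two
    most significant bits fall in the same category (00, 01, 10 or 11)."""
    msb = (a >> (n - 2)) & 0b11
    assert msb == (b >> (n - 2)) & 0b11, "sketch assumes matching MSB categories"
    if msb == 0b11:
        base, shift, positive = 1 << n, n, False             # negative remainders
    elif msb == 0b10:
        base, shift, positive = 1 << (n - 1), n - 1, True    # positive remainders
    elif msb == 0b01:
        base, shift, positive = 1 << (n - 1), n - 1, False
    else:  # msb == 0b00
        base, shift, positive = 1 << (n - 2), n - 2, False
    r1 = (a - base) if positive else (base - a)
    r2 = (b - base) if positive else (base - b)
    p1 = r1 * r2        # the single (N-2)-bit-wide remainder multiplication
    p2 = a << shift     # A shifted according to the MSB pattern
    p3 = r2 << shift    # remainder of B shifted by the same amount
    return p2 + p3 + p1 if positive else p2 - p3 + p1

for x, y in [(0b11110000, 0b11001100), (0b10110000, 0b10000101),
             (0b01100000, 0b01010101), (0b00100000, 0b00010001)]:
    assert proposed_multiply(x, y, 8) == x * y
```

In a hardware realization, the remainder product p1 would itself be computed by a smaller instance of the same structure, mirroring the recursive use of the proposed multiplier mentioned in the Results section.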
Results
Vedic Mathematics is a technique for solving arithmetic operations easily and mentally. Traditionally, the multiplier based on the Urdhva Tiryakbhyam Sutra, which means "vertically and crosswise", is known as the Vedic multiplier because it covers all ranges of inputs. Most of the research papers [2]-[9] listed in the references concentrated on this type of multiplier. Hardware implementations for binary numbers have also been reported, involving RCAs and lower-order Vedic multipliers, and parallel implementations have been presented as well [9][10]. The Nikhilam Sutra, which refers to "all from 9", is designed for a special case: the method is efficient when the multiplicands are close to a multiple of 10, and there is no efficient hardware implementation of the Nikhilam Sutra for binary numbers. The algorithm is well defined when both numbers have a positive remainder or both have a negative remainder; when one number has a positive remainder and the other a negative remainder, the calculation is different.
Depending upon the value of the Right part, a correction is made to the Left part. In the proposed method, any range of inputs can be given. The Karatsuba algorithm is efficient for higher-order multipliers; the Karatsuba multiplier uses three multipliers based on Equation (3), but in the proposed method the number of multipliers is reduced to one and the number of bits of the multiplier is reduced to N-2. Here, the Nikhilam sutra and the Karatsuba algorithm are combined to obtain high speed. The implementation of the various Vedic multipliers is done using a Xilinx Spartan 3E kit, and in the main module the proposed multiplier is itself used. The delay comparison with the Vedic multiplier is listed in Table 1, and the comparison with conventional methods is given in Table 2.
The proposed algorithms are written in VHDL and simulated using ModelSim. The simulation result for the 32-bit size is shown in Figure 6. We implemented the existing and proposed algorithms on a Xilinx SPARTAN 3E FPGA; additionally, the implementation result on a Xilinx SPARTAN 6 for the proposed method is shown in Table 1. Compared with the SPARTAN 3E, the SPARTAN 6 delivers higher speed but consumes more power. When comparing delay with the other methods, the combination of the Nikhilam sutra and the Karatsuba algorithm gives the minimum delay, and from Table 1 the delay difference from the other methods grows large at higher bit widths. Therefore, this algorithm can be used for high-speed applications with wider bit lengths.
Conclusion and Future Work
In this research paper, a hybrid, successively applied Vedic multiplier is proposed for high-speed applications. The Karatsuba algorithm is modified to reduce the number of multipliers required in the calculation: instead of splitting the binary number into halves, the number is split based on the remainder value. The remainder is calculated using the Nikhilam Sutra such that the number of bits is reduced to N-2. By combining the Nikhilam Sutra and the Karatsuba algorithm, the number of bits to the multiplier is reduced, and four modules were created based on the remainders. From the results, it is clear that the proposed method produces output faster than the other methods. This hybrid multiplier is best suited for multiplying large numbers in high-speed applications.
Steps for multiplying decimal numbers using the Nikhilam Sutra:
1. The nearest base value of the multiplicands is considered. Let us assume the multiplicands are A and B.
2. The remainders of A and B are calculated by subtracting the base value chosen in step 1.
3. The product of the remainders of A and B is computed and considered as the Right Part of the final product.
4. The Left Part can be computed in three ways: a) the two multiplicands A and B are added and the nearest base value is subtracted from the sum (LP = A + B - 10 for base 10); b) the Left Part is calculated by adding the remainders of A and B to the base value; c) the remainders of A and B are crossly added with the multiplicands A and B.
5. The final result is attained by removing the demarcation between LP and RP and concatenating the two parts.
The remainder is derived using this sutra, and the algorithm is developed based on the type of the remainder. The two types are: a) positive remainder and b) negative remainder.
Algorithm I
Input: A, B (N bits). Output: P (2N bits).
Step 1: A and B are the multiplicands with N bits whose first two bits are 11 (i.e., bits N-1, N-2 = 11). Their remainders are calculated by removing the first two bits and taking the 2's complement of the remaining N-2 bits, so that r1 = 2^N - A and r2 = 2^N - B.
Step 2: Multiply both remainders using the N-2 × N-2 bit multiplier, P1 = r1 × r2.
Step 3: Shift A left by N bit positions (P2 = A · 2^N).
Step 4: Shift the remainder r2 left by N bit positions (P3 = r2 · 2^N).
Step 5: The product is calculated as P = P2 - P3 + P1.
Figure 1. Architecture for Algorithm I.
Figure 6. Simulation waveform for the multiplier with 32-bit operands.
Table 2. Delay comparison with other conventional binary multipliers.
"Computer Science",
"Engineering",
"Mathematics"
] |
Resilience in inter-organizational networks of red buses: dealing with their daily disruptions in critical infrastructures
ABSTRACT
1. Introduction
Overview
The production, supply and delivery of services and goods are increasingly organized into networks (Baloch & Rashid, 2022). Scholars have increasingly focused on complete supply networks rather than dyadic inter-organizational relationships (Braziotis et al., 2013), a shift that is reflected in current supply chain research in international journals (Gremyr & Halldorsson, 2021). To attain firm-level objectives (e.g., schedule and production adherence) and cooperative network results (e.g., sustainability and environmental goals), organizations in these supply chains are critically dependent upon one another's performance and inputs (Kim et al., 2011). Among the most significant supply networks are those that operate and maintain critical infrastructures (CIs). CIs provide vital services like transportation, water supply and electricity delivery; these services are highly interconnected and cannot be replaced by many, or any, alternative systems (Van Donk, Stevenson, & Scholten, 2020).
Like other supply systems, critical infrastructures commonly experience minor disturbances (such as human errors or equipment breakdowns) that cause regular service interruptions (A, 2014). Although these disruptions are initially small and localized, their consequences must be contained quickly (Waugh & Cigler, 2012; Dufort, 2007). However, managing such disturbances within CIs is challenging and complex, as public and private companies may often have differing commercial interests and operational approaches (McConnell & Boin, 2007; Essens & Vegt, 2015). Furthermore, because of these networks' deep interconnectivity, any local issue or inadvertent mistake one organization makes in resolving a disturbance can affect all CIs (Ouyang, 2014; Chen et al., 2016). For example, the UK experienced almost two months of train delays, cancellations, and overcrowding due to a single rail company's schedule change error. The resilience of the CI reflects the effective handling of disruptions by the inter-organizational (supply) network: when resilient, critical infrastructure can rapidly recover and continue operating during disruptions, whereas non-resilient infrastructure suffers extended downtime and diminished performance (McConnell & Boin, 2007; Stevenson et al., 2015). Significant disruptions are primarily handled by a focused and centralized organizational structure (e.g., incident command structures) (Roberts & Bigley, 2001). When faced with daily disturbances, because such centralized assistance is absent, CI organizations frequently have to organize coordinated responses in addition to their regular responsibilities (Dufort, 2007).
Much research has been done on this topic, but with different variables and in different regions. Organizations need to find out whether prior research on large-scale disruption management can teach them how to deal with minor CI disturbances, which are more regular and variable (Bhattacherjee & Premkumar, 2004). This research focuses on resilience in the Red Buses network in Karachi: what kinds of disruptions the network faces and how they are dealt with in its infrastructure.
Problem Statement
This is a very fast-paced century, and people want to update themselves with new products and ideas. Even though existing studies have provided significant insight into CI flexibility, our understanding of how inter-organizational networks can reduce the unpleasant consequences of CI disruptions in daily operations remains unclear (Bhattacherjee & Premkumar, 2004). Most research on supply chain management addresses network-level phenomena in general (Braziotis et al., 2013) and network flexibility in particular (Van Donk et al., 2020; Stevenson et al., 2015), with a focus on the role of the focal organization or a specific dyad between organizations (Kim et al., 2011). As a result, the complicated, non-linear linkages between all the organizations that build supply chains should be considered in this study. On the other hand, previously conducted studies on the resilience of critical infrastructure (CI) have mainly focused on how CIs can recover from infrequent and widely recognized events such as Hurricane Katrina (Cigler, 2007) or the credit crisis, rather than analyzing how the involved inter-organizational network can manage the more archetypal recurring interruptions that distress CIs every day (Waugh & Cigler, 2012; Dufort, 2007). Moreover, whereas large-scale disruptions are often non-routine and complicated, day-to-day troubles typically vary on these dimensions; therefore, inter-organizational networks should be able to handle both regular and simple disruptions and comparatively complex and unpredictable ones (Eeten & Boin, 2013). Organizations need to find out whether they can learn how to deal with these more minor, more regular and more variable CI disturbances from prior research on large-scale disruption management.
Therefore, the current study aims to better understand how an inter-organizational network, Red Buses, that uses and maintains a CI might increase the CI's daily resilience. Due to a shortage of time, the research is limited to Karachi and the Red Buses, so that we can learn how inter-organizational networks deal with disruptions and what kinds of disruptions they face. The research questions are therefore:
1. Is there any negative relationship between disruption non-routineness and CI resilience to the focal disruption?
2. Is there any relationship between disruption co-occurrence and CI resilience to the focal disruption?
3. Does cross-border information exchange moderate the association between disruption co-occurrence and CI resilience during a disruption?
4. Does cross-border information exchange moderate the association between disruption non-routineness and CI resilience during a disruption?
An illustrative sketch of how the main-effect and moderation questions could be analyzed is given after this list.
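To make the research questions concrete, the following Python sketch shows one conventional way such hypotheses are often tested: an ordinary least squares regression with interaction terms between the disruption characteristics and cross-boundary information exchange. This is only a hypothetical analysis plan with assumed variable names (resilience, non_routineness, co_occurrence, info_exchange) and simulated data; it is not the measurement model or the data of the present study.

```python
# Illustrative moderation analysis for RQ1-RQ4 (hypothetical variable names).
# A significant interaction term would suggest that cross-boundary information
# exchange buffers the effect of non-routine or co-occurring disruptions on resilience.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300  # e.g., 300 recorded disruptions

# Synthetic stand-in data; a real study would use survey or incident-log measures.
df = pd.DataFrame({
    "non_routineness": rng.normal(size=n),
    "co_occurrence": rng.integers(0, 2, size=n),   # 1 = another disruption co-occurred
    "info_exchange": rng.normal(size=n),           # extent of cross-boundary exchange
})
# Resilience operationalized here as (negative) recovery time, simulated for the demo.
df["resilience"] = (
    -0.5 * df["non_routineness"]
    - 0.4 * df["co_occurrence"]
    + 0.3 * df["info_exchange"]
    + 0.25 * df["non_routineness"] * df["info_exchange"]
    + rng.normal(scale=1.0, size=n)
)

# Main effects (RQ1, RQ2) and moderation terms (RQ3, RQ4) in one model.
model = smf.ols(
    "resilience ~ non_routineness * info_exchange + co_occurrence * info_exchange",
    data=df,
).fit()
print(model.summary())
```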
Purpose of Study
The study fills critical gaps in our current understanding of CI resilience by extending the reasoning of OIPT to comprehend how the inter-organizational system that utilizes and operates a CI can reduce the adverse effects of the usual problems that impact the CI every day, and to comprehend how that system is capable of handling such interruptions (Eeten & Boin, 2013; Welch et al., 2018). There are various theoretical ramifications of this. By expanding the concept of OIPT to comprehend how the inter-organizational system using and managing a CI might minimize the adverse effects of the disturbances which impact the CI regularly, we contribute to the field of CI resilience research. Our research fills crucial gaps in our knowledge of CI resilience by illuminating how the inter-organizational system might manage such interruptions (Van Donk, Stevenson, & Scholten, 2020; Bhattacherjee & Premkumar, 2004), even though "the majority of supply [chains] are significantly more inclined to be coping with persistent, recurring threats of interruption" (Stevenson et al., 2015). By shedding light on the advantages of cross-border exchange of information for the resiliency of supply systems, this study also adds to the literature on robustness in the particular setting of CIs, in contrast to earlier studies (Stevenson et al., 2015).
Significance of the Study
This study is important because it contributes to CI resilience research by demonstrating that the advantages of cross-boundary exchange of information depend on the features of the disturbance that this kind of network faces; it thus questions the general efficiency of such exchange for enhancing network resilience. Cross-border exchange of information is beneficial for controlling more intricate or irregular interruptions. This supports Feldman and Quick's (2014) assertion that, especially under unfavorable circumstances, the advantages of cross-boundary exchange of information surpass potential drawbacks such as prolonged consensus-seeking and decision-making. The results of this study will help managers in the corporate and public sectors deal more effectively with the small disturbances that impact their CI daily. These regular interruptions to a CI may have more negative effects when they co-occur with additional interruptions or are non-routine. Our findings indicate that, in these situations, managers should aim for and support greater cross-boundary exchange of data with other organizations within their network in order to share knowledge, create well-integrated defenses, and prevent duplicative or conflicting operations among organizations. However, our study emphasizes that such cross-boundary transfer of information should occur at the level of the entire inter-organizational network rather than within dyadic inter-organizational connections. Our results point to significant direct interaction and knowledge-sharing between the pertinent organizations.
The research is limited to analyzing how the Red Buses deal with day-to-day disruptions in critical infrastructures in Karachi. There are many Red Bus (People's Bus) routes in Karachi, but due to limited time, only two routes will be covered. The data will be collected by visiting Red Bus stops and gathering responses from customers and drivers.
Critical infrastructure resilience, daily disruptions, and service supply networks
A system of interconnected and interdependent businesses is referred to as a "supply network"; its goal is to improve the flow of resources and information from suppliers to end consumers (Braziotis et al., 2013). The inter-organizational network considered here manages and operates a CI: it is a supply network that coordinates the actions of different organizations and combines their resources (e.g., equipment and infrastructure) to ensure efficient operation. Inter-organizational networks safeguard CI resilience by addressing disruptive events that could jeopardize service continuity. According to McConnell and Boin (2007) and Linnenluecke (2017), a subcategory of such events consists of "day-to-day disruptions": less spectacular but more frequently occurring occurrences, like equipment malfunctions, supplier delivery delays, and modifications to client order requirements, which are nonetheless major managerial problems (Salvador & Tenhiälä, 2014; Van Donk et al., 2020; Stevenson et al., 2017).
Resilience, in general, refers to a supply network's capacity to anticipate interruptions, react to them, and rapidly resume normal operations (Chowdhury & Quaddus, 2016; Rotaru et al., 2016; Harrison & Sawyerr, 2020). Because resilience in CIs becomes readily apparent in how quickly the inter-organizational network involved tries to restore services to end users, we evaluate a CI's resilience to routine disturbances based on how quickly that network can successfully create and implement countermeasures, that is, on its recovery time (Mattsson & Jenelius, 2015). A quicker recovery period shows that the organizations were able to quickly contain the disruption's effects, preventing the complete paralysis of the CI (McConnell & Boin, 2007; Welch et al., 2018). A lengthier recovery period, on the other hand, suggests that the organizations have not yet located or fixed the issue; in that case, the disturbance continues to impact CI operations and can lead to significant problems (Farkas et al., 2008). An appropriate definition of CI flexibility is therefore "a system's ability to regain performance levels after experiencing an interruption" (Britt, 1988).
Building a resilience viewpoint for critical infrastructures based on information processing
Organizations within the inter-organizational network must gather, combine, and interpret all relevant information to ensure CI resilience to daily disturbances. This information can then be used to make well-informed decisions on how countermeasures should be applied (Eeten & Boin, 2013). OIPT offers a theoretical perspective on how organizations can deal efficiently with this challenging task (Galbraith, 1974). Originally developed to understand intra-organizational behaviour, OIPT has since been extended to describe the behaviour and performance of a focal (purchasing) organization in dyadic inter-organizational relationships within supply systems (Venkatraman & Bensaou, 1995; Foerstl et al., 2017). In this research, we extend OIPT further to the network level of analysis.
OIPT is concerned with an organization's "capacity to deal with unusual, momentous circumstances that cannot be foreseen or predicted" (Galbraith, 1974). It therefore offers helpful guidance on how businesses should handle disruptions (Nishant et al., 2020; Macdonald & Bode, 2017). When confronted with such non-standard, significant occurrences, OIPT advises organizations either to decrease the volume of information that needs to be processed or to raise their capacity for processing it. OIPT further suggests that the complexity and unfamiliarity of the unexpected occurrence determine which strategy is beneficial (Repenning & Rudolph, 2002). For less complicated and more routine events, information processing requirements can be reduced by adding slack resources and capacity and by minimizing the interdependencies between activities. For more sophisticated or unexpected occurrences, however, these two approaches may be less effective (Galbraith, 1974). To boost their information processing capacity in such circumstances, OIPT advises organizations to invest in formalized information systems and to create lateral links among various organizational components (Venkatraman & Bensaou, 1995).
Merging conceptual insights from OIPT with the broader resilience literature (Repenning & Rudolph, 2002; Khansa & Zobel, 2014), we propose that the typical daily disturbances a CI faces become more complex when they co-occur, and less familiar and more demanding of non-standard responses when they are non-routine (Macdonald & Bode, 2017). We emphasize that, when facing co-occurring and non-routine CI disturbances, members of the various organizations should engage in more intense cross-boundary information exchange, thereby extending OIPT's original intra-organizational perspective. We place more emphasis on cross-boundary information exchange than on other OIPT-recommended strategies (such as formalized information systems) because it enables real-time adjustments and coordination during disturbance management and has been recognized as a critical practice for ensuring supply network resilience (Macdonald & Bode, 2017; Harrison & Sawyerr, 2020; Van Donk et al., 2020).
Critical infrastructure resilience and disruption co-occurrence
A co-occurrence of disturbances in an inter-organizational network arises when more than one critical infrastructure disruption affects the network simultaneously (Khansa & Zobel, 2014). If organizations within the system cannot gather the essential data about a specific daily interruption because it coincides with numerous other disturbances within the same CI, they may become overburdened and overloaded. Organizations may then need additional time to create successful countermeasures for the focal disruption, because understanding and processing all pertinent information for each interruption is difficult (Repenning & Rudolph, 2002). Additionally, organizations that experience multiple interruptions need sufficient time to address and manage the links between potentially affected critical infrastructure elements (Ouyang, 2014). If a disturbance takes more time to resolve, its negative impacts can linger longer and spread across the essential structures, decreasing the CI's capacity for recovery (Rohleder & Cooke, 2006). With just one interruption, by contrast, organizations can concentrate on obtaining and analyzing the information needed to repair the problem (Khansa et al., 2020; Welch et al., 2018). In such circumstances, organizations can build countermeasures more quickly and ensure CI resilience, since they are not compelled to investigate intricate relationships between disruptions (Macdonald & Bode, 2017).
Impact of economic performance
Productivity in the supply chain is crucial for the smooth functioning of economies, and problems can create constraints that have a detrimental effect on productivity and economic growth (Salvatore, 2020). Although supply chains comprise a variety of components, their efficient (often sequential) operation is essential for the timely delivery of goods to customers and for companies' contributions to the economy (Chen et al., 2016). Economic growth remains the main force behind the prosperity of nations and the sustainability of their political structures. However, the coronavirus outbreak posed unforeseen challenges to countries' growth paths, necessitating a reevaluation of the forces that shape economic development, especially considering variables pertinent to the present situation. Before that, the financial crisis of 2008-2009 caused the most recent interruption to the world's supply chains. Although historical examples can be instructive, there are qualitative differences: the earlier crisis was more a supply-side than a demand-side issue, whereas the present crisis has significantly affected both demand and supply. Under global value chains (GVCs), items are produced in various nations before being assembled in another.
Red bus service in Karachi
A developed nation is not one where the poor own vehicles but one where both the well-off and low-income people use public transport. Road mobility is one of the biggest and most important problems facing the majority of the world's population. Karachi and other major cities experience regular traffic delays. Traffic congestion frequently causes delays, obstacles, and unproductive economic activity in urban areas. Plenty of research on traffic-related problems is being carried out by different groups, considering economic and budgetary concerns, and congestion has been linked to financial losses in several urban areas worldwide. According to research, Karachi's traffic jams cost the city more than $1 billion in 2018, about 2% of Karachi's annual gross domestic product (GDP). That estimate is based on time lost, fuel consumption, and oil prices.
Long commutes, the growth of private and shared transport, and the decline of public transport characterize Karachi's transport system, which provides mobility for only some residents. About 2.8 million passengers are served daily by 4,000 privately operated buses and roughly 4,000 public vehicles. These loosely regulated services are unpredictable, lack fixed stops, and fall short of client expectations. Congestion worsens and safety is compromised as drivers fight and stop randomly to pick up customers or wait idly at the roadside until their vehicles are filled. It is common for commuters to hang off the sides of moving buses or sit on rooftops during rush hour. The network's vehicles are old and poorly maintained, which raises operating expenses and pollution. For the urban poor, services are typically expensive because passengers must pay for each mode change. Now the Sindh Government has provided Karachi's desperately needed public transport. The new buses in Karachi operate under the name "Peoples Bus Service". Pink Buses (dedicated to women), White Buses, and Red Buses (electric buses) will travel along eight distinct routes throughout Karachi, linking important housing hubs with business and industrial centers. Finally, Karachi has a bold new bus service. The buses are brand new, air-conditioned, and crimson red. That is the good news. However, the bus system is already developing the bad habits that have ruined previous transport attempts in the city.
Theoretical Model/Framework
Information Processing Theory (IPT) is a theory of cognition. IPT concentrates on how information is retained in our minds. The theory describes how the brain filters information, keeping track of what we are currently concentrating on before moving it into short-term and then long-term memory. The primary processes are obtaining, storing, and retrieving information. According to IPT, creating long-lasting memories happens in stages. In the first stage, we perceive something through our senses, which comprise everything we can see, hear, smell, or taste. We then use temporary memory to store information briefly, such as telephone numbers. Finally, information consigned to permanent memory is stored lastingly in our brains; in IPT, saving information in long-term memory is what allows it to persist. IPT suggests several steps for achieving this: (1) break the information into small parts, allowing learners plenty of rest periods and time to assimilate the knowledge; (2) make it significant, since learners are more likely to retain a lesson if it is connected to real-world events and personal experiences; (3) connect the points, "layering" the content by providing sufficient background information and drawing links between the current lesson, what has previously been taught, and what will be learned next; and (4) repeat the information, since presenting it repeatedly in different forms (oral, written, graphical, and tactile) is one of the simplest methods for encoding fresh information into permanent memory.

We apply organizational information processing theory (OIPT; Galbraith, 1974). The theory provides a significant viewpoint on how companies might manage consequential occurrences, such as disruptions that cannot be entirely anticipated (Nishant et al., 2020). According to OIPT, demands to process information typically increase as events become more complex or unfamiliar (Venkatraman & Bensaou, 1995; Repenning & Rudolph, 2002). When information processing demands increase, OIPT holds that a more robust information exchange within and across companies (i.e., cross-boundary processing) is required (Galbraith, 1974). With this theoretical insight in mind, the inter-organizational network accountable for the operation of a CI should align its extent of cross-border information exchange with its disruption challenges. We propose that organizations within a system can deal with CI troubles most effectively by aligning the intensity of cross-boundary information exchange with the non-routineness and complexity of daily disruptions. IPT helps explain how repeated information, retained in long-term memory, allows troubles to be resolved more quickly, thereby making the inter-organizational network more resilient. In our research, we examine whether an effective organizational information processing system makes it easier for organizations to sort out such troubles.
H1. There is a negative relationship between disruption co-occurrence and CI resistance to focal disruption.
H2. There is a negative relationship between disruption non-routineness and CI resistance to focal disruption.
H3. There is a positive relationship between economic performance and CI resilience to focal disruption.
Methodology
Quantitative research methods are frequently employed in studies aimed at assessing an issue, ascertaining particulars such as "what" or "how many," and comprehending the relationship between dependent and independent variables in a population (Rashid et al., 2023). The primary objective of this research is to examine how the inter-organizational system of Red Buses responds to routine disruptions in their critical infrastructures. Furthermore, this study aims to gain insight into the types of disruptions that Red Buses experience and the strategies used to manage them. Quantitative work of this kind is grounded in measurement, numbers, and statistics (Rashid & Amirah, 2017; Rashid, 2016). Qualitative research methods, by contrast, are useful for furnishing comprehensive portrayals of intricate phenomena, illuminating the interpretation and experience of infrequent or exceptional occurrences, amplifying marginalized perspectives, and supporting preliminary research in uncharted domains to generate theories, hypotheses, and explanations (Rashid & Rasheed, 2023). The present study employs a quantitative research methodology to examine the effects of routine disruptions on inter-organizational systems and to test the corresponding hypotheses.
Deductive research is founded on literature reviews, hypotheses, and theories, enabling logical conclusions (Rasheed & Rashid, 2023). The validity of these propositions is subsequently tested through the collection and analysis of empirical data. Formulating a research hypothesis based on an existing theory and devising a plan to verify the hypothesis is the fundamental principle of deductive research. Given that our objective is not to formulate a novel theory but to scrutinize an established one, our investigation adheres to this approach. The research methodology employed in this study is therefore primarily deductive and quantitative. Statistical analysis is a fundamental aspect of deductive methods that reveals latent associations among various factors (Rasheed et al., 2023). The present study employs the SmartPLS 4 software for Structural Equation Modeling (SEM) analysis.
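To make the analytical workflow more concrete, a minimal sketch of a structural model specification is given below. It is illustrative only: the study itself used PLS-SEM in SmartPLS 4, whereas the open-source semopy package (covariance-based SEM) is assumed here as a stand-in, and the data file name and item labels (dc1–dc3, ep1–ep3, res1–res3) are hypothetical placeholders rather than the actual questionnaire items.

```python
# Illustrative sketch only: the study used SmartPLS 4 (PLS-SEM); the semopy
# package and all column names below are assumptions for demonstration.
import pandas as pd
import semopy

# Lavaan-style description: measurement model (=~) and structural paths (~)
model_desc = """
DC =~ dc1 + dc2 + dc3
EP =~ ep1 + ep2 + ep3
RES =~ res1 + res2 + res3
RES ~ DC + EP
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file of Likert-scale items

model = semopy.Model(model_desc)
model.fit(data)           # estimates loadings and path coefficients
print(model.inspect())    # parameter estimates, standard errors, p-values
```

The structural line `RES ~ DC + EP` corresponds roughly to the direct paths tested in H1 and H3; moderating effects would additionally require interaction terms, which are omitted here for brevity.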
Sampling Strategy
Sample design is an established strategy for drawing a sample from a specific population and for deciding the number of units to include in the sample (Rashid et al., 2022a, b). Sampling is essential to the entire research process since it significantly impacts the reliability of study findings. Improper sampling practices can cause interpretation issues, for example, forming incorrect inferences about a population (Rashid et al., 2021; Hashmi et al., 2021a, b). The veracity of a study's findings is significantly contingent upon the precision of the sampling methodology; hence, it is imperative to establish the sample design meticulously. The target population is the subgroup for whom the program is intended, from whom respondents will actively be recruited and retained, and for whom one is accountable for results (Hashmi et al., 2020a, b). The individuals involved in this investigation work in and operate the Red Bus Service in Karachi. The sample size is crucial to research methodology (Rashid et al., 2020). A sample of 152 individuals, comprising both workers and drivers, was randomly selected from the Red Bus Service in Karachi. Random sampling, a probabilistic sampling technique, was utilized because of its simplicity and its efficiency in terms of time and resource requirements (Alrazehi et al., 2021; Das et al., 2021; Haque et al., 2021).
Instrument of Data Collection
Data was collected from both electronic and non-electronic sources. Electronically, data was collected using Google Forms distributed via internet channels such as WhatsApp and email. In addition, data was obtained through in-person interviews with Red Bus drivers and passengers at predetermined locations. This quantitative analysis aims to elucidate the impact of different factors on the variables under investigation. The investigation is supported by empirical evidence, with statistical evaluation conducted through the Structural Equation Modeling (SEM) technique implemented in SmartPLS 4 (Rashid & Rasheed, 2022). The questionnaire used in this research is divided into two sections. The first section covers demographic information, measured on a nominal scale. The second section addresses the main research topic and comprises six constructs and nineteen items rated on a five-point scale, where one represents strong disagreement and five represents strong agreement (Hashmi & Modh, 2020). All of the constructs employed in the investigation were derived from past studies. The sources of the questionnaire items are detailed in Table 1, and the entire questionnaire is included as an appendix.
Respondent's Characteristics
Enumerators visited different Red Bus stops and offices; 200 questionnaires were distributed and 152 were returned. Table 2 displays the profile of the respondents. Of the 152 respondents, 22.3% were female and 77.6% were male. Regarding age, 4.6% of respondents were between 15 and 20 years old, 3.2% between 21 and 25, 25% between 26 and 30, 34.8% between 31 and 35, and 31.5% between 36 and 40. In terms of education, 42.7% of the participants held matriculation certificates, 23.6% intermediate certificates, 23.6% bachelor's degrees, and 7.8% advanced degrees, while the remaining 1.9% reported other qualifications.
Descriptive Analysis
Table 3 contains the reliability and validity statistics for the measurement constructs used in this research. Cronbach's alpha for DC (Disruption Co-occurrence) is 0.696, which, while slightly below the typical threshold of 0.7, is still acceptable and indicates reasonable internal consistency (Rashid et al., 2019). Its composite reliability of 0.832 surpasses the recommended threshold of 0.7, suggesting good reliability (Rashid & Rasheed, 2023), and its AVE of 0.624 indicates that the construct explains a substantial proportion of the variance in its items, demonstrating adequate convergent validity (Fornell & Larcker, 1981; Khan et al., 2023a, b). Like DC, DN (Disruption Non-routineness) demonstrates good internal consistency and reliability, with Cronbach's alpha and composite reliability exceeding 0.7 (Hashmi, 2022; Khan et al., 2021); its AVE of 0.660 indicates convergent validity. For EP (Economic Performance), Cronbach's alpha is slightly below 0.7 while composite reliability is above the threshold, indicating satisfactory internal consistency, and the AVE of 0.614 suggests reasonable convergent validity. The construct "Flexibility to the Disturbance" demonstrates acceptable reliability, with Cronbach's alpha and composite reliability above 0.7; however, its AVE of 0.531, while still acceptable, suggests room for improvement in convergent validity. The remaining construct, Information Sharing during Disturbances, also exhibits strong internal consistency and reliability, with Cronbach's alpha and composite reliability well above 0.7 (Hashmi, 2023; Khan et al., 2022) and an AVE of 0.512 indicating reasonable convergent validity. In summary, most of the constructs (DC, DN, EP, and Information Sharing during Disturbances) demonstrate strong internal consistency and reliability, and their AVE values suggest reasonable convergent validity; "Flexibility to the Disturbance" has a slightly lower AVE, indicating a potential area for improvement.

Fornell and Larcker's (1981) criterion was applied to evaluate discriminant validity; Table 4 presents a summary of the outcome. The findings indicate that the constructs tested in the research are distinct, because the square roots of the AVE values are greater than the corresponding inter-construct correlations (Fornell & Larcker, 1981; Agha et al., 2021).

This research put forward three direct and two moderating hypotheses, which were tested using the structural model. Table 5 reports the results. The findings support all of the direct hypotheses except one: DN does not significantly affect flexibility to the disturbance (β = −0.083, t = 0.939, p > 0.05). Similarly, neither moderating relationship is supported: Info Sharing during Disturbances × DC -> Flexibility to the Disturbance (β = −0.139, t = 1.383, p > 0.05) and Info Sharing during Disturbances × EP -> Flexibility to the Disturbance (β = 0.085, t = 0.849, p > 0.05).
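For readers who want to verify the reliability checks reported above, the sketch below shows how composite reliability (CR), average variance extracted (AVE), and the Fornell–Larcker comparison are conventionally computed from standardized outer loadings and inter-construct correlations. The loading and correlation values used are placeholders, not the study's estimates; only the AVE values quoted from the text are taken from Table 3.

```python
# Minimal sketch of the reliability/validity checks reported in Tables 3 and 4.
# Loadings and the inter-construct correlation below are placeholders.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    assuming standardized loadings so that error variance = 1 - loading^2."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

dc_loadings = [0.78, 0.81, 0.77]  # hypothetical standardized loadings for DC items
print("CR :", round(composite_reliability(dc_loadings), 3))
print("AVE:", round(average_variance_extracted(dc_loadings), 3))

# Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
# correlation with every other construct.
ave = {"DC": 0.624, "DN": 0.660}   # AVE values as reported in the text
corr_dc_dn = 0.45                  # placeholder inter-construct correlation
print("Discriminant validity (DC vs DN):",
      np.sqrt(ave["DC"]) > corr_dc_dn and np.sqrt(ave["DN"]) > corr_dc_dn)
```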
Discussion
Hypotheses and conclusions:
H1. There is a negative relationship between disruption co-occurrence and CI resistance to focal disruption. — Accepted
H2. There is a negative relationship between disruption non-routineness and CI resistance to focal disruption. — Rejected
H3. There is a positive relationship between economic performance and CI resilience to focal disruption. — Accepted
H4. The association between disruption co-occurrence and CI resistance to focal disruption is moderated by cross-border information exchange during disruption: the negative association is strengthened when cross-border information flow is minimal and attenuated when it is high. — Rejected
H5. The link between economic performance and CI resilience to disruption is moderated by cross-border information exchange during disruption: the positive association is strengthened when cross-border information flow is minimal and attenuated when it is high. — Rejected
Source: Results outcome
Our research has explored various facets of this intricate relationship, shedding light on the dynamics between disruptions, organizational characteristics, economic performance, and the role of cross-border information exchange. This discussion aims to provide a nuanced understanding of the implications of our research in the context of critical infrastructure resilience.
We found substantial evidence to support Hypothesis 1, which posited a negative relationship between disruption co-occurrence and critical infrastructure (CI) resistance to focal disruption. This finding underscores the significance of coordinated response mechanisms within the inter-organizational networks of Red Buses. Organizations that effectively manage disruptions through collaborative efforts are better equipped to resist the impact of focal disruptions, ultimately contributing to CI resilience (Barratt-Pugh et al., 2020). Contrary to our expectations, Hypothesis 2 was rejected: the posited negative relationship between disruption non-routineness and CI resistance to focal disruption was not supported. This unexpected result highlights the adaptability and versatility of Red Bus networks in responding to disruptions of varying natures; a certain level of non-routineness can even be advantageous, allowing organizations to adapt quickly to unexpected challenges.
Our analysis supported Hypothesis 3, indicating a positive relationship between economic performance and CI resilience to focal disruption. This finding underscores the importance of economic stability in bolstering CI resilience: Red Bus networks that prioritize economic performance are better positioned to withstand the adverse effects of disruptions through investment in contingency planning and resource allocation (Bhamra et al., 2011). Our research did not support Hypotheses 4 and 5, which proposed that the associations of disruption co-occurrence and economic performance with CI resilience would be moderated by cross-border information exchange during disruption. This suggests that the impact of information exchange on CI resilience may be more complex than initially anticipated and emphasizes the need for further investigation into how information exchange influences resilience within inter-organizational networks.
Practical and Managerial Implications
The acceptance of Hypotheses 1 and 3 highlights the practical significance of fostering collaboration among Red Bus organizations and of prioritizing economic performance. Organizations should strengthen their inter-organizational ties and focus on financial robustness to enhance their CI resilience. Additionally, the rejection of Hypothesis 2 underscores the importance of flexibility and adaptability within these networks. Policymakers and regulators can leverage our findings to shape policies promoting inter-organizational network resilience. Potential policy actions include encouraging the development of standardized procedures for cross-border information exchange, incentivizing investments in infrastructure maintenance, and establishing contingency planning requirements. Practitioners should invest in training and development programs for their teams. Training in crisis management, communication protocols, and scenario planning exercises can enhance the readiness of Red Bus network members to respond effectively to disruptions (Barratt-Pugh et al., 2020).
Limitations and Future Research
We applied Information Processing Theory to analyze the dynamics within these networks and tested hypotheses related to disruption co-occurrence, non-routineness, economic performance, and the moderating role of cross-border information exchange. The results of our study offer valuable insights for practitioners, decision-makers, and policymakers in the transportation and critical infrastructure sectors, but there is room for further research. Future studies could delve deeper into the specific mechanisms through which information sharing influences resilience. Additionally, exploring the role of emerging technologies such as AI and IoT in enhancing response capabilities is a promising avenue for future research.
Conclusion
In conclusion, our research on resilience in Red Buses' inter-organizational networks dealing with critical infrastructures' daily disruptions offers practical and managerial implications. The findings emphasize the importance of collaborative response, adaptability, economic stability, and information exchange. By implementing the recommendations outlined above, organizations and policymakers can contribute to the resilience and reliability of critical infrastructure networks, ensuring their continued functionality in the face of disruptions.
"Engineering",
"Environmental Science",
"Business"
] |
Myeloid-derived suppressor cells impair CD4+ T cell responses during chronic Staphylococcus aureus infection via lactate metabolism
Staphylococcus aureus is an important cause of chronic infections resulting from the failure of the host to eliminate the pathogen. Effective S. aureus clearance requires CD4+ T cell-mediated immunity. We previously showed that myeloid-derived suppressor cells (MDSC) expand during staphylococcal infections and support infection chronicity by inhibiting CD4+ T cell responses. The aim of this study was to elucidate the mechanisms underlying the suppressive effect exerted by MDSC on CD4+ T cells during chronic S. aureus infection. It is well known that activated CD4+ T cells undergo metabolic reprogramming from oxidative metabolism to aerobic glycolysis to meet their increased bioenergetic requirements. In this process, pyruvate is largely transformed into lactate by lactate dehydrogenase with the concomitant regeneration of NAD+, which is necessary for continued glycolysis. The by-product lactate needs to be excreted to maintain the glycolytic flux. Using SCENITH (single-cell energetic metabolism by profiling translation inhibition), we demonstrate here that MDSC inhibit CD4+ T cell responses by interfering with their metabolic activity. MDSC are highly glycolytic and excrete large amounts of lactate into the local environment, which alters the transmembrane concentration gradient and prevents removal of lactate by activated CD4+ T cells. Accumulation of endogenous lactate impedes the regeneration of NAD+, inhibits NAD-dependent glycolytic enzymes and stops glycolysis. Together, the results of this study uncover a role for metabolism in MDSC-mediated suppression of CD4+ T cell responses. Thus, reestablishment of their metabolic activity may represent a means to improve the functionality of CD4+ T cells during chronic S. aureus infection. Supplementary Information The online version contains supplementary material available at 10.1007/s00018-023-04875-9.
Introduction
Staphylococcus aureus is a major human pathogen and a leading cause of morbidity and mortality worldwide [1]. S. aureus can cause recurrent and chronic diseases such as chronic implant-related bone infections despite appropriate antibiotic treatment [2]. An important factor contributing to the chronicity and infection recurrence is the failure of the host to develop effective T cell responses against S. aureus [3,4]. An understanding of the factors responsible for the host failure to generate effective cellular immunity against the pathogen is important for improving the management of chronic staphylococcal infections.
Several lines of evidence from animal models and human studies have converged to show that CD4+ T cells, in particular Th17 and Th1 subsets, are at the frontline of the adaptive immune response to S. aureus [5][6][7][8][9][10]. After recognition of antigen via the T cell receptor (TCR) and the receipt of costimulatory signals, CD4+ T cells become activated, undergo extensive proliferation and acquire effector functions, such as the ability to produce cytokines and other effector molecules [11]. Thus, although CD4+ T cells are not directly involved in bacterial killing, they can facilitate S. aureus clearance by increasing the recruitment of phagocytic cells to the site of infection and by enhancing their antimicrobial activity via the production of cytokines such as IFN-γ and IL-17 [8,9,12]. However, we have reported in previous studies that the functionality of CD4+ T cells becomes compromised with the progression of S. aureus infection toward chronicity [13]. CD4+ T cell dysfunction manifested as decreased production of effector cytokines and impaired proliferative responses upon TCR stimulation [13]. We have also reported that the CD4+ T cell dysfunction observed during chronic S. aureus infection was attributable to extrinsic suppressive mechanisms exerted by MDSC [14,15]. MDSC comprise an aberrant population of immature myeloid cells that accumulate during pathological conditions such as cancer and chronic infections and are potent inhibitors of T cell responses [16,17]. MDSC have been reported to play an important role in chronic infections caused by S. aureus in humans as well as in experimental infection in mice [14,[18][19][20]. Therefore, targeting the immunosuppressive mechanisms exerted by MDSC could be a promising strategy to restore T cell function and facilitate S. aureus clearance during chronic infection. Although the molecular mechanisms underlying the suppressive effect of MDSC on T cell responses in the cancer setting have been addressed in many studies, the suppressive mechanisms of MDSC in chronic infections remain unclear.
The aim of the current study was to elucidate the mechanisms through which MDSC mediate T cell dysfunction in the setting of S. aureus chronic infection.
Following antigen recognition, CD4+ T cells become activated and undergo metabolic reprogramming, switching from an oxidative phosphorylation-dependent catabolic state to a highly glycolytic state and adopting aerobic glycolysis in order to support the increased energetic demands required for cell proliferation and for the synthesis of effector molecules [21][22][23][24][25]. During aerobic glycolysis, glucose is converted to pyruvate, which is reduced directly to lactate in the cytoplasm by lactate dehydrogenase instead of entering the mitochondria for oxidation [25]. In this reaction, NAD+, which is an important co-factor for several glycolytic enzymes [26], is regenerated from NADH to enable the continuation of glycolysis [25]. Activated CD4+ T cells also need to constantly excrete the excess lactate produced during aerobic glycolysis in order to avoid the reversal of the lactate dehydrogenase reaction, which would shut down regeneration of NAD+ and discontinue the glycolytic process. Lactate is largely exported by activated CD4+ T cells via proton-linked monocarboxylate transporters following a concentration gradient [27]. A limitation in the capacity of activated T cells to undergo the metabolic shift toward aerobic glycolysis, whether through nutrient limitation or through inhibition of lactate excretion, has been shown to impair T cell proliferation and cytokine production [21,[28][29][30][31][32].
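The pivotal step in this argument is the reversible lactate dehydrogenase (LDH) reaction, which couples lactate production to NAD+ regeneration. Writing it out (standard biochemistry, consistent with the description above) makes clear why intracellular lactate accumulation pushes the equilibrium back toward pyruvate and NADH and thereby starves upstream glycolysis of NAD+:

```latex
% Reversible LDH reaction: in a lactate-rich environment, intracellular lactate
% accumulates and drives the equilibrium leftward, depleting the NAD+ required
% by NAD-dependent glycolytic enzymes.
\mathrm{Pyruvate} + \mathrm{NADH} + \mathrm{H}^{+}
  \;\underset{\mathrm{LDH}}{\rightleftharpoons}\;
  \mathrm{Lactate} + \mathrm{NAD}^{+}
```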
We have recently shown that MDSC have an aberrant metabolism with very high glycolytic activity associated with the consumption of large amounts of glucose and the release of elevated levels of lactate into the extracellular milieu [15]. In the present study, we provide evidence that the lactate-rich local environment generated by MDSC in S. aureus-infected mice hinders CD4+ T cell activation by impairing NAD+ regeneration and disrupting glycolytic flux.
Bacteria
The S. aureus strain 6850 was used in this study [33]. S. aureus was grown to the mid-log phase in brain heart infusion medium (BHI, Roth) at 37 °C with shaking (120 rpm), collected by centrifugation, washed with sterile PBS, and diluted to the required concentration.
Experimental murine infection model and spleen cells isolation
Pathogen-free 10-week-old C57BL/6 female mice were purchased from Charles River (Germany) and maintained according to institutional guidelines in individually ventilated cages with food and water provided ad libitum. Mice were intravenously inoculated with 10⁶ CFU of S. aureus in 100 μl of PBS via a lateral tail vein, and sacrificed by CO₂ asphyxiation at day 21 after bacterial inoculation. The spleen was removed from infected mice and single-cell suspensions were prepared by gently teasing the spleen tissue through a 100 µm pore size nylon cell strainer; the bone marrow was flushed out of both tibia and femur. Spleen and bone marrow cells were spun down, erythrocytes were lysed by incubation for 5 min at RT in ammonium-chloride-potassium lysing buffer (Lonza), and cells were washed with PBS + 10% FCS and resuspended in RPMI-1640 medium (Gibco) supplemented with 10% FCS and antibiotic-antimycotic (VWR International).
Proliferation assay
Spleen cells were seeded in 96-well plates at a concentration of 5 × 10⁶/ml and incubated in the presence of 2 μg/ml of Armenian hamster anti-mouse CD3 plus 2 μg/ml of Syrian hamster anti-mouse CD28 antibodies (BD Pharmingen) for the specified time periods. Before stimulation, spleen cells were stained with the CellTrace™ CFSE Cell Proliferation kit (Invitrogen) according to the manufacturer's instructions. Cell proliferation was monitored by flow cytometry using CFSE dilution.
In some experiments, nicotinamide riboside (NR) (Sigma-Aldrich) at a concentration of 200 µM or the MCT1-selective inhibitor AZD3965 (Cayman) at a concentration of 100 nM was added to the cultures.
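As an informal illustration of how the CFSE-dilution readout from the proliferation assay above can be quantified, the sketch below derives the percentage of divided cells from per-cell CFSE fluorescence intensities, using the undivided peak of an unstimulated control to set a dilution threshold. The threshold rule and the synthetic data are assumptions for illustration only; the actual gating in this study was performed in FlowJo.

```python
# Rough sketch: percentage of divided CD4+ T cells from CFSE dilution.
# The gating rule and synthetic intensities are illustrative assumptions;
# the study's actual analysis was done in FlowJo.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic log10 CFSE intensities: unstimulated control (single bright peak)
# and a stimulated sample containing CFSE-diluted (divided) cells.
control = rng.normal(loc=4.0, scale=0.1, size=5000)
stimulated = np.concatenate([
    rng.normal(loc=4.0, scale=0.1, size=2000),   # undivided cells
    rng.normal(loc=3.2, scale=0.3, size=3000),   # cells that diluted CFSE by dividing
])

# Gate: cells dimmer than (control peak - 3 SD) are counted as divided.
threshold = control.mean() - 3 * control.std()
percent_divided = 100.0 * np.mean(stimulated < threshold)
print(f"Divided CD4+ T cells: {percent_divided:.1f}%")
```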
Flow cytometry
Cell suspensions were incubated with anti-mouse CD16/32 (eBioscience) for 5 min at RT to block Fc receptors and stained for 20 min at 4 °C with anti-mouse CD4 antibodies (Biolegend). Cells were washed with PBS + 10% FCS and analyzed on a LSRII cytometer (Becton Dickinson).
For intracellular cytokine staining, cells were first stained with anti-mouse CD4 antibodies as described above, fixed for 15 min at RT with fixation buffer (Biolegend), washed twice with permeabilization buffer (BioLegend), and stained with anti-mouse IL-2 or anti-mouse IFN-γ antibodies. After washing with permeabilization buffer, cells were analyzed on a LSRII cytometer.
Cell viability was determined by flow cytometry using Zombie fixable viability kit according to the manufacturer's recommendations (BioLegend).
MDSC depletion
Ly6C+Ly6G+ MDSC were removed from the spleen cell suspensions prior to in vitro stimulation using the mouse Myeloid-Derived Suppressor Cell Isolation Kit (Miltenyi Biotec) according to the manufacturer's instructions. The negative fraction constituted the MDSC-depleted spleen cell population. Efficacy of depletion was > 90% as confirmed by flow cytometry.
SCENITH assay
SCENITH was performed according to the protocol published by Argüello et al. [34]. In brief, spleen cells isolated from either uninfected or S. aureus-infected mice at day 21 of infection were seeded in 96-well plates at a concentration of 5 × 10⁶/ml and incubated in the presence of 2 µg/ml of anti-CD3 plus anti-CD28 antibodies for the specified times at 37 °C and 5% CO₂. Cells incubated in medium without antibodies were used as controls. DMSO or the metabolic inhibitors 2-deoxy-D-glucose (2-DG, 100 mM) (Sigma-Aldrich), oligomycin (1 μM) (Sigma-Aldrich), or 2-DG plus oligomycin were added to the wells at the specified time points and further incubated for 1 h. Puromycin (Sigma-Aldrich) was added to the wells at a final concentration of 10 μg/ml during the last 30 min of incubation. After washing with cold PBS, cells were stained for surface CD4 as described above, fixed and permeabilized using the FOXP3 fixation and permeabilization kit (eBioScience) according to the manufacturer's instructions. Intracellular staining of puromycin was performed after incubation with anti-puromycin antibodies (Merck) for 1 h in permeabilization buffer. After washing with permeabilization buffer, cells were analyzed on a LSRII cytometer.
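The SCENITH readouts reported in the Results (glucose dependence, mitochondrial dependence, glycolytic capacity) are derived from the anti-puromycin MFI measured under the four treatment conditions. The sketch below follows the derivations published by Argüello et al. [34]; the MFI values are placeholders, not data from this study.

```python
# Sketch of the SCENITH metabolic-dependence calculations (after Argüello et al. [34]).
# The puromycin MFI values below are placeholders, not measurements from this study.
def scenith_profile(mfi_control, mfi_2dg, mfi_oligo, mfi_dgo):
    """Derive metabolic dependences (%) from anti-puromycin MFI under the four
    SCENITH conditions: DMSO control, 2-DG, oligomycin, and 2-DG + oligomycin."""
    span = mfi_control - mfi_dgo                        # total inhibitable translation
    glucose_dep = 100.0 * (mfi_control - mfi_2dg) / span
    mito_dep = 100.0 * (mfi_control - mfi_oligo) / span
    return {
        "glucose_dependence": glucose_dep,
        "mitochondrial_dependence": mito_dep,
        "glycolytic_capacity": 100.0 - mito_dep,
        "fao_aao_capacity": 100.0 - glucose_dep,
    }

# Example with placeholder MFI values for stimulated CD4+ T cells
print(scenith_profile(mfi_control=12000, mfi_2dg=9000, mfi_oligo=5000, mfi_dgo=2000))
```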
Glut-1 staining
For intracellular staining of Glut-1, spleen cells were stained first with anti-mouse CD4 antibodies as described above, fixed for 15 min at RT with fixation buffer (Biolegend), washed twice with permeabilization buffer (BioLegend), and stained with anti-Glut-1 antibodies (Novus Biologicals). After washing with permeabilization buffer, cells were analyzed on a LSRII cytometer.
Lactate measurement
Lactate concentrations were measured in culture supernatants or in homogenized spleen tissue using the commercially available Amplite Colorimetric L-Lactate Assay Kit (Elabscience) following the manufacturer's instructions.
Isolation of CD4+ T cells
CD4+ T cells were isolated from cultured spleen cells using the mouse CD4+ T cell Isolation Kit (Miltenyi Biotec) according to the manufacturer's instructions.
NAD/NADH ratio assay
NAD+ and NADH levels were measured in isolated CD4+ T cells using the Amplite™ Colorimetric NAD/NADH Ratio Assay kit according to the manufacturer's instructions (AAT Bioquest).
Analysis
Statistical analysis was performed using GraphPad Prism version 9.4.1 software. Differences between two groups were determined using a Student's t test. Groups of three or more were analyzed by one-way analysis of variance (ANOVA). Flow cytometry data were analyzed using FlowJo v9.3 software.
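For completeness, the two comparisons described above map onto standard SciPy calls, as sketched below. The arrays are placeholders rather than data from the experiments; the study itself used GraphPad Prism.

```python
# Minimal sketch of the statistical tests described above (placeholder data;
# the study itself used GraphPad Prism).
import numpy as np
from scipy import stats

uninfected = np.array([72.0, 68.5, 75.1])              # e.g., % divided CD4+ T cells
infected = np.array([31.2, 28.7, 35.9])
infected_mdsc_depleted = np.array([64.0, 70.3, 66.8])

# Two-group comparison: Student's t test
t_stat, p_two_groups = stats.ttest_ind(uninfected, infected)

# Three or more groups: one-way ANOVA
f_stat, p_anova = stats.f_oneway(uninfected, infected, infected_mdsc_depleted)

print(f"t test p = {p_two_groups:.4f}, ANOVA p = {p_anova:.4f}")
```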
Inhibition of CD4+ T cell responses by MDSC is linked to metabolism
We have previously shown that the functionality of CD4+ T cells became compromised during chronic S. aureus infection [13]. Thus, whereas CD4+ T cells in spleen cells isolated from uninfected mice actively proliferated in response to stimulation with anti-CD3/anti-CD28 antibodies (Fig. 1a, b) and produced significant amounts of effector cytokines such as IL-2 and IFN-γ (Fig. 1c, d), CD4+ T cells in spleen cells isolated from S. aureus-infected mice were impaired in their capacity to proliferate (Fig. 1a, b) and to produce IL-2 and IFN-γ (Fig. 1c, d) upon stimulation with anti-CD3/anti-CD28 antibodies. We have also previously reported that immunosuppression of CD4+ T cell responses in S. aureus-infected mice was mediated by MDSC, an aberrant population of myeloid cells that expand during pathological conditions including chronic infections [14,15]. MDSC accumulate in the spleen (Supplementary Fig. S2a, b) and bone marrow (Supplementary Fig. S2c, d) of S. aureus-infected mice and are responsible for T cell dysfunction [14,15]. Indeed, CD4+ T cells in spleen cells isolated from infected mice recovered their capacity to proliferate (Fig. 1a, b) and to produce IL-2 and IFN-γ (Fig. 1c, d) in response to stimulation with anti-CD3/anti-CD28 antibodies after depletion of MDSC.
In the study presented here, we investigated the mechanisms underlying the suppressive effect exerted by MDSC on CD4+ T cell responses in S. aureus-infected mice. Because activated CD4+ T cells undergo a shift in metabolism toward aerobic glycolysis, which is absolutely required to synthesize the intermediates needed for cell proliferation and cytokine production [21][22][23][24][25], we considered the possibility that MDSC could impair CD4+ T cell responses in S. aureus-infected mice by limiting their capacity to undergo this metabolic shift. To investigate this hypothesis, we first assessed the metabolic activity of CD4+ T cells in spleen cells isolated from either S. aureus-infected or uninfected mice upon stimulation with anti-CD3/anti-CD28 antibodies using the recently described single-cell energetic metabolism by profiling translation inhibition (SCENITH) method [34]. SCENITH uses protein translation as a surrogate of metabolic activity and makes it possible to analyze the metabolic activity of specific cell subsets within heterogeneous populations [34]. The degree of protein translation is determined by measuring the extent of puromycin incorporation into nascent polypeptides after staining with anti-puromycin antibodies using flow cytometry [34]. The results of this analysis showed that stimulation with anti-CD3/anti-CD28 antibodies induced a significant increase in the metabolic activity of CD4+ T cells in spleen cells isolated from S. aureus-infected mice (Fig. 2a, c), although to a significantly lower extent than that observed in CD4+ T cells in spleen cells isolated from uninfected animals (Fig. 2b, c). Furthermore, whereas the metabolic activity of anti-CD3/anti-CD28-stimulated CD4+ T cells in the spleen cells from uninfected mice was maintained at high levels during the whole incubation period, the metabolic activity of stimulated CD4+ T cells from infected mice progressively decreased during the incubation time (Fig. 2a, c).
CD4+ T cells in the spleen of S. aureus-infected mice are not impaired in their capacity to up-regulate Glut-1 and to uptake glucose upon TCR stimulation
The first step in glucose utilization by CD4+ T cells during metabolic reprogramming upon activation is an increase in glucose uptake via up-regulation of the glucose transporter-1 (Glut-1) [35]. Therefore, we investigated whether the lower metabolic activity of anti-CD3/anti-CD28-stimulated spleen CD4+ T cells from S. aureus-infected mice was due to an impaired capacity to up-regulate Glut-1. Determination of Glut-1 expression by flow cytometry indicated that Glut-1 was significantly up-regulated in CD4+ T cells in spleen cells from infected mice at 24 h after stimulation, to an extent similar to that observed in CD4+ T cells in the spleen from uninfected mice (Fig. 3a, b). We also measured glucose uptake by spleen CD4+ T cells from infected or uninfected mice in response to stimulation with anti-CD3/anti-CD28 antibodies using 2-NBDG. The results depicted in Fig. 3c, d show that although glucose uptake increased upon stimulation in CD4+ T cells from infected mice, the levels of glucose uptake were significantly lower than those of CD4+ T cells from uninfected animals. Since we have previously shown that MDSC, present in the spleen of S. aureus-infected mice in high numbers, consume elevated amounts of glucose [15], we hypothesized that the lower amount of glucose taken up by stimulated CD4+ T cells from infected mice may result from reduced glucose bioavailability due to the high consumption by the MDSC. To investigate this possibility, we determined the effect of adding increasing concentrations of glucose (ranging from 10 to 100 mM) to the culture medium on the proliferative response and production of IL-2 and IFN-γ by CD4+ T cells upon activation with anti-CD3/anti-CD28 antibodies. The results show that addition of increasing concentrations of extracellular glucose did not rescue the capacity of CD4+ T cells from infected mice to proliferate (Fig. 3e) or to produce cytokines (Fig. 3f) upon stimulation. Thus, glucose deprivation does not seem to be the mechanism underlying the inhibitory effect exerted by MDSC on CD4+ T cell responses in the spleen of S. aureus-infected mice. Because CD4+ T cells critically depend on reprogramming their metabolic activity toward aerobic glycolysis to meet the bioenergetic demands during activation [21][22][23][24][25], we next investigated whether CD4+ T cells in the spleen of S. aureus-infected mice were capable of reprogramming their metabolic activity toward aerobic glycolysis upon stimulation with anti-CD3/anti-CD28 antibodies.
(Fig. 1 legend, fragment: a, b Proliferation of CD4+ T cells from uninfected (left panels), S. aureus-infected (middle panels), and MDSC-depleted infected (lower panels) spleen cells upon stimulation with anti-CD3/anti-CD28 antibodies; the gating strategy is described in Supplementary Fig. S1; the percentage of divided CD4+ T cells in each group is shown in b. c Flow cytometry contour plots showing intracellular staining of IL-2 (upper panels) and IFN-γ (lower panels) in CD4+ T cells from the same groups cultured for 72 h in the presence (green) or absence (red) of anti-CD3/anti-CD28 antibodies. d Frequencies of CD4+ T cells expressing IL-2 (upper panel) or IFN-γ (lower panel). Each bar shows the mean ± SD of three independent experiments. ***p < 0.001, ****p < 0.0001.)
To this end, we used SCENITH to determine the effect on their metabolic activity of inhibiting glycolysis by treatment with 2-DG or inhibiting oxidative phosphorylation by treatment with oligomycin. We also included spleen cells from uninfected mice in the analysis for comparison. In line with previously published data [36], we observed that the metabolic activity of unstimulated spleen CD4+ T cells from either uninfected (Figs. 4a, 5a) or S. aureus-infected (Figs. 4b, 5b) mice relied heavily on oxidative phosphorylation, since inhibition of glycolysis by treatment with 2-DG did not have a significant effect on their metabolic activity, whereas inhibition of oxidative phosphorylation using oligomycin resulted in a marked reduction of their metabolic activity. However, the contribution of oxidative phosphorylation to the total metabolic activity was reduced upon anti-CD3/anti-CD28 antibody stimulation (from > 80% in unstimulated to 50-60% in stimulated cells) in CD4+ T cells from uninfected mice, whereas the contribution of glycolysis increased progressively during the stimulation time (Figs. 4a, 5a). In CD4+ T cells from infected mice, the contribution of oxidative phosphorylation to the total metabolic activity was also reduced upon anti-CD3/anti-CD28 antibody stimulation (from > 80% in unstimulated to 50-60% in stimulated cells) (Figs. 4b, 5b). However, in contrast to CD4+ T cells from uninfected mice, the contribution of glycolysis to the total metabolic activity increased sharply at 24 h and then progressively declined during the incubation time (Figs. 4b, 5b). These data indicate that CD4+ T cells in the spleen of S. aureus-infected mice are capable of increasing their glycolytic activity during the first 24 h after TCR stimulation, but they are unable to sustain this glycolytic activity during the entire stimulation time.
Inhibition of CD4+ T cell responses in the spleen cells of S. aureus-infected mice is mediated by increased extracellular lactate and reduction of intracellular NAD+/NADH ratio
Several studies have reported that high levels of extracellular lactate can impair T cell proliferation and cytokine production [29,[37][38][39]. Lactate is a by-product of aerobic glycolysis that is produced via lactate dehydrogenase by conversion of pyruvate and NADH into lactate and NAD+ [25]. NAD+ is an important cofactor that regulates glycolysis through its electron transfer function in redox reactions, in which it is reversibly reduced to NADH [26]. NAD+ needs to be regenerated from NADH through the conversion of pyruvate to lactate by lactate dehydrogenase to keep the glycolytic flux active [25]. Because this reaction is reversible, lactate needs to be continuously excreted by activated T cells to keep the reaction moving toward lactate and NAD+ production [25]. Lactate export is largely achieved by transport systems such as monocarboxylate transporter 1 (MCT1), a bidirectional proton-assisted transporter that cotransports protons and lactate anions through the plasma membrane depending on the concentration gradient [27,40]. In lactate-rich environments, lactate accumulates within activated T cells, leading to impaired NAD+ regeneration, blockage of glycolytic NAD+-dependent enzymatic reactions and a drastic reduction of the intermediates needed for proliferation [29]. Since we have previously shown that MDSC in the spleen of S. aureus-infected mice excrete high levels of lactate [15], we speculated that MDSC generate a lactate-rich environment in the spleen of infected mice that may hamper the lactate excretion and NAD+ regeneration of CD4+ T cells during activation. A graphic schema of this hypothesis is shown in Fig. 6a. To investigate this assumption, we first determined whether the concentration of lactate differed between the spleens of S. aureus-infected and uninfected mice. We found a significantly greater concentration of lactate in the spleen of infected mice compared to uninfected animals (Fig. 6b). Furthermore, spleen cells from infected mice produced significantly greater levels of lactate than those from uninfected animals after in vitro incubation in culture medium (Fig. 6c). The concentration of lactate increased in the culture supernatant of spleen cells from both infected and uninfected mice after stimulation with anti-CD3/anti-CD28 antibodies, although the lactate levels were significantly greater in spleen cells from infected than from uninfected mice (Fig. 6c). Removing the excess lactate by changing the medium every 24 h resulted in a significant recovery of the proliferative capacity of CD4+ T cells in spleen cells from infected mice (Fig. 6d, e). On the other hand, abrogation of lactate excretion in stimulated spleen CD4+ T cells using the specific MCT1 inhibitor AZD3965 [41] resulted in significantly reduced proliferation of CD4+ T cells from uninfected mice but did not affect the unresponsiveness of CD4+ T cells from infected animals (Fig. 6f, g). These observations underscore the relevance of lactate excretion via MCT1 for a proper response of CD4+ T cells upon activation.
(Fig. 4 legend: Metabolic profile of spleen CD4+ T cells from either uninfected or S. aureus-infected mice upon stimulation with anti-CD3/anti-CD28 antibodies. Flow cytometry histograms show the levels of protein translation (puromycin MFI) in CD4+ T cells in spleen cells from uninfected (a) or S. aureus-infected (b) mice at the indicated times of stimulation with anti-CD3/anti-CD28 antibodies and treated with either the glycolysis inhibitor 2-DG (upper histograms) or the oxidative phosphorylation inhibitor oligomycin (lower histograms). Quantification of protein translation levels (puromycin MFI) in the different conditions is shown in the lower panels in a and b. *p < 0.05, ****p < 0.0001.)
Since a high concentration of lactate can alter the NAD+/NADH redox balance in activated CD4+ T cells, which can affect cell proliferation [29], we determined the intracellular NAD+/NADH ratio in stimulated CD4+ T cells in spleen cells isolated from either uninfected or S. aureus-infected mice. For this purpose, spleen cells were cultured in the presence or absence of anti-CD3/anti-CD28 antibodies and the NAD+/NADH ratio was determined in isolated CD4+ T cells at 24 h, 48 h and 72 h of incubation. The results show that whereas the NAD+/NADH ratio did not change significantly during the incubation period in stimulated CD4+ T cells from uninfected mice, it progressively declined with the time of incubation in stimulated CD4+ T cells from infected mice, thus indicating a redox shift from NAD+ to NADH (Fig. 7a). To further investigate the relevance of this redox shift in the suppression of CD4+ T cell responses in the spleen of infected mice, we determined the effect of increasing NAD+ by supplementing the cell cultures with nicotinamide riboside (NR), a precursor that has been shown to increase the levels of NAD+ in cells [42]. We found that NR supplementation increased the NAD+/NADH ratio in activated CD4+ T cells from infected mice (Fig. 7a). This increase was most evident at 72 h of incubation (Fig. 7a). We also observed that NR supplementation improved the capacity of spleen CD4+ T cells from infected mice to proliferate after stimulation with anti-CD3/anti-CD28 antibodies but had no major effect on the proliferative activity of CD4+ T cells from uninfected mice (Fig. 7b, c). Together, these results indicate that the high levels of lactate released by MDSC in the spleen of infected mice likely provoke a redox shift in activated CD4+ T cells that may be responsible, at least in part, for their unresponsiveness to TCR stimulation.
Discussion
We have previously reported that MDSC expand during S. aureus infection and exert a suppressive effect on CD4+ T cells that supports infection chronicity [14]. We have also shown that MDSC exhibit a dysregulated metabolism in chronically infected mice, characterized by high glycolytic activity and release of large amounts of lactate [15]. In this study, we have investigated the mechanism underlying the immunosuppressive effect exerted by MDSC on CD4+ T cell responses during chronic S. aureus infection. Our results support the notion that the dysregulated metabolic activity of MDSC generates a lactate-rich local environment in the spleen of infected mice that is responsible for the suppression of CD4+ T cell responses. The sensitivity of CD4+ T cells toward high concentrations of exogenous lactate can be attributed to the specific metabolic reprogramming that CD4+ T cells undergo upon activation to supply the energetic requirements associated with highly proliferative and biosynthetic processes [21, 23-25, 36, 43-45].
The primary metabolic adaptation of CD4+ T cells upon TCR stimulation is a switch from oxidative phosphorylation to aerobic glycolysis involving a marked increase in glucose uptake and a change in the fate of glucose carbons [23-25, 36, 44, 46-48]. In resting CD4+ T cells, glucose is converted into pyruvate that enters the TCA cycle in the mitochondria to undergo oxidative phosphorylation, leading to the production of ATP [49]. Activated CD4+ T cells, on the other hand, use predominantly aerobic glycolysis, in which a large proportion of pyruvate does not enter the TCA cycle in the mitochondria but is rather converted into lactate in the cell cytosol through the action of lactate dehydrogenase [49]. Although aerobic glycolysis is less efficient than oxidative phosphorylation, yielding only four moles of ATP per glucose molecule, this pathway produces ATP faster than oxidative phosphorylation to meet the energy demand of rapidly dividing cells [22]. However, lactate molecules have to be exported by activated CD4+ T cells to ensure the continuation of glycolysis. Since lactate anions cannot cross the plasma membrane by free diffusion [50], lactate is exported via monocarboxylate transporter systems, which cotransport protons and lactate anions following a concentration gradient [27,51]. A high concentration of extracellular lactate can reverse lactate flux and, in this way, interfere with CD4+ T cell activation. Indeed, inhibition of lactate export through the monocarboxylate transporter MCT1 using pharmacological inhibitors has been shown to suppress T cell proliferation [52]. In this study, we provide evidence that the high concentration of extracellular lactate resulting from the metabolic activity of MDSC may be responsible for the suppression of CD4+ T cell responses in the spleen of S. aureus-infected mice. This situation is similar to that described in the cancer setting, where lactate released by tumor cells into the local environment opposes lactate efflux from T cells, leading to decreased cytokine production and cytotoxic activity, which hampers the anti-tumor activity of effector T cells and favors tumor growth [31,32,[53][54][55]. Lactate has also been implicated in dysregulation of T cell immunometabolism during chronic inflammatory processes and autoimmune diseases [30,39].
Fig. 5 Contribution of glycolysis and oxidative phosphorylation to the metabolic activity of spleen CD4+ T cells from either uninfected or S. aureus-infected mice upon stimulation with anti-CD3/anti-CD28 antibodies. Metabolic dependence on glycolysis (cyan bars) or oxidative phosphorylation (purple bars) of spleen CD4+ T cells isolated from uninfected (a) or S. aureus-infected (b) mice upon stimulation with anti-CD3/anti-CD28 antibodies. Metabolic dependence was determined as described in the "Materials and methods" section. Each bar shows the mean ± SD of five independent experiments. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001
Regarding the mechanisms underlying the suppressive effect of extracellular lactate on CD4+ T cell responses, some studies have reported that lactic acid causes an acidification of the medium that can impair T cell activation [56][57][58]. For example, Calcinotto et al. [56] reported that acidic pH impaired cytolytic activity and cytokine secretion of T cells after TCR activation, although the mechanisms mediating these effects were not identified in that study. Other studies, however, have provided strong evidence for a pH-independent suppressive effect [29,31]. Thus, treatment of T cells with hydrochloric acid resulted in only half of the suppressive effect on T cell proliferation and cytokine production induced by lactic acid [31]. Furthermore, Quinn et al. [29] reported that lactate impairs T cell responses by inducing reductive stress, independently of extracellular acidification. They showed that, in lactate-rich conditions, export of lactate by activated T cells is blocked, leading to an accumulation of intracellular lactate that impedes recycling of NADH to NAD+ and the continuation of glycolysis [29]. NAD+ plays an important role in glycolysis, as it is required for enzymatic reactions such as those catalyzed by glyceraldehyde 3-phosphate dehydrogenase and 3-phosphoglycerate dehydrogenase [59]. Therefore, a low NAD+/NADH ratio inhibits these reactions and dampens the glycolytic process. For this reason, to maintain active aerobic glycolysis, NAD+ needs to be continuously regenerated from NADH through the conversion of pyruvate to lactate by lactate dehydrogenase [29]. Excretion of lactate is pivotal for this process since this reaction is reversible. The observation that lactate can impair T cell proliferation through a redox shift from NAD+ to NADH led us to question whether the detrimental effect of lactate on CD4+ T cell responses in the spleen of infected mice may be associated with a redox shift from NAD+ to NADH resulting in an altered NAD+/NADH ratio. We found that the NAD+/NADH ratio in activated CD4+ T cells was lower in spleen cells from S. aureus-infected mice than in spleen cells from uninfected animals. We also demonstrated that supplementing cultured spleen cells isolated from infected mice with the NAD+ precursor NR significantly increased the proliferative capacity of CD4+ T cells, further supporting a role for an altered NAD+/NADH ratio in the suppression mechanism.
In summary, the results of our study suggest that release of high concentrations of lactate by MDSC into the local microenvironment suppresses CD4+ T cell activation via blockade of lactate efflux, resulting in altered redox homeostasis and thereby disturbance of CD4+ T cell metabolism. Therefore, therapeutic manipulation of lactate levels or redox metabolism may open new approaches to overcome CD4+ T cell immunosuppression and improve immunity during chronic S. aureus infection. This could be accomplished, for example, by blocking the production of lactate using inhibitors of key enzymes involved in this process, such as lactate dehydrogenase, or by blocking lactate transport using inhibitors of the monocarboxylate lactate transporters. These strategies have been shown to be effective at reducing lactate levels in the tumor environment in preclinical studies [60]. However, these therapeutic strategies still face many challenges and may have unintended adverse consequences due to their off-target effects and the important role of lactate in the maintenance of cellular functions and immune regulation. Alternatively, boosting NAD+ content, for example by supplementation with NAD+ precursors such as nicotinamide mononucleotide, may provide another strategy to ameliorate T cell immunosuppression [61]. However, further studies are needed to determine the optimal dosing and effects of NAD+ supplementation on immune function during infection.
One major limitation of this study is that the experiments were performed with ex vivo isolated spleen cells, which may not accurately reflect the complex interactions and responses that occur in the in vivo system. Furthermore, although MDSC have been extensively studied in mouse models, the role of MDSC in humans is less well-defined. Human and mouse MDSC differ in the phenotypic markers that characterize the specific MDSC subsets as well as in some physiological aspects [62]. However, they also exhibit some similarities, for example in the expression of cell surface markers such as CD11b and in their immunosuppressive effects on T cell activation [62]. Overall, while the role of MDSC in human diseases is still being elucidated, there is growing evidence that these cells play an important role in immune regulation and disease progression. Further research is needed to fully understand the function of human MDSC and develop effective therapies that target these cells.
Acknowledgements
The authors would like to thank S. Beyer for technical assistance.
Author contributions Both authors contributed to the study conception and design. OG performed the experiments. OG and EM analyzed the data and wrote the manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL. This work was supported by internal funding provided by the Helmholtz Centre for Infection Research.
Data availability All data generated or analyzed during this study are included in this article and its supplementary information file. Additionally, data are available from the corresponding author upon request.
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics approval Animal experiments were performed in strict accordance with the German regulations of the Society for Laboratory Animal Science (GV-SOLAS) and the European Health Law of the Federation of Laboratory Animal Science Associations (FELASA). All experiments were approved by the ethical board Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit, Oldenburg, Germany (LAVES; permit N. 33.19-42502-04-19/3307).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 8,006.6 | 2023-07-22T00:00:00.000 | [
"Biology"
] |
Student Perceptions of the Use of Multimedia for Online Course Communication
A great deal of research exists in the use of multimedia communications in online classrooms as a means of furthering student engagement. However, little research exists that examines the perceptions of students when such technologies are used. Additionally, it is unclear whether students are likely to engage in the use of such technologies when available. This research explores the perceptions of 69 students taking both online and hybrid undergraduate project management courses. Specifically, the study seeks to explore how students experienced the use of multimedia by their instructor and classmates in both online announcements and discussions, as well as whether these same students used or would be likely to use multimedia for similar communications. Finally, student perceptions of social presence, the degree to which one is perceived as a real person in computer-mediated communication (Gunawardena, 1995), are examined. The results of the study indicate that while students overwhelmingly enjoy the instructor's use of multimedia communication, they are unlikely to engage in using these technologies themselves. A discussion of these results and recommendations for further research complete this paper.
Introduction
While there are numerous best practices that suggest how instructors should engage with students in online discourse, little is known about students' attitudes and perceptions of these practices. Some best practices include using small discussion groups (Dixson, 2010), rapport and trust building (Ragan, 2007), student-led discussions (Pelz, 2004), promoting constructivist thinking through stimulating questions, brainstorming, and comparing ideas (Muilenburg & Berge, n.d.), and building a warm and inviting learning community by welcoming students, posting personal introductions, and providing lots of encouragement (Ragan, 2007). Results of studies in these areas suggest that students are more satisfied with their online experience when such approaches are implemented. However, faculty still lament that online discussions often lack significant engagement and quality (Morrison, 2012).
Significant research exists in the use of multimedia in online courses that use both synchronous and asynchronous technologies. Computer-mediated technologies in online courses have been available for many years and include videos, web chats, instant messaging, and synchronous classroom environments. However, little is known as to whether students value these tools as a means of engaging with class peers and instructors, or whether these tools help to "humanize" the instructor or peers to students. Even less is known about whether students themselves will choose to use these tools as a means of participating in discussions, thereby increasing engagement.
Literature Review
The review of current research focuses on three factors in online class discussions: best practices, the use of computer-mediated technologies, and the importance of both instructor and social presence.
Online discussion best practices
While there are obvious differences in an online environment versus a face-to-face one, relationship building is key to a successful environment no matter the modality. For instance, research suggests that communication with intention matters (Cerniglia, 2011). Communicating with intention includes how an instructor communicates with the written word. For example, feedback on assignments should vary based on the student's ability (Cerniglia, 2011). Written communication strategies include timeliness, having a student feel valued, and explicitly asking questions of the student in order to encourage a conversation (Cerniglia, 2011).
In addition to how instructors communicate through the written word, a teacher's effectiveness level increases with video communications (Cerniglia, 2011). For example, sometimes writing can be overwhelming for a student to read; a video can create a more engaging environment not only for the student but for the instructor (Cerniglia, 2011). Video feedback can also enhance engagement through more timely and easily understandable feedback (Crook, Maw, Laweson, Drinkwater & Lundgvist, 2012). Supporting this research, Dias and Trumpy (2014) found that providing timely audio and video feedback, either personal or general, enhanced social presence, and students' perception of instructor engagement was higher with these methods than with use of the written word alone.
Finally, discussion boards are an effective tool for learning; however, instructors need creativity in how discussion boards are implemented and used. Not only should discussion boards be open ended in nature, but other considerations include encouraging students to "extend, expand on, question, or challenge ideas" (Cerniglia, 2011, p. 58). Any strategy that allows the expansion of student experiences and stories in the discussion boards deepens the learning and helps to focus the conversation (Cerniglia, 2011). In addition, Sung and Mayer (2012) indicate that discussion boards can be helpful for faculty in creating positive social presence for themselves, and that "social sharing" can build community.
The challenge with discussion boards is balancing how time consuming discussion boards can be for students and instructors against the learning the discussion board is attempting to demonstrate (Goldman, 2011). The success of the online learning environment is highly dependent on the quality implementation of online discussion boards (Maddix, 2012). Unlike a physical classroom, the ability for every student to participate is an advantage of online learning (Maddix, 2012). Discussion guidelines include a focus on design and development of the questions, setting up expectations on responses, and launching and managing the discussion (Goldman, 2011). In giving time and attention to a discussion guideline document, an instructor can implement the best balance between the learning experience of the student, the quality of the discussion and learning, and the workload for all parties (Goldman, 2011).
One element that is critical for the instructor in the discussion board learning environment is the clear expectation of a substantive interaction (Maddix, 2012). Substantive interactions include a focus on three elements: timeliness, effectiveness of writing, and how the student is expressing the knowledge elements necessary in learning the material (Maddix, 2012). Faculty can increase their effectiveness in learning how to ask good questions through using Bloom's Taxonomy, the Socratic Method, showing a different way of looking at a topic by playing devil's advocate, and relating ideas to personal experience (Maddix, 2012).
Essentially, through focusing on the discussion board elements, a learning community is formed (Hilton, 2013; Maddix, 2012). Learning communities are strengthened by how relationships are built in an online environment and the tools available to the student and the faculty member in the learning management system (Hilton, 2013). A faculty member can enhance the ability to encourage different viewpoints by demonstrating contrasting viewpoints in sources of information and demonstrating that all viewpoints are part of the whole and contribute to the full understanding of a topic (Hilton, 2013).
Ultimately, the quality of discussion boards is under scrutiny as a measure of assessing student thinking (Williams, Jaramillo & Pesko, 2015). Research suggests that the ability for students to obtain a higher level of discourse is dependent upon the ability of an instructor to explicitly express expectations on the quality of these interactions (Williams, Jaramillo & Pesko, 2015). These expectations will be reinforced through grading expectations, including commenting on a student's ability to go beyond socializing to convergent and divergent thinking by providing examples of when these levels of thinking are achieved (Williams, Jaramillo & Pesko, 2015). To increase the effectiveness of discussion boards in learning, a higher level of engagement is required by all parties in the learning experience.
Computer-mediated technology
Using the computer to facilitate human communications can have both advantages and disadvantages in online classrooms. Frequently, student engagement is measured in terms of the number of interactions in the classroom (Dixson, 2010). However, the quality of the content, specifically the instructor posts, has been shown to be an equally important factor. While instructor facilitation may help lead the discussions and encourage a deeper connection with the content, students more fully engage with their peers in the discussions (Dixson, 2010). The quality of content seems to be an important part of the student engagement in the online discussions (Canney, 2015; Lowes, Lin & Wang, 2007). In addition, Lowes et al. (2007) confirmed that the quality of the interaction between instructor and student helped further engagement in online discussions. Additional information as well as provocative or probing questions were two examples of techniques that furthered engagement (Canney, 2015). Dixson (2010) indicates that students who were highly engaged with other students in their course were more satisfied with their course experiences. The instructor role was that of facilitator, encouraging a deeper level of discussion. Social presence theory classifies various types of communication along a continuum. Sallnas (2000) defines social presence as the degree of awareness of the other person in any given communication. For example, face-to-face communication has the highest social presence, while written or text-based communication has the least social presence. Social presence in the online classroom includes the extent to which the instructor is perceived as a real person, as opposed to a webmaster. This presents an interesting challenge to online instructors: how to create a social presence online while utilizing mediums that may be limiting. The role of an online instructor is that of a facilitator, organizer, and manager (Cooper & Hendrick-Keefe, 2001). Understanding this is key to understanding the use of multimedia in creating social presence in the online classroom.
Social presence theory and application
In an online classroom, there are eight possible social presence cues identified by Abdullah (1999) and Rourke et al. (2001). These cues include humor, emotions, self-disclosure, support or agreement for an idea, addressing people by name, greetings, complimenting another's idea, and illusions of a physical presence.
• Humor: Use of humor in the online classroom, such as through announcements or emails, can reduce social distance and conveys goodwill (Aragon, 2003).
• Emotions: Showing emotions to students, such as happiness, can add clarity to a message and forge connections (Scollins-Mantha, 2008). Sharing feelings and emotions using emoticons in emails to students, for example, is a way to do this in writing (Tu & McIsaac, 2002).
• Self-disclosure: While instructors may hesitate to share personal information, sharing some personal information can build the online relationship between student and instructor. For example, noting in an email your plans for the weekend ("I am going kayaking, do you have big plans for the weekend?") or posting pictures of the instructor performing his or her favorite activity can heighten social presence (Savery, 2005).
• Support or agreement for an idea: Through online feedback such as discussion boards and allowing students to peer review posts and assignments, the instructor can generate social presence in this manner.
• Greetings and addressing students by name: Rather than simply replying to an email or communication, saying "Hi Lisa" or "Good afternoon, Roger" can create greater social presence online.
• Complimenting: Telling students of a job "well done" or "keep up the good work" in assignment feedback can enhance instructor social presence, and develop confidence and connection in the online classroom (Scollins-Mantha, 2008).
• Illusions of a physical presence: Social presence in this manner (Johnson & Keil, 2002) can be accomplished through synchronous tools such as audio or video recordings, feedback, and lectures. Instructors must understand the isolation felt by students when communication lags (Tu & McIsaac, 2002).
Based on this information, the focus of this paper is an important topic: how are student attitudes and perceptions affected by using multimedia tools? The purpose statement and research will be presented in the forthcoming sections.
Methods
The purpose of this quantitative study was to explore the attitudes and perceptions of students to the use of multimedia in online class discussions and announcements posted by their instructor. The research question guiding this study was "what are student attitudes and perceptions of the use of multimedia tools for announcement and discussion posts in online and hybrid courses?"
Study Design
The intention of this study was to uncover the attitudes and perceptions of students to the use of multimedia, both voice and web camera enabled communication, in online class announcements and discussions. A survey-based approach was used to gather data and simple statistical analyses were performed as a means of exploring these responses. Students in three undergraduate project management classes at a university in central Washington State were the subject of this study. Two classes were fully online and one was offered as a hybrid class. Approval had been obtained from the institutional review board before proceeding with data collection.
Five questions were added to the end of term student course surveys. These questions were intended to gauge the student's review of the multimedia responses posted by the instructor, as well as their own use of such multimedia tools. Finally, students were asked if they felt that the use of multimedia, either voice or web camera, helped them identify with their instructor or classmates more as real individuals. Appendix A contains the questions.
The university where the study took place uses Canvas as the learning management system. Canvas allows the recording of both audio and video as an alternative to text for announcements, discussion responses, and instructor feedback. Both instructors and students may use these technologies without limitation. At the beginning of the course, the instructor encouraged students to participate by engaging in discussions using the multimedia method of their choosing. Instructions were provided to students and regular encouragement was given throughout the course.
Methodology
The study questions were added to the standard end of course student evaluation survey, and students were incentivized to complete the evaluations by earning a small number of extra credit points when the overall class completion percentage reached 80%. The data were obtained from institutional effectiveness and processed through SPSS.
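The analysis itself was performed in SPSS; purely as an illustration of the kind of frequency tabulation reported in the Results section, a minimal Python/pandas sketch is shown below. The column names and response labels are assumptions for illustration only, not the actual format of the institutional survey export.

```python
import pandas as pd

# Hypothetical export of two of the five added survey questions (Likert-style labels).
# Column names and labels are illustrative placeholders.
responses = pd.DataFrame({
    "q1_viewed_recordings": ["Always", "Frequently", "Rarely", "Never", "Frequently"],
    "q3_used_multimedia":   ["Never", "Rarely", "Never", "Never", "Rarely"],
})

# Frequency counts and percentages per question: the simple descriptive
# statistics of the kind summarized in the Results section.
for question in responses.columns:
    counts = responses[question].value_counts()
    percents = (counts / counts.sum() * 100).round(1)
    summary = pd.DataFrame({"n": counts, "percent": percents})
    print(f"\n{question}\n{summary}")
```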
Results
The student response rates for the three classes are listed in Table 1. The results for each question were analyzed cumulatively across the three courses and are as follows: Question 1: During this term, your instructor used multimedia methods of communication, specifically voice recordings and web camera recordings, to communicate announcements and participate in the class discussions. How often did you view or listen to these recordings?
With N=56 students responding to this question, over half the students surveyed (31) reported that they Always or Frequently viewed or listened to multimedia posts. Here, N=56 students responded to this question and more than half (39) admitted that they Never or Rarely used this technology themselves to respond to discussions or announcements. Of the N=56 students that responded to this question, the majority (37) reported that the multimedia recordings were Somewhat Helpful or Definitely Helpful in helping them relate to their instructor and classmates as real people.
Possible Errors
Internal and external validity issues may stem from the classes chosen to study, feelings toward the instructor, and overall student performance. Different results may occur if these were not online or hybrid courses, but fully on-campus courses. In addition, there may be variance in attitudes about school and performance between online and hybrid students. It is questionable whether this study could be generalized over long periods of time, and the classes studied may not be a representation of the general population.
Discussion
Two distinct findings were identified in these results. The first was that while students watched these multimedia posts (31 of 56), found them useful (37 of 55), and enjoyed the experience (39 of 56), they chose not to participate in using multimedia for their own responses, even though instructions and encouragement were provided throughout the course and the technology was readily available within the learning management system. While students responded positively to the experience, they did not themselves engage with these tools. This finding may support the construct of trust as a best practice for discussions (Kelly, 2008). Students who feel uncertain or vulnerable may be unlikely to take risks. Second, while these same students admitted that they did not participate in the use of multimedia tools themselves, they believed that these tools helped them relate more to their instructor and classmates as real people (37 of 56). This was especially interesting as it represents an attitude that students may wish to have a more intimate relationship with their instructor, but on their own terms. Using multimedia can help facilitate community among the students (University Teaching & Learning Center, The George Washington University, n.d.; Ragan, 2007). A mixed methods study by Mandernach (2009) indicated in the quantitative data that there was no significant difference in student engagement or learning when multimedia was used in the online class, yet in the qualitative responses these same students felt more "engaged." This study seems to support our finding; while students value multimedia, there is a reluctance to use these tools themselves. Research performed by Miller (2013) may explain this. While students may not participate in the multimedia, it still gives the illusion of social presence, thereby adding value regardless. "Social presence is the ability of participants to identify with the group, communicate in a trusting environment, and develop social relationships by way of expressing their individuality" (Wilcoxon, 2011, para. 8), lending more importance to the use of multimedia to help establish social presence, both instructor presence and student presence.
So far, the research shows students find the multimedia addressed in this study useful and that it creates greater social presence, but the question remains: why don't they use the multimedia tools provided to them? Some possibilities are lack of comfort with technology, a lack of understanding of how their grade may be impacted by using multimedia for discussion responses, and a tendency to maximize their time and approach class completion in a transactional manner.
First, lack of comfort using the technology, and/or the fact that students may be unsure of how to use the technology, could be a reason why students do not use the multimedia, despite the advantages the student finds with such tools. It is important to note that 14% of all higher education students are taking 100% of their courses online while another 14% take some of their courses online (Allen & Seaman, 2014). In addition, research shows high comfort levels with technology, with 97.8% of students owning a mobile phone. In addition, students who are younger than 20 report frequent engagement with instant messaging, texting, use of social network sites, and downloading or streaming TV and video (Jones, 2012). Those students under 20 are comfortable using many methods of technology, while among students over 35, 78.5% never use social networking sites and other similar forms of technology. It would be expected that, with the high frequency of online course offerings combined with comfort levels in technology, students would not find classroom and learning management system technology a challenge to use. Further, one would expect students would be comfortable using the multimedia options available to them.
Early research by Rodrigeuz, Ooms & Montanez (2008) shows that comfort with technology is not related to student satisfaction in an online course; rather, comfort level is related to the individual student's motivation to learn how to use the technology. This could be a possible reason for not wanting to use the technology: motivation to learn something new. Also, students of all ages may be comfortable using technology from a personal perspective, but not from a classroom perspective, due to lack of motivation rather than comfort levels. In addition, the research study addressed in this paper did not measure demographics, but perhaps a larger share of students in the courses were non-traditional students, less comfortable with technology on the whole, as Jones (2012) research suggests. The authors believe that comfort level with use of the multimedia technology is likely not a factor in students' non-use of the multimedia; instead it may be simple lack of motivation to use it, despite students seeing the benefit of such multimedia technology.
Second, students want clear expectations of how assignments will be graded (Mupinga, Nora, & Yaw, 2006). Additionally, grading discussion responses tends to be more subjective, and it is therefore more difficult to define quality expectations (Beckett, Amaro-Jiménez, & Beckett, 2010). With respect to expectations, students come to online classes with various learning styles and preferences of how they engage with course material (Mupinga, Nora, & Yaw, 2006). These preferences may manifest themselves in active vs. inactive learning or visual vs. auditory preferences. In a study conducted by Mupinga et al., students identified four key needs for support in their online classes: "technical help, flexible and understanding instructors, advanced course information, and sample assignments" (p. 187). It may be possible that students would prefer to hear examples of discussion responses that would meet quality expectations before they commit to trying multimedia for a response. One open-ended response from a student surveyed indicated that "Sometimes it is difficult to understand exactly what an instructor is looking for without being in class . . ." (p. 187). Examples may help fill these gaps.
Beckett, Amaro-Jiménez, and Beckett (2010) found that students may need clear instructions on how to complete the assignment and clear evidence of how the assignment will be graded. As a result, it has been suggested that one way to avoid the subjectivity involved in grading discussion posts is to use rubrics (Robins, 2016). While rubrics may help avoid the subjectivity of grading, Robins suggests that the use of strong rubrics without a strong instructor social presence may lead students to become apathetic, believing that the discussion is simply a burdensome task. Rubrics and instructor social presence, specifically through the mimicking of excellent examples, will help students see more meaningful performance expectations. However, this still may not be enough to encourage students to engage in using social media for discussion responses unless specific performance measures are addressed through assignment instructions or examples. Students may simply lack the confidence in public speaking to believe that they will successfully meet quality performance expectations.
Finally, according to Brilleslyper, Ghrist, Holcomb, Schaubroeck, Warner and Williams (2012), students tend to focus on the points accumulation within a class; thus, they tend not to focus on learning outcomes. It is possible that we design courses for learning, but the points become the overriding goal of the student (Kohn, 1999). This focus on points can often lead to the student that argues over a grade rather than the learning of the objectives.
In addition, in a transactional approach to learning, a student will often only ask questions that are related to deliverables and the requirements of those deliverables, and not demonstrate an inquisitive learning approach in their questions of faculty (Farias, Farias, & Fairfield, 2010). If you hear a student asking about word count, how many pages to write, or whether there is an opportunity for extra credit, then these are transaction-based, grade-concern questions, not learning questions.
An interesting statistic was discovered by Maats and O'Brien (2012), who conducted research on the grade versus learning dilemma. They found that 90% of students wanted a good grade, and only 6% cared about the learning. This highlights the fact that grades may not be the motivator that we think they are for learning. Thus, faculty should find ways to refocus students on learning and connections in the classroom rather than focusing solely on the grade. If faculty can move the needle on learning and natural curiosity, then student behavior can move from a transactional process to a transformational process.
Additional research might further address the reasons students don't use technology and seek student perceptions. Additionally, a larger population of students, multiple instructors, and a diverse selection of courses is recommended to generalize this study.
Table 1. Number of participants by class and modality
Table 2. Responses to Question 1
Question 2: If you listened to or viewed these recordings, did you find them useful? With N=55 students responding to this question, again over half of those responding (37) indicated that the multimedia was useful. Table 3 contains the responses.
Table 4 contains the responses.
Table 4. Responses to Question 3
Question 4: If you participated using voice recording or web camera recordings, did you enjoy the experience? While 19 students admitted that they did not participate in discussions, those that did Somewhat Enjoyed or Enjoyed the experience (30). Table 5 contains the responses.
Table 5. Responses to Question 4
Question 5: If you used multimedia tools, either by listening to or viewing the recordings or by recording responses yourself, did you feel the experience helped you relate to your faculty or fellow classmates as real people?
Table 6. Responses to Question 5
Table 6 contains the responses. | 5,468.4 | 2017-09-01T00:00:00.000 | [
"Computer Science",
"Education"
] |
Hot-Pressed Transparent PLZT Ceramics from Low Cost Chemical Processing
Lanthanum-modified lead zirconate titanate (PLZT) ceramics were obtained with high transmittance in the visible range by a combination of an inexpensive chemical processing and hot pressing. Optical, microstructural, pyroelectric, ferroelectric and dielectric properties characterized in this study attested the applicability of the employed method in the production of PLZT transparent ferroelectric ceramics. In fact, the corresponding analyzed physical parameters are in very good agreement with those obtained in samples traditionally prepared by other methods. Furthermore, due to high sample quality, a phenomenological analysis of the PLZT 10/65/35 relaxor features was performed in these ceramics.
Introduction
Transparent ferroelectric ceramics (TFC) were produced by Haertling and Land in 1969 at the Sandia National Laboratories after about ten years of extensive work based on lead zirconate titanate (PZT) 1. The composition was Pb0.92La0.08(Zr0.65Ti0.35)0.98O3, or simply PLZT 8/65/35. The incorporation of aliovalent lanthanum into the lattice enhanced the densification rates of the PZT ceramic bodies, leading to pore-free homogeneous microstructures. In 1971, a detailed work on the preparation and characterization of PLZT transparent ceramics, obtained by the conventional mixed oxides technique and a hot-pressing densification stage, was published 2. Only one year later, Haertling and Land reported a novel powder processing of PLZT by chemical routes 3. This process was based on co-precipitation of alkoxides in the presence of PbO and provided high chemical and optical uniformity of the hot-pressed PLZT slugs, which started to be produced on a commercial scale. To date, many studies have reported routes to obtain TFC with better characterization and cost benefits, resulting in a list of different processing techniques and hundreds of compositions (most of them with lead and lanthanum elements). However, hot-pressed PLZT ceramics from co-precipitated powders have remained among the most utilized ceramics for electro-optic devices 4.
Menegazzo and Eiras 5 have developed an alternative two-stage calcination process to obtain high chemical homogeneity in lead zirconate titanate ceramics. They developed a chemical method in which all constituents are dissolved and co-precipitated from the same source solution. PZT powders were obtained from the mixed oxides method and pre-calcined before dissolution. Thus, precipitation was carried out by aqueous ammonium hydroxide addition, followed by rinsing and drying of the slurry. A second calcination stage completed the powder crystallization. Differential thermal analysis and X-ray diffraction patterns showed that the PZT phase is formed at temperatures as low as 823 K. Recently, Menegazzo, Garcia and Eiras 6 showed that this method, under optimized preparation conditions, can be applied to lanthanum-modified lead zirconate titanate ceramics, yielding fine single-phase powders.
In this work, the goal is the production of transparent PLZT ferroelectric ceramics combining the chemical method developed by Menegazzo and Eiras with uniaxial hot-pressing. However, the focus is on the microstructural, optical, pyroelectric, ferroelectric and dielectric characterizations of the synthesized PLZT ceramic bodies, in order to discuss their physical features and potential quality for applications.
Experimental
The ceramic powders were prepared following the experimental route proposed in Refs. 5, 6, which can be basically described as follows. Firstly, Pb0.90La0.10(Zr0.65Ti0.35)0.9975O3 powders (hereafter PLZT 10/65/35) were produced by the conventional mixed oxides method, being submitted to the first calcination stage at 1173 K/3 h. Then, the calcined powder was dissolved in a solution of nitric acid under controlled pH at 343 K. After that, precipitation was promoted by addition of ammonium hydroxide to the solution up to pH ~9-10. The slurry, after rinsing, filtering and vacuum drying, was submitted to the second calcination stage at 1173 K/3 h. Thus, cold-pressed ceramic pellets were prepared from the ball-milled powders. Prior to milling, a 2 wt% excess of PbO was added to compensate for further losses by volatilization.
Green disc-shaped samples, 20 mm in diameter, were densified for 4 h at 1523 K under uniaxial pressing of 5 MPa in an alumina die. A partial O2 atmosphere was kept during the heating and cooling. Density measurements, by the Archimedes method, revealed an apparent density of ~7.9 g/cm3, showing that hot-pressed slugs reached densities close to 100% of the expected theoretical values. Optically polished samples presented a yellow-orange color and high transparency. Examples of PLZT 10/65/35 samples obtained in this work are presented in Fig. 1.
X-ray diffraction (XRD) analysis was performed on the ceramic (crushed) powder, using a Rigaku diffractometer with rotating anode, CuKα radiation, and 2θ from 10° to 80°, at room temperature. Scanning electron microscopy (Jeol, model JSM 5800LV) was carried out for the microstructure observation of the polished and thermally etched surface.
Gold electrodes were sputtered on the polished disc faces for pyroelectric, ferroelectric and dielectric measurements. The pyroelectric current was measured with a Keithley 619 electrometer at a rate of 6 K/min. Before the pyroelectric measurement, the PLZT sample was heated up to 440 K and allowed to cool down to low temperatures in the presence of an electric field of 10 kV/cm. Prior to measurement and after the electric field removal, the sample was short-circuited for 15 min to avoid space charge build-up. Hysteresis loop measurements were performed employing a Sawyer-Tower bridge in a temperature range from 240 K to 350 K. A triangular electric field of 10 kV/cm and 1.0 Hz was applied to the sample. This frequency of the electric field ensured that the sample temperature was kept constant, avoiding a possible self-heating that could drastically change the ferroelectric properties 7. Dielectric characterizations were performed as a function of frequency (from 100 Hz to 1 MHz), employing an HP 4194A Impedance Gain Phase Analyzer. The amplitude of the probe oscillating electric field was 5 V/cm for all measuring frequencies. For temperature measurements, the samples were kept in a cryogenic system (APD Cryogenics Inc.) that can be operated from 450 K to 20 K, with a precision of ±0.1 K over the whole covered temperature range. Bearing in mind the influence of aging effects on the physical properties of PLZT ceramics, the data were collected using samples aged for one day, following a procedure employed by Kutnjak et al. 8 in similar studies.
A Cary 4G spectrophotometer was used to measure the relative transmittance in the region from the ultraviolet to the near infrared, at room temperature. The scanning rate was 250 nm/min.
Results and Discussions
The X-ray diffraction pattern of the hot-pressed PLZT 10/65/35 ceramic, which may be observed in Fig. 2, reveals a single-phase material with pseudo-cubic perovskite structure. The lattice parameter is a = 4.16 Å, in good agreement with the reported values for the same composition [1][2][3][4]. Figure 3 shows a SEM micrograph of the hot-pressed PLZT 10/65/35 ceramic. As can be observed, the high densification achieved in this material is confirmed by its non-porous microstructure with straight grain boundaries. In fact, no segregated or liquid phases are noticeable even at higher magnifications (around 15000x). This fact might indicate that the PbO excess added to the precursor powder was practically consumed. A microstructure such as that obtained here justifies the high transparency observed. For 0.9 mm thick samples, transmittance in the visible range is around 60% (without excluding multiple reflection losses, which are ~28%), as presented in Fig. 4.
The pyroelectric current Ip and the remanent polarization Pr are shown in Fig. 5. The Ip peak arises at 249 K, where the remanent polarization presents an inflection point. In fact, at temperatures a few degrees above Tp (the temperature of the pyroelectric current peak) the remanent polarization reaches values approximately close to zero. This result also agrees with others previously reported for PLZT 10/65/35 ceramics 8. The sharpness of the pyroelectric current curve clearly indicates the single-phase nature of this sample. Nevertheless, the Ip vs. T curve does not reach zero values until 320 K. This effect may be related to some spurious conductivity, certainly originating from space charge building up during the heating.
Figure 6 shows the temperature dependence of the hysteresis loops for the PLZT 10/65/35 sample. The results show that, with increasing temperature, the coercive field and the remanent polarization gradually decrease. At 350 K the hysteretic behavior disappears completely and the P vs. E curve becomes a so-called slim loop, as commonly observed for ferroelectric relaxor materials 9. The coercive field and remanent polarization reach their highest values at 240 K (Ec = 3.5 kV/cm, Pr = 4.4 µC/cm2). The hysteresis loops observed in our measurements present a slightly rounded shape that is attributed to conduction mechanism effects. The origin of this conduction comes either from point defects introduced in the lattice or from a PbO layer at the grain boundary. Nevertheless, the latter could not be observed in the SEM micrographs, as mentioned above.
The temperature dependence of the polarization for the static case (pyroelectric current measurement) is different from that of the dynamic case (P vs. E measurement) due to the different measurement regimes. At temperatures higher than 240 K, the remanent polarization gradually decreases in the hysteresis loops, while an abrupt change is observed in the pyroelectric measurement. In the pyroelectric measurement a static electric field was applied during cooling, while in the hysteresis measurement an ac (time dependent) field was applied at fixed temperatures. Therefore, in the former, the sample ferroelectric state is reached and, on heating, a consequent depolarization is observed. In the latter, the polar ferroelectric clusters (ferroelectric domains) can be reoriented under field driving. In fact, the square-to-slim-loop transition of the hysteresis curves during heating (Fig. 6) can be associated with ferroelectric cluster interactions controlling the kinetics of the polarization reversals and the consequent freezing process 10,11 (see discussion below).
The dielectric characterization of the hot-pressed PLZT 10/65/35 ceramic, as a function of temperature and frequency, is shown in Fig. 7. Lower values for the dielectric constant at room temperature (ε'RT(1kHz) = 2879) and higher values for the maximum dielectric constant temperature (Tm(1kHz) = 339.6 K) in comparison with those obtained in other studies are observed. This fact might be a consequence of a certain compositional deviation from the batch stoichiometry formula. From the results, there may be a concentration of La lower than 10 mol% or a Zr/Ti ratio lower than 65/35. Zirconium oxide segregation at the grain boundary can be preferential over titanium oxide when PbO losses occur in lead-based perovskites 12,13. Since no segregated phases were observed in the SEM and XRD analyses, Zr or even La precipitation can probably be rejected. Lin and Chang 14 reported the formation of the phase La2O3.4PbO at the PLZT grain boundary after quenching procedures. They proposed that a concentration gradient of lanthanum could arise within the grain during the hot-pressing. Higher concentrations of La would be found closer to the grain boundary and could react with the PbO excess when thermal treatments are applied. This lanthanum concentration gradient could explain our results (on the dielectric parameters), since a lower effective La content in the whole grain is assumed. Qualitative energy dispersive X-ray spectrometry (EDS) analysis showed fluctuations in the La concentration along the grain. However, those variations were within the equipment resolution, making the EDS analysis inconclusive. Further scanning transmission microscopy analysis might aid in this study.
As can be observed, the PLZT ceramic shows a typical relaxor dielectric behavior (Fig. 7). When the measurement frequency increases, the maximum dielectric constant, ε'm, decreases, while the temperature of the maximum, Tm, increases. On the other hand, for the dielectric losses, when the frequency increases, tan δ increases, as does its temperature of maximum, T'm. The frequency dependence of the dielectric constant can provide direct information on the dynamic processes occurring in ferroelectric relaxors. In this way, a Vogel-Fulcher analysis 11 of the frequency dependence of the dielectric constant maximum temperature (Tm) was performed for the PLZT 10/65/35 ceramic (inset in Fig. 7). The attempt frequency ν0 = 1.48 x 10 15 Hz, the average activation energy U = Ea/kB = (2512 ± 100) K and the freezing temperature Tf = (251 ± 3) K were obtained by fitting the 1/Tm vs. ln ν curve with the Vogel-Fulcher expression ν = ν0 exp[-U/(Tm - Tf)]. Independently of the analyzed frequency, the maximum dielectric constant and dielectric loss temperatures are not coincident with the freezing temperature (Tf), which is the temperature where the ergodicity is effectively broken. However, Tf coincides with the pyroelectric current peak temperature (Tp = 249 K). This is a typical feature of relaxor ferroelectric materials, where the ergodic state (in which the long-range interaction between the ferroelectric clusters is practically absent) is effectively broken at Tf, and not at Tm or T'm. In fact, the ergodicity is broken due to the divergence of the longest relaxation time in the vicinity of 251 K, i.e., the temperature where the ferroelectric state, with long-range interactions, can also be induced by applying sufficiently high electric fields, as in pyroelectric or dielectric measurements with high applied bias electric fields 15,16.
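As an illustration of this fitting procedure, a minimal Vogel-Fulcher fit with SciPy is sketched below. The (ν, Tm) pairs are placeholders chosen to be consistent with the reported best-fit parameters (ν0 ≈ 1.48 × 10^15 Hz, U ≈ 2512 K, Tf ≈ 251 K), not the actual measured data from Fig. 7.

```python
import numpy as np
from scipy.optimize import curve_fit

# Vogel-Fulcher relation: nu = nu0 * exp(-U / (Tm - Tf)), with U = Ea/kB in kelvin.
# Rearranged for fitting Tm as a function of ln(nu): Tm = Tf + U / (ln(nu0) - ln(nu)).
def vogel_fulcher_tm(ln_nu, ln_nu0, U, Tf):
    return Tf + U / (ln_nu0 - ln_nu)

# Placeholder (frequency, Tm) pairs, consistent with the reported parameters;
# the real values would come from the dielectric spectra.
nu = np.array([1e2, 1e3, 1e4, 1e5, 1e6])             # Hz
Tm = np.array([333.8, 340.6, 348.7, 358.3, 370.0])   # K

popt, _ = curve_fit(vogel_fulcher_tm, np.log(nu), Tm,
                    p0=(np.log(1e15), 2500.0, 250.0))
ln_nu0, U, Tf = popt
print(f"nu0 = {np.exp(ln_nu0):.3e} Hz, U = Ea/kB = {U:.0f} K, Tf = {Tf:.1f} K")
```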
Conclusions
Hot-pressed transparent lanthanum-modified lead zirconate titanate (PLZT) ceramics were obtained from powders prepared by a low-cost chemical method. The results clearly indicate that this powder processing is very efficient for obtaining transparent PLZT ceramics. Moreover, the analyzed physical properties showed values very similar to those reported in the literature. The high-quality samples obtained with this alternative chemical route permitted a phenomenological study of the PLZT relaxor features. Indeed, the Vogel-Fulcher analysis of the dielectric data indicated the ergodicity breakdown at Tf, which is a typical relaxor characteristic.
"Materials Science"
] |
Structural health monitoring of aircraft through prediction of delamination using machine learning
Background Structural health monitoring (SHM) is a regular procedure of monitoring and recognizing changes in the material and geometric qualities of aircraft structures, bridges, buildings, and so on. The structural health of an airplane is especially important in aerospace manufacturing and design. Inadequate structural health monitoring causes catastrophic breakdowns, and the resulting damage is costly. There is a need for an automated SHM technique that monitors and reports structural health effectively. The dataset utilized in our suggested study achieved a 0.95 R2 score earlier. Methods The suggested work employs support vector machine (SVM) + extra tree + gradient boost + AdaBoost + decision tree approaches in an effort to improve performance in the delamination prediction process in aircraft construction. Results The stacking ensemble method outperformed all the individual techniques, with 0.975 R2 and 0.023 RMSE for the old coupon and 0.928 R2 and 0.053 RMSE for the new coupon, showing an increase in R2 and a decrease in root mean square error (RMSE).
INTRODUCTION
The structure of the aircraft is made up of composite materials because of their well-known properties, such as excellent resistance to fatigue, high strength, low weight, high modulus, and stiffness. Carbon composite materials are widely used for manufacturing aircraft structures (Yue & Aliabadi, 2020). However, the composite materials in the structure are damaged due to aging, fatigue, dynamic load, and cyclic load. Structural health monitoring (SHM) plays a vital role in identifying these damages. Inadequate SHM leads to catastrophic failures, and the damage caused by catastrophic failure is costly (Xu et al., 2017). The factors to be considered for SHM are strain pattern, fiber failure, matrix cracking, delamination, and skin stiffener (Larrosa, Lonkar & Chang, 2014). This work concentrates on delamination.
2. The delamination size was calculated from given X-ray images of multiple composite coupons and considered as ground truth.
3. An ensemble regression technique is used with five base-level models to predict the size of delamination. Toyama et al. (2005) presented the variation in stiffness of carbon fiber reinforced polymer (CFRP) laminates using guided Lamb waves. The quantitative damage of laminates was calculated by in situ quantification of the wave velocity. It provides better localization accuracy than other conventional techniques. Johnson & Chang (2001) introduced a two-part verification to find the stiffness and strength of composite laminates. The first part represents the characterization of matrix cracks, which helps with damage progression. The second part calculated the amount of damage. The proposed technique has been implemented using the computer code PDcell. Saxena et al. (2011) experimented on how delamination influences the velocity of guided Lamb waves. The density of matrix cracks along a particular path and delamination were identified using a local regression technique in Su, Ye & Lu (2006). Larrosa, Lonkar & Chang (2014) and Larrosa et al. (2011) classified and predicted damage. These researchers clearly indicate the effective use of ML algorithms to classify the data generated from the piezoelectric actuators on the surface of composite materials. Nevertheless, there is no clear method to calculate the delamination size, which is the objective of this work.
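As a sketch of the kind of stacking ensemble named in the abstract (SVM, extra trees, gradient boosting, AdaBoost, and decision tree as base-level regressors), a minimal scikit-learn example is given below. The meta-learner, the hyperparameters, the train/test split, and the synthetic data are assumptions; the paper does not specify them in the text reproduced here.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, ExtraTreesRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# X: sensor-derived features (loading cycle, interrogation frequency, PSD, ToF);
# y: delamination size from the X-ray ground truth. Random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([0.5, 0.2, -0.3, 0.8]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Five base-level regressors, stacked with a simple Ridge meta-learner
# (the meta-learner choice is an assumption).
base_learners = [
    ("svm", SVR()),
    ("extra_trees", ExtraTreesRegressor(n_estimators=100, random_state=0)),
    ("grad_boost", GradientBoostingRegressor(random_state=0)),
    ("ada_boost", AdaBoostRegressor(random_state=0)),
    ("decision_tree", DecisionTreeRegressor(random_state=0)),
]
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge())
stack.fit(X_train, y_train)

pred = stack.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```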
LITERATURE REVIEW
ML and deep learning techniques are also used for infrastructure health monitoring.Lei et al. (2017) proposed multi-task architecture and ensembleDetNet technique to detect and classify infrastructure damage.This technique improved 5% accuracy than other state-ofart detection and classification technique.Liu et al. (2017) represented faster R-CNN technique with RestNet101 architecture to detect and measure external damage in historic masonry buildings.This proposed methodology identified spalling and efflorescence damage with 0.950 and 0.999 respectively.ASTM International (2000a) used ultrasonic becons instead of GPS in unmanned aerial vehicles (UAV) and CNN for damage identification.This method processed video data collected by UAV and produced 91.9% sensitivity and 97.7% specificity.
Currently ML algorithms are used to analyse the relationship among the features available in a data set is used to predict the damage (ASTM International, 2000b).The delamination prediction problem is formulated as regression problem.However, much work not been carried out on expansion of delamination using ML method.To address this problem Liu et al. (2017) experimented to find the length of the path around the delamination instead of calculating the delamination area (Huston, 1994) by ML methods.This technique provided the solution to the overfitting problem in modelling phase.Though it provides the results in acceptable range, exact calculation of delamination size is remained unsolved.However, the prediction rate of delamination needs to be improved.This work focuses on this.
NASA performed experiments of fatigue aging on CFRP using following ASTM standards D3039 (Rohit, Chandrawat & Rajeswari, 2021) and D3497 (Sikka et al., 2022).The test was done by using Torayca T700G.These materials are used in aircraft and sports goods which needs high property of composite materials.In composite materials, weight of the surface is 600 g/m 2 , fabric thickness is 0.90 mm, density is 1.80 g/cm 3 and tensile strength is 4.900 MPa and it is called as coupon.Finally, it is fabricated and divided into 10-inch length and 6-inch-wide piece is presented in Fig. 1 (Liu et al., 2021).
Huston described the effects of fatigue in unidirectional carbon fiber reinforced epoxy using residual stiffness and strength models (Ma et al., 2017), and the results were compared with those of Chiachio et al. (2013). The authors ran the fatigue cycling test in increments of 50,000 cycles and then collected piezoelectric transducer (PZT) sensor data for 36 sensing paths at seven interrogation frequencies. The outcomes of the fatigue test are (1) collection of data on actuator-to-sensor malfunction, (2) quantification of delamination size, and (3) analysis of the variation among coupons. All of these outcomes are taken into account in this work. The sample coupon in Fig. 1 has six actuators and six sensors; Lamb waves are emitted from the actuators and sensed by the sensors. X-ray images are used to calculate the delamination area, and a notch with necking geometry was used to initiate the delamination at a point.
To the best of my knowledge, existing research uses ML techniques to forecast delamination in aircraft structures. The proposed method predicts the size of the delamination using an ensemble algorithm that combines several ML approaches. Furthermore, the computation of the delamination size from the X-ray images is automated.
Dataset description
The dataset used in this article was downloaded from NASA Ames Research Center. It is a CFRP materials dataset, and it clearly indicates that the size of delamination is directly proportional to the loading cycle, which was measured against fatigue cycling. To improve the reliability of the experimental data, the measurement was repeated a number of times. Figure 2 shows X-ray images of a composite coupon at 150,000 and 100,000 loading cycles, respectively.
Lamb waves are propagated along the coupon surface to interrogate the delamination. Waves that travel through the delaminated area change in strength by the time they reach the sensor. The delamination size increases with the number of loading cycles, and as the delamination grows, the signal strength transmitted along the delamination path decreases. Changes in the spectral amplitude in the time and frequency domains indicate delamination on the surface of the coupon. To calculate the delamination size, the sensor signal features loading cycle, interrogation frequency, power spectral density (PSD), and time of flight (ToF) are considered in this article. These features are the ones most commonly used by researchers to characterize material properties (Larrosa, Lonkar & Chang, 2014; Toyama et al., 2005). Figure 3 shows the raw sensor and actuator signals for the CFRP coupon data. Loading cycle: to obtain the sensor signal under various loads, a fatigue test is performed on the composite coupon and the output is recorded every 10,000 cycles; the actuator and sensor signals for the various loading cycles are provided in the NASA dataset. Interrogation frequency: the Fast Fourier Transform (FFT) is used to decompose the actuator and sensor signals into their frequency spectra; the input frequency associated with the highest amplitude is taken as the interrogation frequency.
Power spectral density: the PSD at various frequencies is calculated from the FFT of the time-domain signal. The peak PSD value decreases as the delamination size increases, because the strength of the received signal is reduced by wave scattering in the delaminated area.
Time of flight: The time difference between the actuator signal peak and sensor signal peak is ToF.
Figure 4 shows the associations between the features in the dataset as scatter plots, based on the correlation matrix computed with the correlation coefficient shown in Eq. (1):

$$x_{mn} = \frac{\sum_{i=1}^{N}\left(m_{i}-\bar{m}\right)\left(n_{i}-\bar{n}\right)}{\sqrt{\sum_{i=1}^{N}\left(m_{i}-\bar{m}\right)^{2}}\,\sqrt{\sum_{i=1}^{N}\left(n_{i}-\bar{n}\right)^{2}}} \qquad (1)$$

Here $x_{mn}$ is the correlation coefficient, $m$ and $n$ are the random variables, and $\bar{m}$ and $\bar{n}$ are the means of $m$ and $n$. The scatter of each sensor signal feature against every other feature is plotted on the left and bottom axes, and the diagonal shows the density plot of each feature.
Figure 5 shows the correlation between each pair of features. None of the correlation values exceeds 0.8, which indicates that the features are not strongly correlated with each other, so all of the features are retained for further processing. The figure also shows a negative correlation between cycle and PSD.
MATLAB is used to process the raw data provided by NASA and extract the specified features. To calculate the ground truth (i.e., the delamination size), the Area property of MATLAB's regionprops function is applied to the X-ray images containing delamination. The final dataset has 150,949 data points with six columns: cycle, load, frequency, PSD, ToF, and the ground truth.
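As a rough illustration of this ground-truth step, the sketch below computes a delamination area from a segmented X-ray image with scikit-image's regionprops, analogous to MATLAB's Area property; the file name, the Otsu thresholding step, and the pixel size are assumptions, not details taken from the study.

```python
# Hypothetical sketch: estimating delamination area from an X-ray image,
# analogous to MATLAB's regionprops 'Area' property.
from skimage import io, filters, measure

def delamination_area(image_path: str, pixel_size_mm: float = 1.0) -> float:
    """Return the total delaminated area from an X-ray image (pixel or mm^2 units)."""
    img = io.imread(image_path, as_gray=True)
    # Assumption: the delaminated region is brighter than the background;
    # Otsu thresholding is one plausible segmentation, not the study's exact method.
    mask = img > filters.threshold_otsu(img)
    labels = measure.label(mask)
    total_pixels = sum(region.area for region in measure.regionprops(labels))
    return total_pixels * pixel_size_mm ** 2
```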
Delamination size prediction using machine learning
In this work, sensor signal features acquired from the composite coupon are used to predict the delamination size. A deterministic mapping is established through regression analysis, which predicts the target value n from the independent variables m_x. Figure 6 shows the workflow of the prediction technique. The four sensor features form the input vector m_x = [m_1, m_2, m_3, m_4] = [cycle, frequency, PSD, ToF], and the delamination size is used as the ground truth n.
In recent years, ML algorithms have been widely used for delamination size regression and have provided strong results (Liu et al., 2021). Consequently, this work implements regression models based on the support vector machine (SVM), extra trees, gradient boosting, AdaBoost, and decision tree, and finally a stacking ensemble technique is used to improve the prediction accuracy.
Support vector machine: the SVM is a basic and widely used prediction technique. Owing to its scalability, it is well suited to small datasets, and with an appropriate loss function it can be applied to regression problems. In this work, support vector regression (SVR) with an 'rbf' kernel is used. The degree of the polynomial kernel is set to 3, the kernel coefficient for 'rbf' is set to 'scale' (i.e., gamma = 1 / (n_features × X.var())), the stopping tolerance is set to 1e-3 by default, and the kernel cache size is set to 200 MB. The SVM regression technique is presented in Algorithm 1 (Rätsch, Onoda & Müller, 2001).
In the above algorithm k represents kernel, g represents gamma, c_size represents cache_size and m_it represents maximum iteration.
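A minimal scikit-learn sketch of the SVR configuration described above; the parameter values follow the text, and everything not stated there is a default assumption.

```python
from sklearn.svm import SVR

svr = SVR(
    kernel="rbf",      # k: radial basis function kernel
    degree=3,          # polynomial degree (unused by 'rbf', kept for completeness)
    gamma="scale",     # g: 1 / (n_features * X.var())
    tol=1e-3,          # stopping-criterion tolerance
    cache_size=200,    # c_size: kernel cache in MB
    max_iter=-1,       # m_it: no hard iteration limit (assumption)
)
# svr.fit(X_train, y_train); y_pred = svr.predict(X_test)
```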
Extra tree model: the extra trees model contains a number of prediction trees built from different subsets of the training data (Soni, Arora & Rajeswari, 2020). Each tree acts as an independent predictor, and the average of the individual trees' outputs gives the final regression. The extra trees regression technique is presented in Algorithm 2. Increasing the number of prediction trees generally improves performance. In this work, the forest contains 100 prediction trees, the mean squared error criterion is used, the minimum number of samples required at a leaf node is 1, and the minimum number of samples required to split an internal node is 2. In the above algorithm, n_est represents n_estimators, c represents criterion, m_s_s represents min_samples_split, m_s_l represents min_samples_leaf, and m_ft represents max_features.
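A minimal sketch of the extra trees regressor with the stated settings (scikit-learn); max_features is an assumption, as the text does not give its value.

```python
from sklearn.ensemble import ExtraTreesRegressor

extra_tree = ExtraTreesRegressor(
    n_estimators=100,            # n_est: number of trees in the forest
    criterion="squared_error",   # c: mean squared error split criterion
    min_samples_split=2,         # m_s_s: samples needed to split an internal node
    min_samples_leaf=1,          # m_s_l: samples required at a leaf node
    max_features=1.0,            # m_ft: assumption (value not stated in the text)
)
```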
Gradient boosting model: gradient boosting builds an additive model in a forward stagewise fashion. It allows the optimization of any differentiable loss function: at every stage a prediction tree is fit on the negative gradient of the specified loss function (Rajeswari et al., 2022). Gradient boosting therefore produces a regression model in the form of an ensemble of weak regressors. The gradient boosting regression technique is presented in Algorithm 3. The squared error loss function is used for regression.
The contribution of each prediction tree is shrunk by the learning rate, which was set to 0.1. Increasing the number of boosting stages generally improves performance; it was set to 100.
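A minimal sketch of the gradient boosting regressor with the stated loss, learning rate, and number of boosting stages (scikit-learn); remaining parameters are default assumptions.

```python
from sklearn.ensemble import GradientBoostingRegressor

grad_boost = GradientBoostingRegressor(
    loss="squared_error",   # squared-error loss for regression
    learning_rate=0.1,      # shrinks each tree's contribution
    n_estimators=100,       # number of boosting stages
)
```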
AdaBoost model: an AdaBoost regressor is a meta-estimator. It begins by fitting a regressor on the original dataset and then fits additional copies of the regressor on the same dataset, with the instance weights adjusted according to the error of the current prediction (Agyemang et al., 2022). In essence, the successive predictors concentrate on the hard instances. The AdaBoost regression technique is presented in Algorithm 4. The maximum number of estimators at which boosting is stopped is set to 50. The weight applied to each predictor at every boosting epoch is called the learning rate; a higher learning rate increases the contribution of each predictor, and here it is set to 1. After every boosting epoch, the instance weights are updated by the loss function; the linear loss function is used.
In the above algorithm, n_est represents n_estimators and learn_r represents learning_rate. Decision tree model: the decision tree is a non-parametric supervised learning technique. The aim is to build a model that predicts the value of a target variable by learning simple decision rules inferred from the data features (Wang et al., 2019). A tree can be viewed as a piecewise constant approximation: the decision tree learns from the data to approximate the target with a set of if-then-else decision rules. The decision tree regression technique is presented in Algorithm 5. The function used to measure the quality of a split is known as the criterion; the squared error criterion is used here. The minimum number of samples required to split an internal node is set to 2, and the minimum number of samples required at a leaf node is set to 1.
In the above algorithm, c represents criterion, m_s_s represents min_samples_split, and m_s_l represents min_samples_leaf.
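A minimal sketch of the AdaBoost and decision tree regressors with the settings listed above (scikit-learn); anything not stated in the text is left at its default.

```python
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

ada_boost = AdaBoostRegressor(
    n_estimators=50,     # n_est: maximum number of boosting stages
    learning_rate=1.0,   # learn_r: weight applied to each regressor per epoch
    loss="linear",       # linear loss used to reweight instances after each epoch
)

decision_tree = DecisionTreeRegressor(
    criterion="squared_error",  # c: split-quality measure
    min_samples_split=2,        # m_s_s
    min_samples_leaf=1,         # m_s_l
)
```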
Ensemble model: an ensemble technique combines several base prediction models so that the resulting regression accuracy is better than that of any individual learning model. This differs from the notion of an ensemble in statistical mechanics, which is usually infinite; an ML ensemble consists of a finite but flexible set of alternative models (Kang & Cha, 2018).
This work uses a stacking ensemble, a general ensemble learning framework in which the outputs of the separate regressors are combined by a final predictor to compute the end prediction. Stacking exploits the strength of each individual predictor by using its output as input to the final predictor. The base regressors used for the ensemble technique in this work are the SVM, extra trees, gradient boosting, AdaBoost, and decision tree models. The base regressors are implemented separately, as presented in Algorithm 6.
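A minimal sketch of the stacking ensemble built from the five base regressors (scikit-learn); the choice of meta-learner is an assumption, since the text does not name the final estimator.

```python
from sklearn.ensemble import (AdaBoostRegressor, ExtraTreesRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

base_models = [
    ("svm", SVR(kernel="rbf")),
    ("extra_tree", ExtraTreesRegressor(n_estimators=100)),
    ("grad_boost", GradientBoostingRegressor(n_estimators=100, learning_rate=0.1)),
    ("ada_boost", AdaBoostRegressor(n_estimators=50, learning_rate=1.0)),
    ("decision_tree", DecisionTreeRegressor()),
]

# The linear-regression meta-learner is an assumption, not a detail from the study.
stack = StackingRegressor(estimators=base_models, final_estimator=LinearRegression())
```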
RESULTS
To demonstrate the efficiency of the proposed work, raw sensor data collected from one composite coupon are considered. The database was constructed with the features needed for predicting delamination: ToF, cycle, frequency, PSD, and the ground truth, i.e., the delamination size. The final dataset has 150,949 data points, of which 75% are used to train the models and the remaining 25% are used for testing. First, the SVM regression technique was implemented with the RBF kernel, but the prediction results were not satisfactory. Hence, further regression techniques, namely extra trees, gradient boosting, AdaBoost, and decision tree, were used to predict the delamination size. Finally, the stacking ensemble technique was used to combine the above regression techniques. MATLAB is used to process the raw sensor data before building the dataset; the composite coupon is instrumented with six piezoelectric actuators and six piezoelectric transducer (PZT) sensors to collect the raw data. The machine learning algorithms are run in Python with scikit-learn on an i5 processor with 8 GB of RAM under Windows 10.
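A minimal sketch of the 75/25 split and evaluation described above; the file name and column names are assumptions used only to make the example self-contained, and 'stack' is the ensemble sketched earlier.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical file/column names; the real dataset has 150,949 rows with these six quantities.
df = pd.read_csv("cfrp_coupon_features.csv")
X = df[["cycle", "frequency", "PSD", "ToF"]].values
y = df["delamination_size"].values

# 75% of the data points train the model; the remaining 25% are held out for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

stack.fit(X_train, y_train)
y_pred = stack.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```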
Model estimation:
The efficiency of each machine learning model used for delamination prediction is estimated using the root mean square error (RMSE) and the coefficient of determination (R²). The formulas are as follows:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(n_{i}-\hat{n}_{i}\right)^{2}}, \qquad R^{2}=1-\frac{\sum_{i=1}^{N}\left(n_{i}-\hat{n}_{i}\right)^{2}}{\sum_{i=1}^{N}\left(n_{i}-\bar{n}\right)^{2}}$$

where $n_{i}$ is the measured delamination size, $\hat{n}_{i}$ is the predicted value, $\bar{n}$ is the mean of the measured values, and $N$ is the number of test samples. RMSE is an absolute measure of fit, and a lower RMSE indicates a better fit; R² is a relative measure of fit that ranges from 0 to 1, and a higher R² indicates a better model. Figure 7 illustrates the R² and RMSE values obtained when combining two ML techniques. Ensembling is the combination of two or more approaches to enhance the outcome of the SHM procedure. Ensembles are effective at preventing overfitting, improving generalization, and handling noisy or inconsistent data. They provide a robust solution across a wide range of datasets, as different models may excel in different regions of the feature space. Furthermore, ensemble approaches are less sensitive to hyperparameter tuning and outliers, making them more durable and adaptable to a variety of real-world circumstances. Overall, the diversity and aggregation of numerous models within an ensemble framework result in more robust, accurate, and reliable predictions in machine learning applications. This stage evaluates the efficiency of combining two strategies: combining gradient boosting with the decision tree surpasses all other pairs in terms of maximizing R² and minimizing RMSE.
Figure 8 illustrates the effectiveness of combining three strategies. Comparing Figs. 7 and 8 shows that gradient boosting combined with other approaches delivers superior results compared with the other combinations. At the same time, combining three ML methods does not outperform combining two ML methods, which demonstrates that adding more ML approaches does not always improve the results: the outcome of a given combination depends on the characteristics of the dataset.
Figure 9 depicts the results of combining four methods. Comparing Figs. 8 and 9 (three versus four combined techniques) demonstrates that simply integrating more ML algorithms does not produce optimal results for all datasets. Before applying ensembling techniques, the data should therefore be examined thoroughly and the appropriate combination of ML algorithms chosen.
After analysing all of the test data, the combination of five ML approaches was formed into the final ensembling approach. The combined ML methods (SVM, extra trees, AdaBoost, gradient boosting, and decision tree) outperform the individual ML methods as well as the combinations of two, three, and four ML methods. The evidence is provided in Table 1.
Table 1 presents the evaluation results for each individual model and for the stacking (ensemble) model. The results show that the ensemble model outperforms all the single models, with the lowest RMSE and the highest R² value. Table 2 presents the evaluation results for each model and for the stacking ensemble model on a new composite coupon that was not used in training. The new composite coupon is made of different materials and is used to check the generalization performance of the ensembling technique.
Several sources of error affect the prediction accuracy: the calculation of the delamination area (ground truth) in MATLAB, the orientation of the sensed signal, external noise affecting the sensed signal, and the limited amount of data. There were also some technical difficulties in constructing the dataset, which may introduce additional error into the delamination size prediction.
The ensembling technique is also tested on the new coupon, and the R² and RMSE performance metrics are analysed. The comparison between the new coupon and the old coupon is displayed in Fig. 10. Even though the prediction accuracy is lower than for the old coupon, the ensemble model still outperforms all the single models, with the lowest RMSE and the highest R² value.
DISCUSSION
The experimental assessment demonstrates an efficient technique for delamination prediction using machine learning models. In this work, the ensemble technique produces better accuracy and a lower error rate, because it combines the strengths of each regression technique and therefore performs better than any individual technique.
The SVM, in particular, determines its parameters for a given choice of kernel and kernel parameters; it shifts the risk of overfitting from parameter estimation to model selection, although kernel methods can still overfit the model-selection criterion. A decision tree cannot evaluate all possible attribute combinations, so it may generalize poorly to unseen data; it concentrates on reducing errors by distinguishing between correctly and incorrectly predicted data. Extra trees are typically effective at reducing variance; nevertheless, because of their tendency to overfit, they are prone to sampling errors, and when the test set differs significantly from the training set the extra trees model does not fit well. Overfitting is also possible with the boosting approaches (gradient boosting and AdaBoost), and no limit is placed on the maximum number of regression trees.
Each regression technique therefore has its own advantages and disadvantages. Accordingly, the stacking ensemble technique draws on each regression technique's advantages to balance their disadvantages, performing as well as or better than the best individual technique in terms of prediction accuracy. The main strength of the stacking ensemble model is that it considers each separate regression technique, exploits its advantages, and thereby produces better accuracy for the given dataset.
CONCLUSIONS
The primary aim of this research is to find a suitable ML algorithm to predict the delamination size in aircraft structures. The work presented in this article focuses on constructing a damage assessment technique for structural health monitoring of aircraft, where the assessment mainly aims to characterize the growth of delamination in composite materials. This work presents an innovative approach to identify the damaged area through delamination size prediction with machine learning models. Five machine learning techniques combined in a stacking ensemble approach were used to estimate the size of delamination in a composite coupon. Analysing the results produced by SVM, extra trees, AdaBoost, gradient boosting, decision tree, and the stacking ensemble technique, the stacking ensemble method outperformed all the individual techniques, with an R² of 0.975 and an RMSE of 0.023 for the old coupon and an R² of 0.928 and an RMSE of 0.053 for the new coupon, i.e., a higher R² and a lower root mean square error (RMSE).
Only the features frequency, cycle, ToF, and PSD are considered in this article; adding more features may further improve performance. In addition to delamination, skin-stiffener damage, matrix cracking, strain patterns, and fiber failure also need to be considered when monitoring the structural health of aircraft. These aspects will be addressed in future work.
Figure 10
Figure 10 Comparison of the R² and RMSE values for the old and new coupons. Full-size DOI: 10.7717/peerj-cs.1955/fig-10
Table 1
Model evaluation. | 5,012 | 2024-03-27T00:00:00.000 | [
"Engineering",
"Materials Science",
"Computer Science"
] |
Performance Analysis of Conventional Machine Learning Algorithms for Diabetic Sensorimotor Polyneuropathy Severity Classification
Background: Diabetic peripheral neuropathy (DSPN), a major form of diabetic neuropathy, is a complication that arises in long-term diabetic patients. Even though the application of machine learning (ML) in disease diagnosis is a very common and well-established field of research, its application in diabetic peripheral neuropathy (DSPN) diagnosis using composite scoring techniques like Michigan Neuropathy Screening Instrumentation (MNSI), is very limited in the existing literature. Method: In this study, the MNSI data were collected from the Epidemiology of Diabetes Interventions and Complications (EDIC) clinical trials. Two different datasets with different MNSI variable combinations based on the results from the eXtreme Gradient Boosting feature ranking technique were used to analyze the performance of eight different conventional ML algorithms. Results: The random forest (RF) classifier outperformed other ML models for both datasets. However, all ML models showed almost perfect reliability based on Kappa statistics and a high correlation between the predicted output and actual class of the EDIC patients when all six MNSI variables were considered as inputs. Conclusions: This study suggests that the RF algorithm-based classifier using all MNSI variables can help to predict the DSPN severity which will help to enhance the medical facilities for diabetic patients.
Introduction
Diabetes mellitus (DM) is one of the fastest-rising health concerns of the 21st century. The number of patients affected by DM worldwide has increased from 151 million in 2000 to 463 million in 2019, over just 20 years [1]. The International Diabetes Federation (IDF) estimated that, globally, approximately 700 million people will be affected by diabetes by 2045 [1]. DM is a common yet costly metabolic disease that leads to serious damage to different organs of the body under long-term uncontrolled blood glucose levels [2][3][4][5]. Among all the complications that arise due to DM, diabetic sensorimotor polyneuropathy (DSPN) is a very common form of neuropathy caused by diabetes. It affects limb nerves, especially in the lower limb. Globally, 40 to 60 million people with diabetes suffer from lower limb complications due to DSPN [1]. Long-term DSPN leads to ulceration and amputations, significantly increasing the chance of early death and reducing quality of life. Globally, one lower limb amputation occurs every 30 s due to DSPN [6]. Hence, early identification of DSPN is needed to provide proper treatment and prevent life-threatening conditions. Much research is being conducted emphasizing the automation of the corneal confocal microscopy (CCM) system using ML for a more accurate, reliable, and reproducible diagnosis of DSPN [25][26][27]. However, as CCM uses corneal images for identifying DSPN, it requires specialized personnel and equipment, which makes it difficult to provide in regular healthcare facilities. In the initial stage, composite scores (NDS, MNSI, etc.) are widely used for screening DSPN signs and symptoms [12]. Intelligent systems using these DSPN scoring techniques can be a potential solution to the uniformity and agreement problems in DSPN severity grading, owing to their reliable, accurate, and reproducible diagnosis. In the literature, a few intelligent systems, such as the fuzzy inference system (FIS) [28][29][30], the multicategory support vector machine (SVM) [31], and the adaptive neuro-fuzzy inference system (ANFIS) [32], have been reported to use composite scoring methods for stratification of DSPN severity. The reported DSPN classifiers using fuzzy systems are not reliable, because an FIS relies on an if-then rule base set by experts based on human experience. Kazemi et al. [31] developed a multiclass SVM-based DSPN severity classifier using the NDS; however, their reported accuracy was only 76%. Fahmida et al. [32] developed an ANFIS system for DSPN severity classification using the MNSI with an accuracy of 91%; however, they considered only three MNSI variables (questionnaire, vibration perception, and tactile sensitivity) to design their model. The MNSI has been recommended in the position statement of the ADA for the clinical diagnosis of DSPN [11]. The MNSI is very simple, inexpensive, and can be administered by any healthcare professional treating diabetic patients. The reliability and accuracy of the MNSI have been discussed elsewhere [10,33]. Therefore, this research proposes an ML-based DSPN severity classifier using the MNSI.
In literature, conventional ML algorithms such as support vector machines (SVM) [34], k-nearest neighbor (KNN) [35], random forest (RF) [36], and artificial neural network (ANN) [37] are being used in different diseases diagnosis problems. Although the application of ML in disease diagnosis is a very common and well-established field of research, the application of ML in DSPN diagnosis using composite scoring techniques like MNSI is very limited in the existing literature. More studies are required to understand the performance of different ML techniques in DSPN diagnosis and stratification. In this research, we aim to observe the performance of eight different conventional ML algorithms: support vector machine (SVM), k-nearest neighbor (KNN), random forest (RF), discriminant analysis classifier (DAC), ensemble classifier (EA), naive Bayes (NB), linear regression (LR), and artificial neural network (ANN) for severity classification of DSPN using MNSI. A descriptive statistical analysis will be performed to evaluate the performance of these algorithms in DSPN severity classification. We aim to classify the DSPN patients into four severity classes: absent, mild, moderate, and severe classes with a good classification accuracy using different conventional ML.
The novelty of this research work is the implementation and performance analysis of different conventional ML-based intelligent classifiers that can classify DSPN severity levels using MNSI scores. This study will benefit DSPN patients as well as diabetic patients through accurate, reliable, and early identification and stratification of DSPN and will help them receive early treatment to prevent severe complications such as ulceration and amputation. As the classifiers use the MNSI, they can be applied during regular checkups in ordinary healthcare centers. This study will also investigate the effect of the MNSI variables on DSPN severity classification using feature ranking, and it will identify the best-performing ML algorithms with different MNSI variable combinations for the stratification of DSPN. To date, the identification and stratification of DSPN have been based on offline analysis by experts. This study can support health professionals in accurate, reliable, and real-time decision-making. In addition, the problems caused by the lack of uniformity and agreement in severity grading by different experts can be addressed using an ML-based intelligent DSPN severity classifier. Therefore, this research aims to analyze the performance of different conventional ML-based intelligent classifiers for the screening and stratification of DSPN severity and to find the best-performing classifier and MNSI variables.
Data Acquisition
In this research, data were collected from the Epidemiology of Diabetes Interventions and Complications (EDIC) clinical trials, which have been conducted by the National Institute of Diabetes and Digestive and Kidney Diseases (Bethesda, Maryland, USA) to annually assess DSPN among type 1 diabetic patients since 1994 [38,39]. This clinical trial is still ongoing and initially enrolled 1375 patients. Eight EDIC years of MNSI data were collected, with 10,543 samples in total. The MNSI is used to screen for DSPN annually among the enrolled participants in the EDIC trials.
Data Imputation
The MNSI dataset had a total of 10,543 samples, including 363 blank entries. After removing the blank entries, 10,180 samples remained, some of which still had missing values for individual MNSI variables. The k-nearest neighbor [35] data imputation technique was used to fill in the missing data.
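A minimal sketch of the KNN imputation step using scikit-learn's KNNImputer; the file name and the number of neighbours are assumptions, since the text does not state them.

```python
import pandas as pd
from sklearn.impute import KNNImputer

mnsi = pd.read_csv("edic_mnsi.csv")        # hypothetical file holding the MNSI variables
mnsi = mnsi.dropna(how="all")               # drop fully blank entries (363 in the study)

imputer = KNNImputer(n_neighbors=5)         # n_neighbors is an assumption
mnsi_imputed = pd.DataFrame(imputer.fit_transform(mnsi), columns=mnsi.columns)
```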
Data Augmentation
The imputed EDIC dataset with 10,180 samples was imbalanced. Duplicate data were removed from the dataset, keeping only the first occurrence of each combination. The Synthetic Minority Oversampling Technique (SMOTE) [40] was then used to balance the dataset without overfitting the data. In-house code written in Python 3.7 was used for data imputation and augmentation.
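A minimal sketch of the duplicate removal and SMOTE balancing steps using pandas and imbalanced-learn; the "severity" column name is an assumption, and mnsi_imputed is the table from the imputation sketch above.

```python
from imblearn.over_sampling import SMOTE

mnsi_unique = mnsi_imputed.drop_duplicates(keep="first")   # keep the first combination only
X = mnsi_unique.drop(columns=["severity"])                  # column name is an assumption
y = mnsi_unique["severity"]

# Oversample minority classes so every severity class has the same number of samples.
X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
```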
DSPN Severity Scoring for MNSI
There are two parts to the MNSI [10] scoring system. The first part is a questionnaire consisting of 15 yes/no questions related to the patient's symptoms. The second part consists of five clinical examinations: the appearance of the foot (AF), ulceration (Ulc), ankle reflexes (AR), vibration perception (VP), and tactile sensitivity (TS). The detailed scoring mechanism is described in [10]. In this study, a total of six MNSI variables (questionnaire, AF, Ulc, AR, VP, and TS) were used. The preprocessed MNSI dataset was graded using the scoring technique proposed by Watari et al. [30]. The score ranges from 0 to 10, and the severity classes are defined as follows: (i) x ≤ 2.5: absent neuropathy; (ii) 2.5 < x < 5.0: mild neuropathy; (iii) 5.0 ≤ x < 8.0: moderate neuropathy; (iv) x ≥ 8.0: severe neuropathy.
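A minimal sketch of the severity grading described above, mapping a 0-10 score to the four classes using the thresholds from Watari et al. [30].

```python
def dspn_severity(score: float) -> int:
    """Map an MNSI-based score (0-10) to a severity class: 0 absent, 1 mild, 2 moderate, 3 severe."""
    if score <= 2.5:
        return 0   # absent neuropathy
    elif score < 5.0:
        return 1   # mild neuropathy (2.5 < x < 5.0)
    elif score < 8.0:
        return 2   # moderate neuropathy (5.0 <= x < 8.0)
    return 3       # severe neuropathy (x >= 8.0)
```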
Feature Ranking
The eXtreme Gradient Boosting (XGBoost) [41] algorithm-based feature ranking model was developed to observe the effect of the MNSI variables on DSPN diagnosis. XGBoost is a decision-tree-based ensemble machine learning algorithm that can quantify the contribution of different features to a prediction model. The preprocessed dataset was used to rank the MNSI features according to their impact on DSPN identification. The design of the XGBoost model has been discussed in our previous study [32].
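A minimal sketch of an XGBoost-based feature ranking of the MNSI variables; the hyperparameters are assumptions, since the original model is described in the authors' previous study [32], and X_balanced/y_balanced come from the preprocessing sketch above.

```python
from xgboost import XGBClassifier

# Hyperparameter values are assumptions, not those of the original study.
xgb = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
xgb.fit(X_balanced, y_balanced)

# Rank the MNSI variables by their importance index.
importance = dict(zip(X_balanced.columns, xgb.feature_importances_))
for name, score in sorted(importance.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```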
ML Model Development Using MNSI Data
This study focuses on the performance analysis of different conventional ML algorithm-based DSPN severity classifiers using MNSI variables. Here we considered eight conventional supervised ML algorithms: support vector machine (SVM), k-nearest neighbor (KNN), random forest (RF), discriminant analysis classifier (DAC), ensemble classifier (EA), naive Bayes (NB), linear regression (LR), and artificial neural network (ANN). All the ML models were designed using MATLAB R2020a (The MathWorks, Inc., Natick, MA, USA) with two different input combinations from the MNSI variables and the DSPN severity level as output. Stratified 10-fold cross-validation was used to train and test the designed ML models, and their performance was evaluated using confusion matrices and different performance parameters. A multiclass SVM model was considered in this work; the KNN model was designed with 20 nearest neighbors; the RF model was designed with 100 bagged decision trees; and the ANN was designed with 100 hidden layers and trained for 100 epochs in each fold.
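The study itself implemented the classifiers in MATLAB; as a rough Python analogue, the sketch below runs the RF classifier with stratified 10-fold cross-validation using the settings stated above, with everything else left at scikit-learn defaults.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rf = RandomForestClassifier(n_estimators=100)          # 100 bagged decision trees
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scores = cross_val_score(rf, X_balanced, y_balanced, cv=cv, scoring="accuracy")
print(f"Accuracy: {scores.mean() * 100:.2f} ± {scores.std() * 100:.2f} %")
```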
Statistical Analysis
For Statistical analysis SPSS software (version 21.0; SPSS Inc., Chicago, Illinois, IL, USA) was used. All the statistical analyses for baseline characteristics of the EDIC patients were performed based on the DSPN and Non-DSPN groups and expressed as mean ± standard deviation (SD). Analysis of variance (ANOVA) was used to find out the statistical significance of the variables. An independent t-test was used to find out the 95% confidence intervals (95% CI). Pearson's correlation coefficient was used to find out the correlation between different variables with DSPN classes. For the performance analysis of the ML models, ANOVA was used to find the statistical significance, Cohen's kappa statistic [42] was used to find the reliability of the performance of the ML models, and Matthews Correlation Coefficient [43] was used to find the correlations between the observed and predicted classifications. Statistical significance was considered at p < 0.05.
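A minimal sketch of the agreement and correlation metrics named above using scikit-learn; it reuses the RF classifier and cross-validation splitter from the previous sketch, which is an assumption about how the per-fold predictions were gathered.

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score, matthews_corrcoef

# Predictions collected across the 10 folds of the RF classifier sketched above.
y_pred = cross_val_predict(rf, X_balanced, y_balanced, cv=cv)

kappa = cohen_kappa_score(y_balanced, y_pred)   # reliability (agreement) of the predictions
mcc = matthews_corrcoef(y_balanced, y_pred)     # correlation between observed and predicted classes
print(f"kappa = {kappa:.2f}, MCC = {mcc:.2f}")
```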
MNSI Dataset
The EDIC patients' baseline demographic variables were examined to understand the characteristics of the patients and are shown in Table 1. The EDIC patients' average age in the first year was 35.93 ± 6.945 years (657 male, 598 female), and the mean diabetes duration was 14.56 ± 4.906 years. From the first year of the EDIC trials we retrieved data for 957 non-neuropathic patients and 298 neuropathic patients, a total of 1255 patients. Across the 8 years of the EDIC dataset there were 8819 absent, 1075 mild, 245 moderate, and 40 severe samples. After processing the EDIC dataset with the data imputation and augmentation techniques, the final dataset was prepared with 1200 samples per class. As in our previous study [32], we examined the importance index of all MNSI variables using the XGBoost model. From Figure 1 we can observe that the questionnaire has an importance index of 0.35, the clinical tests are ranked as VP, TS, AR, and AF with importance indices between 0.10 and 0.20, and Ulc has the lowest index [32]. Two datasets were prepared based on the feature ranking results. The first dataset (dataset-1) consists of the top three MNSI variables from the feature ranking (i.e., questionnaire, VP, and TS) as input variables for training the ML models; a study by Watari et al. [30] also used these three variables to classify DSPN patients' severity using a fuzzy system. In the second dataset (dataset-2), all six MNSI variables (questionnaire, AF, Ulc, AR, VP, TS) were considered as inputs to train the ML models. Therefore, dataset-1 consisted of three inputs (the VP and TS scores, each ranging from 0 to 2, and the questionnaire score, ranging from 0 to 13) and one output, the DSPN severity level (0: absent, 1: mild, 2: moderate, 3: severe), while dataset-2 consists of six inputs (vibration perception, tactile sensitivity, ankle reflexes, appearance of the feet, and ulceration, each ranging from 0 to 2, plus the questionnaire score ranging from 0 to 13) and the same output, the DSPN severity level (0, 1, 2, 3).
Performance Evaluation of ML Models
Two datasets were used to train the eight conventional ML models (RF, SVM, EA, KNN, DAC, NB, LR, and ANN) as DSPN severity classifiers, so in total 16 models were trained. For the classification models, 10-fold stratified cross-validation was used; in each case, 9 folds were used as training data and 1 fold, with 120 samples per class, as test data. Tables 2 and 3 show the performance evaluation of the ML-based DSPN severity classifiers under 10-fold cross-validation using dataset-1 and dataset-2, respectively. Figures S1 and S2 (Supplementary Materials) show the confusion matrices for all the ML classifiers using dataset-1 (Table 2) and dataset-2 (Table 3). For dataset-1, RF has better accuracy (91.87 ± 1.42), sensitivity (91.8 ± 1.66), and specificity (97.23 ± 0.55) than the other ML algorithms, with ANN and SVM showing the second-best performance. All three exhibit high correlation coefficients and substantial reliability based on the kappa value, and all the ML classifiers' outputs showed a statistically significant relationship with the test set results. For dataset-2, RF has better accuracy (98.50 ± 0.74), sensitivity (98.58 ± 1.67), and specificity (99.50 ± 0.24) than the other ML algorithms, with SVM and EA showing the second-best performance. However, ANN showed poor performance for dataset-2 with 10-fold cross-validation and had a high standard deviation in its performance parameters. The three best models (RF, SVM, and EA) exhibit high correlation coefficients and near-perfect reliability based on the kappa values, and all the ML classifiers' outputs showed a statistically significant relationship with the test set results for dataset-2. From Tables 2 and 3, it is clear that all the ML classifiers' performance improved with dataset-2 in comparison with dataset-1.
Discussion
Diabetic peripheral neuropathy (DSPN) is one of the major length-dependent complications of diabetes mellitus (DM). Since the 1900s, researchers have been trying to establish a standardized diagnosis method for DSPN. To date, the diagnosis and severity stratification of DSPN require manual grading by specialized experts, which is subjective and varies between screening methods. According to one study [7], less than one-third of physicians are able to identify the signs of DSPN, resulting in misleading diagnoses and contributing to high rates of morbidity and mortality. Although a variety of screening and diagnostic methods are available for DSPN, most of them require expensive equipment and specialized personnel to analyze the test results; some of the methods are invasive and painful, some are not reproducible, and some have contradictory outcomes due to a lack of standardization in the diagnostic measures. Moreover, among health professionals there is a lack of understanding of the full diagnosis, management, and treatment process for DSPN [8]. Therefore, for early identification and stratification of DSPN severity, a simple, cost-effective, reproducible, and accurate diagnosis method is necessary, which can be applied globally and can address the lack of understanding of DSPN among health professionals.
Nowadays machine learning approaches are being researched in different aspects of healthcare systems due to their advantage of flexibility, cost-effectiveness, self-learning capacity, and being able to work as a second helping system for health professionals with accurate and reliable performance. Intelligent healthcare systems are capable of providing better patient satisfaction, helps health professionals with accurate, reliable, and real-time diagnosis, thus improving the healthcare facilities for DM patients. Intelligent systems using ML algorithms have now been widely researched for different biomedical systems and special importance is given to its application for disease diagnosis and minimization of health risks [21][22][23][24][44][45][46][47][48]. Alike other life-threatening diseases, DSPN has also caught the researchers' attention for the development of an artificial intelligence-based diagnosing system for DSPN [25][26][27][28][29][30][31][32]48].
In the literature, detection and stratification of DSPN severity have been reported using FIS, ANFIS, SVM, and ANN algorithms [28][29][30][31][32]48]. DSPN exhibits non-linear characteristics and progresses differently in every patient. As an FIS is developed using an if-then rule base, there is a chance of human error and a reliance on expert knowledge in characterizing the non-linear DSPN characteristics, so the performance can be biased. Duckstein et al. [28] used electrophysiological examination for diabetic neuropathy classification with a fuzzy inference system. Picon et al. [29] used four input variables, including symptom assessment and sign examination from the MNSI, diabetes duration, and HbA1c, and proposed a fuzzy inference system based on expert knowledge. Watari et al. [30] used the same fuzzy model to classify DSPN into four classes and considered only three MNSI parameters, namely symptom assessment, vibration perception, and tactile sensitivity, as the model inputs. However, as fuzzy systems work with if-then rules, professional training is required to set the rules, which can vary with the subjective evaluation of healthcare professionals and is therefore prone to human error. Kazemi et al. [31] developed a multicategory SVM model for DSPN severity classification based on the NDS; however, the performance of the model is not reliable, with a reported accuracy of only 76%. In the study [32], an ANFIS-based DSPN severity classifier was designed using the same three MNSI variables proposed in [30]. That study reported an accuracy of 91% using the three MNSI variables, whereas in our study we observed that the results improved significantly when all the MNSI variables were considered in designing the ML models. In another study [48], an ANN model was developed for the diagnosis of DSPN using NCS, but no severity classification was studied.
This research aims to develop different conventional ML-based DSPN severity classifiers for accurate and reliable stratification of DSPN severity. Here, eight conventional ML models (SVM, KNN, RF, EA, NB, DAC, ANN, and LR) were trained to classify DSPN patients into four severity groups: absent, mild, moderate, and severe. In this study, we considered only conventional machine learning models for developing the severity classifiers. Deep learning models were not studied here, as they are mainly used in complex classification and regression problems where the data have high dimensionality and complex features [49]. Since we intend to develop a simple and cost-effective DSPN severity classifier, using deep models could introduce higher costs due to their computational complexity [50].
As DSPN exhibits non-linear characteristics, the data used to train the models play a crucial role, and the performance of the ML models depends on how well the data reflect the real situation. Therefore, for better model accuracy, we considered a database from the EDIC trial, a large and ongoing clinical trial that uses the MNSI to follow up the enrolled patients' DSPN condition annually [10,38,39]. Because the models were trained with a real dataset, they can accurately learn the non-linear characteristics of DSPN. As the MNSI variables are semi-quantitative or non-quantitative tests, the approach can be easily deployed in any regular healthcare facility. Since the EDIC trials cover a wide range of demographic variables from 29 different clinical centers, this dataset is realistic for observing the different classes of DSPN severity across a variety of populations.
Two datasets were used to train the ML models. For both datasets, the RF model performed better than the other ML models used in this study. For models trained on dataset-1 (the top three MNSI variables from feature ranking), the performance of the ML models can be ranked as RF > EA > ANN > SVM > KNN > NB > DAC > LR. All the ML models using dataset-1 showed substantial reliability, with kappa values between 0.66 and 0.78 [42], indicating that the inputs used in dataset-1 are only moderately accurate for identifying DSPN severity. The performance analysis of the different ML models shows that three variables alone are not enough to accurately identify DSPN severity, even though these variables received high importance indices in the feature ranking. For dataset-2, where all the MNSI variables were considered, all the ML models exhibited very good performance except the ANN. The ANN's performance did not improve much after using all the MNSI variables and had a higher standard deviation, indicating that in some of the cross-validation folds the ANN was not able to train properly and performed poorly. For dataset-2, the ML models' performance can be ranked as RF > SVM > EA > KNN > NB > DAC > LR > ANN. Moreover, using all six MNSI variables to train the ML models, the kappa values were between 0.89 and 0.98, which indicates that the models are in near-perfect agreement [42] with the data and that the variables used in dataset-2 are highly accurate for identifying DSPN with ML models. The classes predicted by the ML models and the true classes have a higher correlation for dataset-2 than for dataset-1. From this study, we recommend that all six MNSI variables be considered in DSPN severity grading to achieve higher model accuracy.
According to the International Diabetes Federation [1], in 2019 almost 463 million people were affected by diabetes, and about 50% of them suffer from DSPN. Approximately USD 760 billion is spent on diabetes care, and the health expenditure for diabetic patients increases with severity [1,51]. Enhancing patients' awareness of DSPN as well as the performance of the diagnostic methods will help to improve healthcare for diabetic patients. As almost 50% of DM patients are affected by DSPN at some point during the course of their diabetes, global expenditure could be significantly reduced if an improved, cost-effective, accurate, and reliable diagnosis method were deployed that supports real-time DSPN severity identification, allows early detection and treatment of diabetic neuropathy, and prevents severe complications such as foot ulceration and amputation. ML algorithm-based DSPN severity classifiers are capable of providing all these benefits to DM patients. They can also overcome the shortcomings of the available conventional diagnosis methods, which rely on offline analysis by healthcare professionals and therefore lead to delayed and sometimes biased DSPN diagnoses. The analysis results showed that the RF models outperform the other ML models when all MNSI variables are used for DSPN severity classification. This RF-based DSPN severity classifier can be used as a support system for healthcare professionals for more accurate, reliable, and faster identification and stratification of DSPN. A limitation of this study is that it was conducted using the EDIC dataset, which recruited only type 1 diabetic patients; the effect of DSPN in type 2 patients and its severity classification using the MNSI still need to be studied. Nerve conduction studies (NCS) are considered the gold standard for DSPN identification and stratification. In the future, we aim to use NCS and other DSPN risk factors together with the MNSI for severity identification and stratification using ML models. A prediction system could also be incorporated with the RF-based DSPN classifier so that health professionals can predict patients' future condition from their previous and present conditions; this would help to identify high-risk individuals ahead of time so that proper treatment can be provided to avoid extreme outcomes.
Conclusions
DSPN is one of the most common forms of diabetic neuropathy (DN) and almost 90% of the DN patients suffer from it. Diagnosis of DSPN is complicated because of contradictory and subjective diagnosis techniques.
Although many diagnostic and composite scoring techniques have been reported and many studies are being conducted to validate these systems, they still lack consistency and are sensitive to population size. Machine learning techniques can be a good solution to overcome this issue. The application of ML in different areas of the biomedical sector has shown a promising impact in improving performance over the usual methods. In this research, we studied the performance of different conventional ML techniques (RF, SVM, KNN, EA, NB, DAC, ANN, LR) in the diagnosis and stratification of DSPN. We used the MNSI composite scoring technique for DSPN diagnosis and examined the importance of the MNSI variables for DSPN identification and stratification. From this analysis, we found that the random forest algorithm trained with all MNSI variables works best for DSPN stratification. Therefore, a random forest-based MNSI scoring technique can help healthcare professionals to identify DSPN patients and grade their severity. This type of system can overcome the problems of inconsistency and lack of agreement among professionals regarding the diagnostic criteria for DSPN.
"Computer Science",
"Medicine"
] |
Silica-based microencapsulation used in topical dermatologic applications
Microencapsulation has received extensive attention because of its various applications. Since its inception in the 1940s, this technology has been used across several areas, including the chemical, food, and pharmaceutical industries. Over-the-counter skin products often contain ingredients that readily and unevenly degrade upon contact with the skin. Enclosing these substances within a silica shell can enhance their stability and better regulate their delivery onto and into the skin. Silica microencapsulation uses silica as the matrix material into which ingredients can be embedded to form microcapsules. The FDA recognizes amorphous silica as a safe inorganic excipient and recently approved two new topical therapies for the treatment of rosacea and acne. The first approved formulation uses a novel silica-based controlled vehicle delivery technology to improve the stability of two active ingredients that are normally not able to be used in the same formulation due to potential instability and drug degradation. The formulation contains 3.0% benzoyl peroxide (BPO) and 0.1% tretinoin topical cream to treat acne vulgaris in adults and pediatric patients. The second formulation contains silica microencapsulated 5.0% BPO topical cream to treat inflammatory rosacea lesions in adults. Both formulations use the same amorphous silica sol–gel microencapsulation technology to improve formulation stability and skin compatibility parameters.
Introduction
The successful delivery of topical drugs and/or excipients can be influenced by various internal and external factors, including drug concentration [1], drug potency [1], oxidation [2], UV light exposure [3], physicochemical factors (drug molecule weight, lipophilicity, pH, size) [4], and the interaction and compatibility with other ingredients (i.e., combination products containing tretinoin and benzoyl peroxide) [5].Encapsulating these ingredients in silica can improve their stability and control their release onto and into the skin [6,7].
Since its introduction in the 1940s, microencapsulation has received widespread attention because of its diverse capabilities.Microencapsulation is the process of entrapping a microsized active ingredient particle (core material) within a shell (shell/wall material) [8].Microencapsulation technologies are commonly used to provide cosmetically elegant [9] and nontoxic methods to protect, direct, and control the release of active ingredients [10], leading to improved stability, efficacy, and patient adherence [11].Microencapsulation can also enhance the sensory properties of cosmetics, giving the product a more elegant look and feel [12].In addition to protecting and stabilizing bioactive compounds [13], microencapsulation allows manufacturers to minimize medication doses while maintaining efficacy and reducing adverse effects [14].
The two most common microencapsulation techniques are chemical and physical, which can be further categorized into physicochemical and physicomechanical subtypes [15].Silica-based microcapsules can be formed using the sol-gel process.In this process, amorphous silica is formed by interconnecting colloidal particles (the "sol") under increasing viscosity until a rigid network, the silica shell (the "gel"), is formed [5].Tetraalkoxysilanes undergo hydrolysis and polycondensation reactions to form amorphous silica [16].This method results in sol-gel microcapsules with several valuable properties.The microcapsules range from 0.01 to 100 µm [17].The solid form of the active ingredient [e.g., benzoyl peroxide (BPO) or tretinoin (all-trans retinoic acid)] functions as the core during the sol-gel reaction, and a silica shell forms around it [18].
Amorphous silica is listed in the inactive ingredient guide of the United States Food and Drug Administration (FDA) [17].Excipients or inactive ingredients are a crucial component of drug formulations that are added intentionally to aid in the manufacturing process or to enhance the performance, stability, bioavailability, or acceptability of the topical drug product [19].These substances are generally inert or nonreactive and include emollients, emulsifiers, gelling agents, surfactants, preservatives, buffering agents, or solvents [20,21].Excipients are considered safe and typically do not directly interfere with the therapeutic action of the drug [19,22].The use of excipients is essential in the formulation and delivery of pharmaceutical products, as they can impact the pharmacologic properties of the drug [22], as well as its appearance, color, odor, sensory properties, and shelf-life stability [23].In this case, amorphous silica has no direct effect on the treatment of disease and exerts no effect on any structure or function of the human body [24].In addition, silica is compatible with various active pharmaceutical ingredients (APIs), making it particularly useful in drug development [17].
Sol-gel microencapsulation has been successfully adapted to produce microencapsulated BPO and tretinoin.Scanning electron microscopy (SEM) and cryo-SEM images of sol-gel encapsulated all-trans-retinoic acid (E-ATRA) microcapsules indicate particle diameters ranging from 5 to 30 µm and a shell thickness of < 100 nm (Fig. 1) [25].SEM images of sol-gel encapsulated benzoyl peroxide (E-BPO) microcapsules show particle sizes of < 30 µm, with the majority smaller than 10 µm.Shell thicknesses of the microcapsules in cryo-SEM images range from 250 to 750 nm [26].These active ingredients are released from their microcapsules over time.
Applications of silicon and its derivatives
Silicon (Si) is the second most copious element on earth after oxygen [27].It is a metalloid signifying that it has both metal and nonmetal properties [28].Si rarely occurs in its pure form and is mainly combined with oxygen (O), halogens, aluminum-forming crystalline silica (SiO 2 , quartz), amorphous silica (opal), and silicates (talc, asbestos, and mica) [29].Silica, also known as silicon dioxide, is a silicic acid anhydride of monomeric orthosilicic acid (H 4 SiO 4 ) [28].The silicic acid group, comprised of silicon, hydrogen, and oxygen, is a group of chemical compounds with the common formula [SiO x (OH) 4−2x ] n [30] .Metasilicic acid, orthosilicic acid, disilicic acid, trisilicic acid, and the hydrated equivalent, pyrosilicic acid, are a few simple forms identified in very dilute aqueous solutions [30].These forms become unstable in the solid state and polymerize to form complex silicic acids [30].
Structural forms of the various silicic acids and silicone are shown in Fig. 2.Among these forms, orthosilicic acid is the most fundamental chemical form of water-soluble Si [29] and is also the natural form of Si in humans and animals [27,29].In the form of orthosilicic acid, Si is the third most abundant trace element in the human body [27].Si also activates hydroxylation enzymes, enhancing skin strength and suppleness [31], and is present in 1-10 parts per million (ppm) in hair, hair epicuticle, nails, and cornified epidermis [27].
Crystalline and amorphous silica have different forms, or polymorphs, each with unique surface chemical properties.Crystalline silica is highly abrasive and used in grinding, sandblasting, and masonry projects [32].In contrast, hydrated silica is only mildly abrasive, commonly used in toothpaste [33], and can quickly form gels that can be used in liquid foundation products [34].Both crystalline and amorphous silica are forms of silicon dioxide which in turn is a silicic acid anhydride of monomeric orthosilicic acid.
The most common form of silica used in cosmetics and skin care products is amorphous silica [35], categorized as either natural amorphous silica or synthetic amorphous silica (SAS) [36].Natural amorphous silica forms typically contain crystalline silica, while synthetic amorphous silica is free of crystalline silica contamination [37].There are several forms of SAS, 2 of which include nonporous silica nanoparticles and mesoporous silica nanoparticles (MSNs) [36].Nonporous silica nanoparticles have no particular shape or structure and have several applications due to their excellent biocompatibility.They are used in drug delivery, imaging, and enzyme encapsulation [38].The reflective properties of synthetic amorphous silica nanoparticles (SASNs) make them excellent candidates for cosmetics and sunscreens [34,39].MSNs, on the other hand, have a specific structure and large surface area.Due to their well-regulated porosity and high thermal stability, MSNs are widely used in catalysis, bioimaging, and drug delivery [38].The different silica forms can best be distinguished based on their size (Table 1) [36].
Amorphous silica is an inert inorganic excipient, and the FDA currently recognizes the use of silica in the food industry as an anticaking agent [40]. According to the current Code of Federal Regulations Title 21, amorphous silica is generally recognized as safe (GRAS) as an ingredient in human drugs and feeds [41,42]. The addition of amorphous silica to topical formulations may be beneficial in reducing harmful skin effects, such as irritation and rashes caused by strong active ingredients [17]. Therefore, SAS is safe when used topically, but not all forms of silica are the same, and there are several health risks associated with crystalline silica. Finally, silicones, not to be confused with silica, are synthetic polymers made up of repeating siloxane units [43], elemental Si and O combined with other elements (typically carbon [C] and hydrogen [H]), with the molecular formula [R2SiO]n (R = CH3, C2H5, or C6H5). Silicones have different functional uses than silica and are commonly used as gels or sheeting to treat and minimize scars resulting from surgery, burns, and other skin injuries [44,45].
Uses of silica microencapsulation in topicals and sunscreens
In the current FDA inactive ingredient database (IID), which was last updated on January 2023, the maximum potency per unit dose limit for silicon dioxide used in topical creams is 3.4% w/w, and it is 0.25% w/w for topical gels [46].Product tolerability may be improved by using sol-gel microencapsulation to coat the surface of drugs or active ingredients with a high irritation potential by reducing the contact with biological components and membranes in human skin.The FDA recently approved two new first-line topical therapies: a 5.0% microencapsulated benzoyl peroxide (E-BPO) for treating papulopustular rosacea and a fixed-dose combination of microencapsulated 3.0% BPO and 0.1% tretinoin (E-BPO/T) to treat acne [47,48].Despite its therapeutic properties, the use of BPO has traditionally been avoided in patients with rosacea due to the high irritation rates.E-BPO is a proprietary vehicle technology to create silica-encapsulated BPO using the sol-gel microencapsulation technique.This encapsulation forms a barrier between the drug and the skin, resulting in a gradual release and absorption of BPO, allowing for efficacy in rosacea treatment while reducing tolerability issues and adverse events [47].The microencapsulation technology in E-BPO/T enables combining BPO and tretinoin into one product.The silica microcapsules segregate and envelop each of the active ingredients, protecting tretinoin from the oxidizing effects of BPO and releasing each active ingredient separately and gradually onto the skin [48].Sol-gel topical products contain silica particles that are larger than typical SAS nanoparticles.The processes for encapsulating BPO and tretinoin have been described previously [5].There is one sunscreen product that uses an advanced microencapsulation technology in which the sol-gel silica coating enhances avobenzone photostability [49].The sol-gel-treated UV filters remain on the skin surface, and the coating provides soothing skin protection.Silica encapsulation prevents the UV filter from contacting the skin surface and, subsequently, reduces avobenzone cutaneous uptake and hypersensitivity potential [49].
Other topical applications for microencapsulation
The sol-gel technique is also used to synthesize wound-healing products [50]. Chitosan-silica (CTS-Si) materials produced through the sol-gel process have distinctive characteristics and can function as wound-dressing agents to speed up wound healing [50]. Because of their many beneficial properties, MSNs have a broad spectrum of practical uses, including combating bacterial infections [51]; however, commercialization may be challenging. A gentamicin-loaded MSN construct with bacterial toxin-receptive lipid bilayer surface shells protecting the bacteria-targeting peptide UBI 29-41 effectively targeted Staphylococcus aureus (S. aureus) in vitro and in vivo and hindered S. aureus growth in mouse models [52]. Also, hollow mesoporous silica nanoparticles (HMSNs) and nonporous MSNs are used to treat skin disorders [53]. An MSN assembly containing a small interfering RNA (siRNA) formulation can treat skin squamous cell carcinoma (SCC) [54]. The effectiveness of an MSN-siRNA formulation was investigated by administering siRNA topically to target the SCC transforming growth factor-beta receptor type 1 (TGFβR-1) gene in a mouse model. The results show that MSNs comprising TGFβR-1 siRNA suppressed TGFβR-1 twofold compared with controls [54]. Further research is needed to test their efficacy and safety in humans.
Silica uses in the functional design of a controlled drug delivery system
Due to their ordered mesoporous structure, functional moieties can be appended to the surface of MSNs, regulating the delivery of bioactive agents in response to different stimuli, including light, temperature, pH, electric fields, and chemicals [55]. In one study, hollow silica particles were mixed with microgels to generate novel organic/inorganic systems called thermoresponsive hollow silica microgels (THSMGs) [56]. These showed sensitivity to stimuli and might function as sustained drug delivery agents [56]. Microparticles are a unique category of drug delivery systems in which the microencapsulation technique enhances the photostability of drugs that undergo photodegradation [57]. Microencapsulation increased pantoprazole's photostability, made the drug acid-resistant, and extended its release to 9 h, making it more patient compatible.
Use of silica microencapsulation in oral medications
Oral drug delivery systems (ODDS) use silica-based materials due to their porous nature, minimal toxicity, and solubility in biological fluids [58]. The primary advantage of using silica-based materials in oral medications is that when silica undergoes enzymatic breakdown, the byproduct orthosilicic acid is formed, which is then excreted by the kidneys into the urine and is thought to be harmless [58,59]. The four types of silica-based materials used in oral delivery systems are (a) nonporous silica nanoparticles (fumed or Stöber nanoparticles), (b) mesoporous silica nanoparticles (MSNs), (c) mesoporous silica-based materials, and (d) biosilica [58]. A combination of two MSNs, Mobil Composition of Matter No. 41 (MCM 41) and MCM 48, was used to encapsulate aprepitant [60]. Aprepitant is an oral capsule used to prevent chemotherapy-induced and postsurgical nausea and vomiting [60]. Due to its low solubility and absorptivity, it must be administered at high doses. Microencapsulation with MCM 41 and 48 may help increase the solubility and availability of the medication at lower doses [60].
Additional and future silica uses
Silica has many medical applications beyond the skin. Magnetic resonance imaging (MRI) contrast agents take advantage of MSNs' biosensing abilities. A gadolinium (Gd3+)-incorporated MSN (Gd2O3@MSN) had desirable MRI contrast-augmentation properties, making it suitable for developing more precise and possibly even more focused contrast agents for molecular MRI [61]. It could also provide a real-time readout of treatment results, perhaps improving the clinical value of MRI. In addition, the MSN structure comprises silica and Si-OH groups, wherein the Si-O-Si framework is relatively stable and silica breakdown is difficult under physiological conditions. These particles likely enable good loading of Gd2O3 but prevent the release of free Gd3+, lowering its toxic effects [61]. Another use of amorphous silica is as an ODDS based on encapsulated ciprofloxacin used to target an intracellular Salmonella infection [62]. The beneficial properties of silica-based nanoparticles make them promising candidates for the percutaneous delivery of anticancer drugs. A dabrafenib and trametinib drug combination was encapsulated in organosilica nanoparticles to treat mutant melanoma [63]. MSNs may also be potential drug delivery agents for treating malignant nervous system tumors and Alzheimer's disease. Dementia associated with Alzheimer's and Parkinson's disease was treated using MSNs loaded with rivastigmine hydrogen tartrate [64]. Mo et al. demonstrated that tailored MSNs carrying anticancer drugs could circumvent the blood-brain barrier (BBB) in treating glioblastoma [65]. Another application of MSNs is in microneedle-mediated intradermal vaccination, wherein a microneedle array coated with a lipid-MSN nanoconstruct served as an intradermal transport system for encapsulated protein antigens [66]. Silica nanoparticles can also simulate pathogen spread by contact transmission. A pilot study was performed in which silica nanoparticles with encapsulated DNA (SPED) served as a surrogate tracer to study microbial spread [67]. The pilot study results show that SPED could be a valuable and safe tool for studying pathogen propagation [67].
Conclusion
The versatile use of amorphous silica as an excipient in healthcare and medicine has demonstrated remarkable potential in the field of drug discovery. This review has shed light on the various forms of silica and the benefits of silica microencapsulation. The use of sol-gel microencapsulation technology has enabled the development of innovative dermatological products, offering new treatments for patients suffering from conditions such as rosacea and acne. The creation of a protective silica shell between the medication and the skin has resulted in a more controlled delivery, increasing the efficiency of treatments while minimizing the adverse side effects. The continuing advancements in microencapsulation techniques have opened up new possibilities for the future of drug delivery and offer exciting opportunities for the development of novel medical treatment applications. In summary, the potential for amorphous silica and sol-gel microencapsulation technology in healthcare is enormous and requires continued research and development to explore its full capabilities.
Fig. 1 Cryo scanning electron microscopy (cryo-SEM) image of encapsulated benzoyl peroxide (E-BPO) and SEM image of encapsulated tretinoin (E-ATRA). (a) Cryo-SEM image of an E-BPO microcapsule captured with a secondary electron detector.
Fig. 2 Forms of different silicic acids
Table 1 Different forms of silica | 3,778.6 | 2023-10-04T00:00:00.000 | [ "Medicine", "Materials Science" ] |
Magnetic properties and structure of Gd-implanted L10 FePt thin films
In this study, we have investigated the effect of Gd implantation on the composition, chemical order, and magnetic properties of 20 nm thick L10 ordered FePt thin films. We show that upon Gd implantation at 30 keV, even a small amount of 1 at. % is sufficient to destroy the L10 order, resulting in a soft magnetic A1 FePt alloy, with the exception of a thin L10 ordered layer located at the film/substrate interface. Additionally, a strong resputter effect is observed, which results in a large decrease in film thickness as well as a reduction in Fe content in the FePt alloy. Post-annealing of samples in Ar atmosphere did not result in a restoration of the L10 order but led to a transformation to pure Pt and Fe2O3, facilitated by the presence of a high density of vacancies induced by the implantation process. © 2019 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5097350
I. INTRODUCTION
Chemically ordered L10 FePt alloy thin films, comprising equal amounts of Fe and Pt, can exhibit large perpendicular magnetic anisotropy (PMA) of up to 70 Merg/cm3. [1][2][3] Recently, these films have been implemented as storage material for applications in heat-assisted magnetic recording (HAMR), which is expected to further extend the areal density towards 3-4 Tb/inch2. [4][5][6][7][8] While high PMA is needed for the thermal stability of today's hard disk drives, it poses a challenge to magnetic writing heads, as heat assistance is generally required in order to reverse the magnetization direction. 8 Great interest therefore lies in the addition of third elements to the system, allowing for fine tuning of certain properties of the FePt alloy such as Curie temperature, saturation magnetization and PMA, as well as lowering the ordering temperature during post-annealing of chemically disordered FePt films. [9][10][11][12][13][14][15][16] In particular, the inclusion of rare earth elements can provide further functionalities. 17 For example, the addition of a heavy rare earth element should result in a reduction of the net magnetization due to the expected strong antiferromagnetic coupling between the magnetic moments of Fe and a heavy rare earth element such as Gd. 18 Furthermore, it has been shown that the magnetization dynamics in ferrimagnetic GdFe stimulated by femtosecond laser pulses can offer an intriguing pathway for overcoming the material constraints of high magnetic anisotropy. In this regard, toggle switching in GdFe alloys, in which the magnetization switches back and forth after subsequent ultrashort laser pulses, has been discovered. [18][19][20][21] In this study, we have implanted Gd ions as a third element into L10 ordered FePt thin films and investigated their impact on the structural and magnetic properties.
II. METHODS
Crystal structures were analyzed by means of x-ray diffractometry (XRD). To characterize chemical compositions and film thicknesses, Rutherford backscattering spectrometry (RBS) was conducted with 5 MeV He 2+ ions. The Gd implantation was realized using an Eaton NV-3204 medium current implantation system. Simulations of the ion implantation process were performed utilizing the SRIM/TRIM software package. 22 A superconducting quantum interference device -vibrating sample magnetometer (SQUID-VSM) was used to measure the magnetic properties. Surface images were recorded using an atomic force microscope (AFM). Based on this data, the root mean square surface roughness Rq was calculated. Scanning electron microscopy (SEM) images were taken at 10 kV probe energy and 100 pA probe current. Auger electron spectroscopy (AES) was performed at an Omicron NanoSAM system operating at 5 kV probe energy with 3 nA probe current. The hemispherical analyzer was operated in constant retard ratio mode.
III. RESULTS AND DISCUSSION
Fe52Pt48 thin films with a thickness of 20 nm were prepared at 800 °C on single crystalline MgO(001) substrates by dc magnetron sputtering using an Ar pressure of 5 µbar. In order to be able to make a statement about the degree of chemical L10 ordering in the sample, an out-of-plane XRD θ/2θ-scan was carried out. As revealed in Fig. 1a, single crystalline films with L10 order and (002) orientation were obtained under these deposition conditions. Based on the ratio of the integrated intensities of the FePt(001) and FePt(002) diffraction peaks, the order parameter was determined to be 0.77, taking into account the structure factor, absorption factor, polarization and Lorentz factors, and the thermal displacement factor, as described by B. W. Roberts. 23 An experimental Debye-Waller factor of 0.14 Å, reported for FePt, 24 was used to calculate the thermal displacement factor. The film's magnetic properties were characterized by measuring in-plane and out-of-plane M-H hysteresis loops at 300 K.
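As a rough illustration of how such an order parameter can be obtained from the two integrated intensities, the minimal Python sketch below applies the relation S^2 = (I001/I002)_measured / (I001/I002)_calculated; the fully corrected calculated ratio and the intensity values shown are placeholders, not numbers from this work.

```python
# Minimal sketch (not the authors' exact routine) of estimating the L10 chemical
# order parameter S from the measured FePt(001)/(002) integrated intensity ratio.
# The calculated ratio is assumed to already contain structure, Lorentz-polarization,
# absorption, and thermal displacement corrections for a fully ordered film (S = 1).
import math

def order_parameter(i001_meas: float, i002_meas: float, ratio_calc: float) -> float:
    """Return S from measured integrated intensities and the corrected calculated ratio."""
    ratio_meas = i001_meas / i002_meas
    return math.sqrt(ratio_meas / ratio_calc)

# Hypothetical numbers for illustration only:
print(order_parameter(i001_meas=1.2e4, i002_meas=9.5e3, ratio_calc=2.13))
```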
The corresponding loops are shown in Fig. 4a, revealing a clear out-of-plane easy axis of magnetization. It was not possible to saturate the sample in the magnetically hard in-plane direction due to the high magnetocrystalline anisotropy constant of the L10 phase, which was estimated to be in the range of 40 Merg/cm3. Please note that we have measured a rather high saturation magnetization of the prepared L10 FePt film, which is about 20% larger than typically reported in the literature. 25 The reason for this discrepancy is still not clear, but the conclusions drawn are not affected by it. The surface roughness Rq of the sample was calculated to be 1.7 nm based on AFM measurements. These film samples were further used for the Gd implantation studies. Before Gd implantation, the correct fluence of Gd atoms per cm2 as well as the ion energy for the process had to be determined. Therefore, numerical TRIM simulations were conducted, simulating the behaviour of accelerated Gd ions in 20 nm thick FePt films. Various runs at different ion energies, ranging from 10-50 keV, were simulated. The results are shown in Fig. 2a. For increasing energies, the maximum in Gd concentration shifts towards the substrate and the curves flatten out. A desirable distribution has its maximum at a sufficient depth below the surface, without penetrating into the MgO substrate, as the interface between the FePt film and the substrate should remain intact. The Gd distribution corresponding to an ion energy of 30 keV satisfies both requirements adequately. Figure 2b shows the estimated trajectories of Gd ions in the FePt film at this energy. The damage calculation for this energy yielded a large value of 650 displacements per implanted atom, inducing a high density of vacancies in the film samples.
Four different implantations with Gd concentrations of 1, 2, 3, and 5 at. % were conducted at an incident angle of 7° in order to avoid channelling effects. The calculated fluences as well as the compositions and thicknesses, obtained by RBS measurements, are summarized in Table I. The desired Gd concentrations were achieved within the range of accuracy of RBS. An interesting observation is that the relative Fe content is strongly decreased at higher exposure doses, which is due to the higher sputter yield of Fe compared to Pt during Gd implantation. The Fe and Pt concentrations as a function of Gd content (or dose) are given in Fig. 3a. The variation of the film thickness is shown in Fig. 3b, revealing a substantial reduction of over 40% for the highest exposure dose.
To evaluate the amount of damage to the L1 0 ordering caused by the implantation, XRD θ/2θ-scans of all samples were recorded (see Fig. 1b-e). A splitting of the FePt(002) peak into two peaks, especially for the highest Gd concentration, as shown in Fig. 1e, can be observed. The stronger of the two peaks at a lower angle belongs to the disordered A1 phase. The weaker (002) peak indicates the remaining L1 0 phase. The (001) peak that only exists for the L1 0 phase has strongly decreased in intensity when compared to before implantation (see Fig. 1a). Even the smallest exposure dose destroyed the L1 0 ordering except for a small amount. As the remaining fraction of L1 0 phase seems to be equally present in all samples, the region in which the ordering could prevail must be at the film/substrate interface, as this region is barely affected by Gd ions (see Fig. 2).
Another observation that can be made is that the position of the A1 FePt(002) peak shifts towards lower angles for higher implantation doses, as shown in more detail in Fig. 1f. Due to the fact that the samples have slightly different sizes, the absolute measured intensities of the different samples cannot be compared to one another in a meaningful way and were therefore normalized to their maximum. The shift of the peak position in angular space corresponds to an increase in lattice spacing from c = 3.818 Å for 1 at.% to c = 3.855 Å for 5 at.% Gd. This behaviour is mainly a result of the A1 FePt phase getting richer in Pt due to the stronger resputter effect of Fe compared to Pt.
The decrease in film thickness for higher exposure doses manifests itself in the evident broadening of the diffraction peaks (see Fig. 1f). According to the Scherrer equation, the peak's half-width is inversely proportional to its coherent scattering length. The thicknesses extracted from XRD data as well as the thicknesses obtained from RBS measurements are shown in Fig. 3b, which are in good agreement except for the highest exposure dose. The XRD results appear more reasonable, as they imply a linear decrease in film thickness as the exposure dose increases, and they are therefore used in the following for calculating the film volume needed to determine the magnetization of the implanted samples.
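For reference, the Scherrer-type estimate relating peak broadening to the coherent scattering length can be sketched as below; the wavelength, shape factor, and peak values are illustrative assumptions rather than the values used in this study.

```python
# Scherrer estimate of the coherent scattering length (here, film thickness) from the
# FWHM of an FePt(002) reflection: t = K * lambda / (beta * cos(theta)).
# K, the wavelength (Cu K-alpha assumed), and the peak values are placeholders.
import math

def scherrer_thickness(fwhm_deg: float, two_theta_deg: float,
                       wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    beta = math.radians(fwhm_deg)              # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

print(scherrer_thickness(fwhm_deg=0.45, two_theta_deg=47.5))  # thickness in nm
```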
To characterize the change in magnetic properties after implantation, in-plane and out-of-plane M-H hysteresis loops were measured (see Fig. 4b-e). From the measurements, it becomes apparent
that the out-of-plane MS seems to be larger than the in-plane value. This is due to an inherent error arising from the SQUID's pickup geometry. 26 The loops reveal two distinct parts of the reversal: a rather sharp switching at low fields in the range of tens of Oe and a reversal of magnetization at higher fields of up to 20 kOe. This observation is consistent with the conclusions drawn from the structural analysis, where two layers were suggested: a dominant A1 phase with an in-plane easy axis and a small L10 ordered region at the film/substrate interface exhibiting an out-of-plane easy axis. The chemically disordered A1 phase shows no magnetocrystalline anisotropy; therefore, the in-plane direction is now the preferred magnetic easy axis, due to magnetic shape anisotropy. We analyzed the evolution of MS in order to see some indication of magnetic coupling between Fe and Gd, which might be strongly antiferromagnetic, as observed in Fe-Gd alloys. 27,28 However, as shown in Fig. 4f, we found only a slight decrease in MS with Gd content, which is much lower than expected for antiferromagnetic coupling between Gd and Fe, assuming a Gd moment of 7.6 µB. 29 Thus, the reduction is simply given by the reduced Fe content after implantation, while Gd is considered to be in a paramagnetic state. Please note that M-H loops taken at lower temperatures down to 50 K revealed the same behaviour.
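A simple way to see why the measured decrease is much lower than expected is to compare the two limiting cases in a short bookkeeping sketch; the Fe moment assumed below is an illustrative literature-like value, not one reported here, and the substitution rule is a deliberate simplification.

```python
# Rough moment bookkeeping (illustrative assumptions only): average moment per atom
# when a fraction x of Gd is added to FePt, assuming either antiferromagnetic Fe-Gd
# coupling (Gd subtracts 7.6 mu_B per Gd atom) or a paramagnetic Gd (no net contribution).
MU_FE = 2.9   # assumed Fe moment in FePt (mu_B) -- illustrative, not from this paper
MU_GD = 7.6   # Gd moment quoted in the text (mu_B)

def net_moment_per_atom(x_gd: float, antiferro: bool) -> float:
    """x_gd: Gd atomic fraction (e.g. 0.05 for 5 at.%); Pt treated as non-magnetic;
    Gd assumed to replace Fe and Pt equally."""
    x_fe = 0.5 * (1.0 - x_gd)
    gd_term = -MU_GD * x_gd if antiferro else 0.0
    return MU_FE * x_fe + gd_term

for x in (0.01, 0.05):
    print(x, net_moment_per_atom(x, antiferro=True), net_moment_per_atom(x, antiferro=False))
```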
In order to restore the desired L10 ordering, the samples were thermally post-annealed at 800 °C for one hour inside a tube furnace. The process was conducted in a low-pressure Ar atmosphere to prevent reactions with oxygen. The XRD θ/2θ-scan of the post-annealed Fe51Pt48Gd1 sample is compared with that of the implanted sample in Fig. 5a and b, respectively. However, no transformation to the L10 structure could be observed. Even the previously measured A1 FePt(002) peak completely disappeared. Instead, a pure fcc Pt phase formed, which manifests itself in the occurrence of the Pt(002) peak at around 46°. A striking feature of this peak compared to the previously measured FePt peaks is its small full width at half maximum. The coherent scattering length corresponding to this value is about 40 nm. Therefore, the Pt phase most likely appears in the form of islands. The total lack of Fe-related peaks can be explained by oxidation of Fe in the sample by residual O2 inside the Ar gas during annealing. We believe that it is thermodynamically distributed throughout the sample due to the high density of vacancies and structural defects introduced by Gd implantation. An XPS study determined the type of iron oxide to be Fe2O3 (not shown). In this regard, a systematic study on the oxidation of FePt nanoparticles was reported by C. Liu et al. 30 In their series of experiments, the change in structure after annealing in an oxygen-rich atmosphere at different temperatures was investigated. Samples annealed at 700 °C exhibited no FePt compounds but consisted solely of pure Pt and Fe2O3, which is consistent with our observation.
The surface morphology of the post-annealed Fe51Pt48Gd1 film was examined by AFM and SEM imaging. AFM measurements reveal a grainy film structure with a roughness Rq of about 16 nm (see Fig. 6a), while SEM images show, in addition, separated regions of brighter and darker areas (see Fig. 6b, c). At eight selected spots, marked in Fig. 6c, AES was measured to gain insight into the local chemical composition at each spot. The measured Auger signal (Fig. 6d) shows a high Pt and low Fe and O concentration at the bright areas, while the darker regions show only Fe and O, suggesting a local phase separation between elemental Pt and iron oxide.
The change in magnetic properties induced by the post-annealing process was captured by another series of in-plane and out-of-plane M-H hysteresis loops, one of which is compared to an implanted sample in Fig. 5c and d, respectively. The magnetization is still calculated assuming the same volume as before annealing. However, this is not necessarily valid, as the volume has changed as a result of the phase formation of Pt and Fe2O3. Both of these effects increase the overall volume, as both Pt and any iron oxide exhibit a lower density than FePt. 3,31 The magnetization values given for the post-annealed sample are therefore not to be taken literally but are upper estimates of the actual magnetization.
The general shape of the measured M-H loops strongly differs from any of the previously measured loops, as the magnetization loops hardly show an opening at the centre. The coercivity has decreased by a factor of 20 and is now on the order of tens of Oe, as visible in the inset of Fig. 5d. Even though the measured magnetization value, as mentioned previously, is an upper estimate, a striking decrease by about a factor of five is still noticeable when compared to before annealing, characteristic of weakly ferromagnetic Fe2O3. 32
IV. CONCLUSIONS
L10 ordered Fe52Pt48 films with a thickness of 20 nm and strong perpendicular magnetic anisotropy were sputter-deposited on MgO(001) at 800 °C. Four different Gd concentrations were then implanted at 30 keV to make up 1, 2, 3, and 5 at. % of Gd. The thickness of the film decreased continuously as more and more material was resputtered. During the implantation process, stronger resputtering of Fe compared to Pt was observed, decreasing the relative Fe/Pt ratio the more Gd was implanted. The L10 order was destroyed almost entirely by the process, leaving behind only a thin ordered layer at the film/substrate interface. The magnetic easy axis turned in-plane, and the high coercivity previously measured disappeared. The continuous loss of Fe in the FePt alloy after implantation resulted in a decrease in MS, without any signature of magnetic coupling between Fe and Gd. Thus, Gd is expected to be in a paramagnetic state. In an attempt to restore the L10 ordering by post-annealing at 800 °C in a low-pressure Ar atmosphere, the material transformed to pure Pt and weakly ferromagnetic Fe2O3, facilitated by the presence of a high density of vacancies induced by the implantation process. | 3,781.4 | 2019-05-23T00:00:00.000 | [ "Physics", "Materials Science" ] |
Detection and Severity Evaluation of Combined Rail Defects Using Deep Learning
: Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, which is a branch of machine learning techniques, to detect and evaluate the severity of rail combined defects. The combined defects in the study are settlement and dipped joint. Features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. Deep learning techniques used in the study are deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN model. For simplified data, features are extracted from raw data, which are the weight of rolling stock, the speed of rolling stock, and three peak and bottom accelerations from two wheels of rolling stock. In total, there are 14 features used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing and data extraction. Hyperparameter tuning is performed to ensure that the performance of each model is optimized. Grid search is used for performing hyperparameter tuning. To detect the combined defects, the study proposes two approaches. The first approach uses one model to detect settlement and dipped joint, and the second approach uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is good enough to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and mean absolute error (MAE) of 1.58 mm.
Introduction
The railway is a transportation mode that plays an important role nowadays because it is environmentally friendly, energy-saving, and safe. Therefore, the demand for railway transport is increasing. However, the investment in railway projects is high, so the load and speed of rolling stock are increased to meet the increasing demand for railway transportation. The high load and speed of rolling stock deteriorate the railway infrastructure, and railway defects take place when the deterioration reaches a certain level. Railway defects can emerge as a single defect or as combined defects. Combined defects are more complicated and more difficult to detect and evaluate than a single defect. Therefore, a tool to detect and evaluate the severity of combined defects is necessary to improve railway maintenance capability.
Railway defects can be inspected using a traditional technique such as visual inspection [1] or more advanced techniques such as ultrasonic [2], magnetic induction [3], acoustic emission [4][5][6], and eddy current [7] methods, which are forms of non-destructive testing (NDT). The benefits of NDT are less waste, less downtime, accident prevention, advanced identification, comprehensive testing, and increased reliability [8]. Machine learning is an NDT technique that is popular at present because it is fast, cost-saving, and has been shown to provide satisfactory performance. Many machine learning techniques can be used to develop models to detect and evaluate defects. This study applies deep learning techniques to develop the models because it has been shown that deep learning techniques tend to provide better performance if they are constructed properly [9].
This unprecedented study aims to apply deep learning techniques, namely, deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN) models, to detect and evaluate the severity of combined defects consisting of settlement and dipped joint, using axle box accelerations (ABA) as features. It is noted that the dipped joint and settlement in this study are simplified to geometrical irregularities. In fact, they can be related to void irregularity, which is more complicated, and further study is needed to investigate their dynamic behavior. ABA is used to detect and evaluate the severity of combined defects because it is one of the NDT approaches that requires a low installation cost, and it can be measured continuously while the rolling stock is operated. ABA can be measured by installing an axle box acceleration sensor on the rolling stock. The measurement can be monitored in real time or at the end of the day and fed into the machine learning models to detect and evaluate the severity of defects. This process is an inverse analysis based on the fact that defects affect the ABA differently depending on the type of defect. This approach is fast, cost-saving, and it monitors the track condition all the time. A verified simulation called D-Track is used to generate numerical data for machine learning model development.
The expected contributions of the study are that the developed models can detect and evaluate the severity of combined defects which will improve the railway maintenance capability in terms of cost, time, and reliability.
Literature Review
Machine learning is a branch of "the study and design of intelligent agents" intended to achieve a defined purpose [10]. Nowadays, machine learning is widely used in various areas such as computer science, psychology, medicine, neuroscience, cognitive science, linguistics, and engineering. Machine learning can reduce human error, reduce human risk in some situations such as railway inspection, continue working for long periods (especially on repetitive tasks), work fast, and deal with complicated tasks [11].
Machine learning has been adopted in the railway industry in different areas. Huang et al. [12] used a random forest and a support vector machine to control the speed profile and calculate the energy consumption of rolling stock. They reported that the developed approaches had an energy-consumption calculation error of less than 0.1 kWh and could reduce energy consumption by 2.84%. Alawad et al. [13] applied a decision tree to analyze fatal accidents. Sysyn et al. [14] applied the computer vision concept to predict contact fatigue on crossings. However, they faced a long processing time issue and claimed that deep learning could resolve it.
For railway defect detection, ABA has been widely used. Núñez et al. [15] applied ABA to detect squats and corrugations. The case study was from the Dutch Railway. They achieved a detection accuracy of higher than 85%. Then, Li et al. [16] applied the same concept to detect light squats. They could detect defects up to 85% using ABA, and many studies supported this finding [17,18]. Their findings demonstrated that ABA has the potential to be used to detect railway surface defects. This was also supported by many studies. Song et al. [19] found the relationship between ABA and polygonized wheels under high-speed conditions. ABA was used to predict the degradation of railway crossings [20]. Other defects can be detected using ABA as well, such as insulated rail joints [21], bolt tightness [22], and track geometry [23]. Machine learning techniques were also applied to detect railway defects. Using ABA as the input for machine learning could provide a satisfying outcome. Table 1 summarizes machine learning techniques used to detect railway defects and demonstrates the research gap in this area. From Table 1, it can be seen that image processing is the popular technique used to detect defects. However, cameras need to be installed, and there are limitations regarding lighting and image quality. Combined defects have not been comprehensively studied because most studies considered each defect separately, as with severity evaluation. This is the research gap that this study aims to fill by developing models to detect combined defects and evaluate the severity of defects using axle box accelerations (ABA). The outstanding benefits of using ABA are that it requires few additional installations (which is cost-saving), continuity of data collection, and speed of inspection.
Numerical Data Simulation and Characteristics
Machine learning models in this study are developed using numerical data simulated by D-Track. D-Track is a simulation used to model the dynamic behavior of the wheel and rail in railway transportation. D-Track was developed by Cai [34] in 1996. Then, Steffens [35] developed the DARTS (Dynamic Analysis of Rail Track Structure) model and an interface for D-Track in 2005. He found that the accuracy of D-Track at that time was not satisfactory, because the simulated data and site data were significantly different. Then, D-Track was improved for more accuracy by Leong [36]. He found the causes of D-Track's accuracy issues, which included too-low calculated wheel-rail forces, unnecessary assumptions in D-Track, inaccurate sleeper pad reactions, and inaccurate sleeper bending moment calculation. Addressing these issues, he improved both the interface of D-Track and its workflow to improve the performance of the simulation. After the improvement, the simulation's outcome was close to the site data, with an error of less than 10%. He compared the simulated results with field data collected between Melbourne and Geelong, Australia. The parameters used for comparison were average wheel-rail contact force, shear force, average rail acceleration, and bending moment. He also compared results with other numerical models such as DARTS (Dynamic Analysis of Rail Track Structure), DIFF (Vehicle-Track Dynamic Analysis Model), NUCARS (New and Untried Car Analytic Regime Simulation), SUBTTI (Subgrade-Train-Track Interaction), and VIA (Vehicle Interaction with Track Analysis Model). He found that results from D-Track correlated well with those from other simulations. This study uses data simulated by D-Track as representative data for developing machine learning models to detect and evaluate the severity of combined defects, which are crucial to rail safety and predictive track maintenance [37][38][39][40][41][42][43].
To simulate the dynamic characteristic of the railway system using D-Track, various inputs need to be defined in the simulation such as track properties (stiffness, damping, sleepers, etc.), vehicle properties (speed, weight, wheel radius, etc.), defect properties, and defect locations. Detailed variables are also required to define each category. Different outputs can be reported using D-Track such as accelerations, forces, pressures, bending moments, shear forces, and displacements of each wheel and track component. As mentioned, this study uses ABA or axle box acceleration from the simulation to develop machine learning models because it can be measured easily in the practice.
In terms of simulation inputs, a summary of parameters is shown in Table 2. Table 2 shows the 1650 simulations run to simulate data. Examples of ABA are shown in Figure 1. The speed and weight of the rolling stock are 20 km/h and 40 tons respectively. Figure 1a presents ABA when the rail is free from defect, and Figure 1b presents ABA when the rail has the 2.5 mm dipped joint and 20 mm short settlement, as shown in Figure 2. These two figures show that the ABAs from the defect-free rail and the rail with defects are significantly different and easy to categorize. However, when the sizes of combined defects vary and the defects are combined, it will be more complicated to categorize the type and size of defect; machine learning plays an important role for this purpose. From Figure 1b, the ABA has peak and bottom values, which will be used as simplified features. Figure 1 presents only one ABA from a wheel. From the simulation, ABAs from two wheels are extracted and used as features.
Parameters and values (Table 2): sizes of dipped joint, 0-10 mm (the length of the dipped joint is 1000 mm); sizes of settlement, 0-100 mm (the lengths of the settlement are 3000 and 10,000 mm for short and long settlement, respectively).
The ABA is used in two ways, as mentioned: simplified data and raw data. Simplified data are used to develop the DNN model, and raw data are used to develop the CNN and RNN models. For simplified data, 14 features are extracted from the simulations: the weight and speed of the rolling stock, three peak ABA values from each of the two wheels, and three bottom ABA values from each of the two wheels. In the case of simplified data, the ABA is the result of the simulations, but the weight and speed of the rolling stock are defined before the simulation. This is done under the assumption that the weight and speed are known from on-board sensors. The reason for using simplified data for the DNN model is that it is more suitable than using raw data: the authors tried feeding the raw data into the DNN model, where the number of input nodes is equal to the number of values, but the performance was not satisfactory.
For raw data, ABAs from two wheels are fed into the models without processing and other features.
To process the simplified data and arrange the raw data, Visual Basic for Applications (VBA) is employed. Fourteen features are extracted from the simulations' reports and combined to create the dataset alongside the raw data from each simulation. In this study, the total number of simulations is 1650, so the number of samples is the same. Each sample is labeled in accordance with the classes of each model. For defect severity classification, the labels depend on the size of the defect, as shown in Table 3.
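A minimal Python sketch of assembling the 14 simplified features from the two simulated ABA traces is given below; the function and variable names are illustrative, and this is not the authors' VBA routine.

```python
# Sketch: build the 14 "simplified" features described above from two ABA traces:
# rolling-stock weight, speed, and the three highest plus three lowest acceleration
# values from each wheel (2 + 2 * (3 + 3) = 14 features).
import numpy as np

def simplified_features(aba_wheel1: np.ndarray, aba_wheel2: np.ndarray,
                        weight_t: float, speed_kmh: float) -> np.ndarray:
    feats = [weight_t, speed_kmh]
    for aba in (aba_wheel1, aba_wheel2):
        feats.extend(np.sort(aba)[-3:])   # three peak values
        feats.extend(np.sort(aba)[:3])    # three bottom values
    return np.asarray(feats)

# Illustrative call with random stand-in signals of the stated length (6695 samples):
x = simplified_features(np.random.randn(6695), np.random.randn(6695), 40.0, 20.0)
print(x.shape)  # (14,)
```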
AI Model Development
DNN, CNN, and RNN are employed to develop machine learning models for detecting and evaluating the severity of combined defects. For dipped joint and settlement detection, this study proposes two approaches. The first approach uses a single model to detect both dipped joint and settlement. The second approach uses two independent models to detect dipped joint and settlement separately. This is to investigate whether a model has better performance on a more specific task. Therefore, the first approach categorizes samples into four classes, namely, class 0: defect-free, class 1: dipped joint, class 2: settlement, and class 3: dipped joint and settlement. For the second approach, two models are used to detect dipped joint and settlement separately, so the classes are binary: defect and no defect.
For defect severity classification, samples are labeled as shown in Table 3. Models for classifying the severity of dipped joint and settlement are developed independently. It is noted that the second approach applies two models to detect each defect separately, so the labels shown in Table 3 are model-dependent. That means that label 0 in the dipped joint severity classification model is different from label 0 in the settlement severity classification model. For defect severity regression, the models are different because they are regression models in which the labels are real numbers. As with defect severity classification, two models are developed for dipped joint and settlement severity evaluation.
The workflow of the machine learning models for detecting and evaluating combined defects is shown in Figure 3. All models are tuned by hyperparameter tuning to ensure that all models provide the best performance. The detail is presented in the following section. In the training, samples are split using the proportion of 70/30. The performance of developed models is evaluated using accuracy in the case of classification and mean absolute error (MAE) in the case of regression. The models with the highest accuracy and the lowest MAE will be selected for further application.
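As an illustration of this workflow, the sketch below builds a small 1D CNN for the four-class detection task on the raw two-wheel ABA input and applies the 70/30 split; the layer sizes, training settings, and random stand-in arrays are placeholders, not the tuned values or data reported in this study.

```python
# Hedged sketch of the first detection approach: a 1D CNN over the raw ABA traces
# (6695 samples per wheel, two wheels) predicting four classes.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

X = np.random.randn(200, 6695, 2)            # stand-in for the simulated dataset
y = np.random.randint(0, 4, size=200)        # 0: none, 1: dipped joint, 2: settlement, 3: both

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

model = keras.Sequential([
    keras.layers.Conv1D(16, kernel_size=9, activation="relu", input_shape=(6695, 2)),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=5, batch_size=32, validation_data=(X_te, y_te))
```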
Hyperparameter Tuning
Some parameters of the models are not tuned during the training. Hyperparameter tuning is conducted to improve the performance of models and ensure that the models provide the best performance. In this study, a grid search is used to tune hyperparameters. The list of tuned hyperparameters of each model is shown in Table 4. The features used to develop the DNN model consist of 14 features, as mentioned in the previous section. For CNN and RNN, two sets of raw data from two wheels' ABA are used as features. The total number of values is 6695 for each wheel.
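A hand-rolled grid search over a few CNN hyperparameters, in the spirit of the tuning described above, might look like the following sketch; the parameter grids and stand-in data are assumptions, not the search space listed in Table 4.

```python
# Illustrative grid search over CNN hyperparameters on stand-in data.
import itertools
import numpy as np
from tensorflow import keras

X_tr = np.random.randn(100, 6695, 2); y_tr = np.random.randint(0, 4, size=100)
X_va = np.random.randn(40, 6695, 2);  y_va = np.random.randint(0, 4, size=40)

def build_cnn(n_filters, kernel_size, learning_rate):
    model = keras.Sequential([
        keras.layers.Conv1D(n_filters, kernel_size, activation="relu",
                            input_shape=(6695, 2)),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

grid = {"n_filters": [16, 32], "kernel_size": [5, 9], "learning_rate": [1e-3, 1e-4]}
best_params, best_acc = None, -1.0
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    hist = build_cnn(**params).fit(X_tr, y_tr, epochs=5, batch_size=16,
                                   validation_data=(X_va, y_va), verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if val_acc > best_acc:
        best_params, best_acc = params, val_acc
print("best hyperparameters:", best_params, "validation accuracy:", best_acc)
```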
Results and Discussion
This section presents the results of model development and discusses them by separating them into two topics, combined defect detection and combined defect severity evaluation. For combined defect detection, two approaches are applied as mentioned in the previous section. The first approach is developing a model to detect dipped joint and settlement, and the second approach is developing two models to detect dipped joint and settlement separately. Two approaches are compared to test the hypothesis of whether two models perform better than a single model for detecting combined defects.
The combined defect severity evaluation is presented into two topics: severity classification and severity regression. The classification is used to classify the severity of combined defects into groups as shown in Table 3. The regression is used to predict the size of combined defects. The performance of models is evaluated using the accuracy or MAE depending on the models. Three deep learning techniques are used, which are DNN, CNN, and RNN. The detail is presented as follows.
One Model for Detecting Both Dipped Joint and Settlement
There are four classes in this case, namely, no defect, dipped joint, settlement, and dipped joint and settlement. The performance of each model is presented in Table 5. From Table 5, the accuracy of the CNN model is the highest, followed by the DNN and RNN models, respectively. Surprisingly, the accuracy of the CNN model is almost 1.00; however, the accuracy of the RNN model is the worst, although both models use raw data as features. The DNN model performs quite well, although it does not perform as well as the CNN model and uses simplified data as features. From this, it can be concluded that using raw data does not guarantee higher accuracy than using simplified data. The RNN model has the lowest accuracy, from which it can be assumed that the technique is not suitable for classification in this condition. This is because the RNN model performs well when it deals with time-series data in which the sequence of the data is significant. In this situation, successive data points are not strongly related to each other, so the RNN model performs worse than the other models. Moreover, training the RNN model takes the longest time compared to the DNN and CNN models. From the results, the CNN model is the best model for detecting combined defects in this approach. The tuned hyperparameters of the CNN model are shown in Table 6.
Two Models for Detecting Dipped Joint and Settlement Separately
This approach tests whether the models have better performance when there are fewer classes to predict. Models are developed to detect dipped joint and settlement separately. The performance of each model is presented in Table 7. In Table 7, the overall accuracy is calculated by multiplying the accuracies of the best models for dipped joint and settlement detection. The CNN model again has the best accuracy, of 0.99. The overall performance of the models accords with the first approach, in which the CNN model has the highest accuracy followed by the DNN and RNN models. However, it is worth noting that the RNN model performs better than the DNN model in settlement detection. Compared to the first approach, it can be seen that the performance of the models improves when the number of classes is lower. However, the CNN model's accuracy does not change. This might be because the accuracy of the CNN model is already high and there is little room for improvement. Although the performance of the models can be improved by reducing the number of classes, the model developed in the first approach is good enough to detect combined defects. The CNN models from the two approaches perform the best and have the same accuracy of 0.99. The tuned hyperparameters of the CNN models are shown in Table 8.
Combined Defect Severity Evaluation
This section presents the results from model development to evaluate the severity of the combined defects after they are detected. To evaluate the severity, this study presents models to classify the severity and estimate the size of defects. In this part, dipped joint and settlement are considered separately, because the authors tried developing models that consider them together and found that the accuracy was not satisfactory due to the excessively large number of classes to predict. Therefore, considering them separately is the better option.
Severity Classification
There are three classes to classify the severity of dipped joint and settlement, which are shown in Table 3. The accuracy of the classification of each model is presented in Table 9, and the confusion matrices are presented in Tables 10 and 11, respectively (Table 11: confusion matrix of settlement severity classification from the recurrent neural network (RNN) model). From Table 9, the CNN model is the best model for classifying the severity of dipped joint, with an accuracy of 0.84, while the accuracy of the DNN and RNN models is not satisfactory. The RNN model still performs worst for classifying the dipped joint severity. However, it is surprising that the RNN model has the highest accuracy in classifying the settlement severity, with an accuracy of 0.99. This finding conforms to the settlement detection model in that the RNN model tends to perform well when dealing with settlement. Therefore, the total accuracy of classifying combined defect severity is calculated from the accuracy of the CNN model on classifying the dipped joint severity and the accuracy of the RNN model on classifying the settlement severity, which equals 0.83. The tuned hyperparameters of each model are shown in Table 12.
Severity Regression
Models developed in this section are different from the others because they are regression models. The output layer does not predict the class of the data but a continuous value. As mentioned, the performance of each model is evaluated using MAE, which is straightforward to interpret compared to other indicators. The size of the defect is not labeled in groups but is directly used as the label. The performance of the severity regression, or estimation, is shown in Table 13. The plots between actual data and predictions are shown in Figures 4 and 5 for dipped joint and settlement, respectively. From Table 13 and Figures 4 and 5, the CNN model is the best model for estimating the size of the dipped joint, with an MAE of 1.25 mm; the MAEs of the DNN and RNN models are higher, but the difference is not large relative to the maximum size of 10 mm. The RNN model has the highest MAE. From the previous models, it can be concluded that the RNN model is not suitable for detecting and evaluating the dipped joint, which can be seen from its lowest performance in every aspect. In contrast, the RNN model has the lowest MAE on estimating the size of settlement, which equals 1.58 mm. Compared to the maximum size of settlement used in this study (100 mm), the RNN model can estimate the size of settlement with very low error. This emphasizes the performance of the RNN model on detecting and evaluating the settlement. It can be concluded that the CNN model is the best model for estimating the dipped joint size, and the RNN model is the best model for estimating the settlement size, which conforms to the models' performance on the severity classification. The tuned hyperparameters of each model are shown in Table 14.
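To make the regression variant concrete, the sketch below swaps the softmax output for a single linear neuron trained against the defect size in millimetres and evaluated with MAE; the architecture, sizes, and stand-in data are illustrative, not the tuned configurations in Table 14.

```python
# Sketch of the severity-regression variant: single linear output trained with MAE.
import numpy as np
from tensorflow import keras

X = np.random.randn(200, 6695, 2)                      # stand-in raw ABA inputs
y_size_mm = np.random.uniform(0.0, 10.0, size=200)     # e.g. dipped-joint size in mm

model = keras.Sequential([
    keras.layers.Conv1D(32, 9, activation="relu", input_shape=(6695, 2)),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="linear"),        # continuous size, not a class
])
model.compile(optimizer="adam", loss="mae", metrics=["mae"])
model.fit(X, y_size_mm, epochs=5, batch_size=32, validation_split=0.30)
```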
Conclusions
This study is the first to apply deep learning techniques to detect and evaluate the severity of combined defects in railway infrastructure. Dipped joint and settlement are used as the case study of combined defects. The numerical data are simulated using D-Track, which is a verified simulation for studying the dynamic behavior of wheel and rail. Various parameters are used to create diversity in the data, and 1650 simulations are run in total. The output from the simulations used as features to develop the machine learning models is the ABA from two wheels. ABA is used in two ways: simplified data and raw data. The DNN model uses the simplified data, which consist of 14 features, while the CNN and RNN models use raw data. The data are split in a proportion of 70/30 into training data and testing data.
The models for detecting combined defects are developed using two approaches: a single model and two separate models. The study shows that using a single model is good enough to detect combined defects, with the best model being the CNN model with an accuracy of 0.99. To evaluate the severity, models are developed to classify the severity and estimate the size of defects. It is found that the CNN models have the best performance in classifying and estimating the dipped joint, with an accuracy of 0.84 and an MAE of 1.25 mm, respectively. However, the RNN models perform better in detecting and estimating the settlement, with an accuracy of 0.99 and an MAE of 1.58 mm, respectively.
To build on this unprecedented study, site data could strengthen the reliability of the findings. The main difference between the simulated data and site data is that site data contain noise. Including other types of defects would also improve the capability of the models by increasing the variety of the data. Other information is also worth trying as features for model development, to make use of other sensors and measurements.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality. | 7,357 | 2021-04-07T00:00:00.000 | [ "Engineering", "Computer Science" ] |
Reduction of hydrazines to amines with low-valent titanium reagent
The N,N bond cleavage in hydrazines to amines via low-valent titanium reagent prepared in situ by treatment of TiCl4 with Mg powder in THF or CH2Cl2-Et2O is described. The reaction proceeds smoothly under mild conditions to afford amines in good to excellent yields with diverse functional group tolerance such as chloride, methoxyl, benzyloxyl, ester, acyl, as well as C,C double bonds and benzyl-nitrogen bonds.
Introduction
Amines constitute an important class of compounds in chemistry and biology. They are frequently encountered as drugs in the pharmaceutical industry, pesticides for crop protection, and target molecules in total synthesis. [1][4][5][6][7][8][9][10][11][12] Remarkable examples are the addition of alkylzinc reagents to arylazo tosylates, 12 the electrophilic addition of α-diazoesters to ketones, 13 the addition of carbon nucleophiles to azodicarboxylates 14 and acylhydrazonium salts, 15 the anionic 8,16 or radical 17 additions to the C=N bond of hydrazones, the cross-coupling of ketones with hydrazones, 18 the addition of N-aminolactams to Michael acceptors, 10,19 the aza-Michael addition of hydrazines to electrophilic alkenes, 20 the radical cyclization of N-allyl-perchlorohydrazides, 21 and the 1,3-dipolar cycloaddition of azomethine imines to dipolarophiles. 22 These examples clearly show the significance of N,N bond cleavage in organic synthesis. Although a number of methods have been developed for N,N bond cleavage in hydrazine-based substrates and they give satisfactory results in general, limitations and difficulties are frequently encountered in many cases, mainly related to harsh acidic or basic reaction conditions, incompatibility with some functionalities, the requirement of activating groups, and in some cases a lack of reactivity. 6 For example, Zn/H+ reduction requires acidic conditions, 11 dissolving metal reduction generates basic conditions, 4,23 hydroboration 24 is incompatible with C=C bonds, metal-catalyzed hydrogenolysis may cause hydrogenation of C=C bonds and hydrogenolysis of benzyloxyl protecting groups before the reductive cleavage of hydrazine N-N bonds, 6,8 the oxidative cleavage 6,7 and SmI2-reductive cleavage 8,10 usually need an activating acyl group on at least one of the hydrazine nitrogens, and the reaction with 2-naphthols is restricted to N,N-disubstituted hydrazines at the present stage. 25 Recently, we reported the reductive cleavage of the N,N bond in hydrazines with aqueous titanium(III) trichloride solution. 26 The aqueous solution and heating involved in that method limit its general use.
The use of a low-valent titanium reagent as a powerful electron donor has been reported to accomplish certain deoxygenations, such as the reductive coupling of carbonyl compounds 27 and the reduction of sulphoxides, epoxides, bromohydrins, and cyanohydrins. 28 As far as we know, low-valent titanium reagents have not been reported for the reductive cleavage of N,N bonds in hydrazines to produce amines. Reports demonstrate that titanium(II) reagents can serve as the reductant for the transformation of N-nitrosoamines to hydrazines, 29 which indicates that a titanium(II) reagent does not reduce hydrazines to amines at room temperature. We hypothesized that the powerful reducing ability of a titanium(0) reagent, usually prepared in situ with an excess amount of reductant in a nonaqueous organic solvent, could be exploited to reduce hydrazines to amines under mild conditions, thus overcoming the difficulties encountered in previously reported methods. Herein we report the facile N,N bond cleavage of hydrazines in THF or CH2Cl2-ether, with a wide range of functional group tolerance, under mild conditions by a low-valent titanium reagent prepared in situ by reducing TiCl4 with excess magnesium powder.
Results and Discussion
A number of agents can reduce TiCl4 to a low-valent titanium reagent, including LiAlH4, alkaline metals, Zn, and Mg powder. 27 For simplicity and convenience, our attention turned to Zn and Mg powder. Because Zn reacted sluggishly with TiCl4, Mg powder was finally employed for the preparation of the low-valent titanium reagent from TiCl4 in this study. As shown in Table 1, the transformation of phenylhydrazine to aniline was chosen as a model reaction to optimize the reaction conditions. To avoid interference from air oxidation, the reaction was performed under an argon atmosphere. The low-valent titanium reagent was first prepared by stirring a mixture of TiCl4 and Mg powder in THF for 2 h at room temperature. The molar ratio of TiCl4 to Mg was kept below 1:2 to ensure the reduction of TiCl4. Then phenylhydrazine was added, and the reduction was performed at room temperature and monitored by TLC. Different molar ratios of hydrazine/TiCl4/Mg were explored, and we found that the molar ratio affected the product yield and reaction time significantly. When the molar ratio of phenylhydrazine/TiCl4/Mg was 1:1:2.5, the reaction was complete in 1 h, affording aniline in 93% yield (Table 1, entry 1). Increasing the amount of TiCl4 from a phenylhydrazine/TiCl4/Mg molar ratio of 1:1:2.5 to 1:4:10 led to completion of the reaction within 20 min, albeit the yield of aniline was not improved significantly (Table 1, entry 2). Interestingly, when the amount of TiCl4 was reduced to phenylhydrazine/TiCl4/Mg molar ratios of 1:0.4:2.5 and 1:0.2:2.5, the reaction went smoothly as well, giving aniline in 92% yield in 2 h (Table 1, entry 3) and 90% in 4 h (Table 1, entry 4), respectively. When the amount of TiCl4 was further decreased to a phenylhydrazine/TiCl4/Mg molar ratio of 1:0.1:2.5, phenylhydrazine was not completely consumed even after 12 h according to TLC monitoring, and aniline was obtained in 71% yield (Table 1, entry 5). A control reaction was run without TiCl4, and no reaction occurred (Table 1, entry 6). Thus the optimized reaction conditions involved a hydrazine/TiCl4/Mg molar ratio of 1:0.4:2.5 in anhydrous THF under an argon atmosphere at ambient temperature.
With the effective reaction conditions in hand, we next investigated the generality of the reaction with respect to hydrazines. As shown in Table 2, monosubstituted hydrazines (Table 2, entries 1-6), N,N-disubstituted hydrazines (Table 2, entries 7-15), and symmetrical and unsymmetrical N,N′-disubstituted hydrazines (Table 2, entries 16-18) could be reductively cleaved to the corresponding primary and secondary amines by the low-valent titanium reagent in THF at room temperature with a hydrazine/TiCl4/Mg molar ratio of 1:0.4:2.5; the yields were generally excellent.
For N,N,N′,N′-tetrasubstituted hydrazines (Table 2, entries 19-22), however, a hydrazine/TiCl4/Mg molar ratio of 1:4:10 and a higher reaction temperature of 60 °C were required for complete conversion of the substrates, probably because of the steric hindrance of the tetrasubstituted hydrazines. Alkylhydrazines (Table 2, entries 6-9) underwent the reaction as well as arylhydrazines. It is noteworthy that common functional groups such as methoxyl (Table 2, entry 3), chloride (Table 2, entry 4), ester (Table 2, entry 5), and C,C double bonds (Table 2, entries 11 and 20), which are usually susceptible to hydrogenation and hydroboration, as well as benzyl-nitrogen bonds (Table 2, entries 12, 18 and 21), were all well tolerated in the reaction. Interestingly, we found that the solvent had an important impact on the reaction. Acylhydrazines gave complex products by TLC monitoring when the reduction was carried out in THF. Fortunately, when the reaction was performed in a mixed solvent of CH2Cl2/Et2O (4:1, v/v) at room temperature with a hydrazine/TiCl4/Mg molar ratio of 1:4:10, acylhydrazines, including a benzyloxycarbonyl derivative (Table 3, entry 4), underwent the reaction, furnishing the desired products in good yields (Table 3, entries 1-4).
[Table column headings: Entry; Substituent; Amine a, yield (%); Amine b, yield (%).]
The N,N bond in hydrazones could also be cleaved under these conditions, giving the corresponding amine and carbonyl compound in good yields (Table 3, entry 5). A similar situation was encountered with trisubstituted hydrazines, which did not react completely in THF. However, the reduction of trisubstituted hydrazines took place smoothly when performed in a mixed solvent of CH2Cl2/Et2O (4:1, v/v) at 35 °C with a hydrazine/TiCl4/Mg molar ratio of 1:4:10 (Table 3, entries 6-9). This observation is not surprising, since it has been well documented in the literature that low-valent titanium reagents prepared in different solvents may exhibit distinct activity. 27
Conclusions
In summary, we have established an efficient method for the reduction of hydrazines to the corresponding amines under mild conditions by using a low-valent titanium reagent conveniently prepared in situ from TiCl4 and Mg powder. The reaction solvent and the hydrazine/TiCl4/Mg molar ratio affected the reaction significantly. Acylhydrazines and trisubstituted hydrazines were reduced efficiently in a solvent mixture of CH2Cl2-Et2O with a hydrazine/TiCl4/Mg molar ratio of 1:4:10, tetrasubstituted hydrazines were reduced in THF with a molar ratio of 1:4:10, and the other hydrazines were reduced in THF with a molar ratio of 1:0.4:2.5. The reaction was compatible with common functionalities such as chloride, methoxyl, ester, acyl, C,C double bonds, benzyl-nitrogen bonds, and benzyloxyl groups, providing a variety of amines in good to excellent yields.
Experimental Section
General. All reagents were purchased from commercial suppliers and were used without further purification. THF and diethyl ether were freshly distilled from Na/benzophenone, and DCM was distilled from CaH2. The hydrazines used in this study were prepared following literature procedures (see Supplementary Information). Column chromatography was performed with silica gel (200-300 mesh). Thin layer chromatography was carried out using Merck silica gel GF254 plates. NMR spectroscopy was performed on a Bruker (400 MHz or 600 MHz) spectrometer using TMS as an internal standard. GC-MS analyses were made on an Agilent Technologies 6890-5973N GC-MS spectrometer with EI ionization at 70 eV. MS analyses were made on a Bruker Daltonics BioTOF-Q mass spectrometer using ESI or MALDI-TOF ionization. All products are known and were characterized by comparing their 1H and 13C NMR spectroscopic data with data reported in the literature. 26
General procedure for the reduction of hydrazines to amines with the low-valent titanium reagent in THF. TiCl4 (0.44 mL, 4 mmol) was added dropwise with a syringe to a stirred mixture of Mg powder (0.6 g, 25 mmol) in dry THF (100 mL) at 0 °C under an argon atmosphere. The resulting mixture was allowed to warm to ambient temperature and stirred for 2 h. Hydrazine (10 mmol) in THF (25 mL) was then added, and the mixture was stirred at ambient temperature for another 2-6 h. The resulting solution was made alkaline by addition of saturated aqueous NaHCO3 solution and filtered through celite. The filtrate was extracted with CH2Cl2 repeatedly. The organic phases were combined and dried with MgSO4. The solvent was evaporated, and the residue was purified by column chromatography on silica gel using CH2Cl2 and petroleum ether as eluent to give the amines. For tetrasubstituted hydrazines, TiCl4 (4.4 mL, 40 mmol) and Mg powder (2.4 g, 0.1 mol) were used, and after addition of the hydrazine the mixture was stirred at 60 °C instead.
General procedure for the reduction of hydrazines to amines with the low-valent titanium reagent in CH2Cl2-Et2O. TiCl4 (0.88 mL, 8 mmol) was added slowly with stirring to a mixed solvent of CH2Cl2 and Et2O (20 mL, 4:1, v/v), and a large amount of yellow precipitate formed. Magnesium powder (0.48 g, 20 mmol) was then added under Ar, and stirring was continued for 2.5 h at room temperature. The yellow mixture turned black gradually. Hydrazine (2 mmol) in CH2Cl2 (5 mL) was added at room temperature, and the mixture was stirred for another 2-6 h at room temperature for acylhydrazines, or at 35 °C for trisubstituted hydrazines. The resulting solution was made alkaline with saturated NaHCO3 solution and filtered through celite. The filtrate was extracted with CH2Cl2 repeatedly. The combined organic phases were dried with MgSO4. After removal of the solvent, the residue was purified by column chromatography on silica gel using CH2Cl2 and petroleum ether as eluent to give the amines.
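As a simple arithmetic cross-check (no new experimental information), the amounts quoted in the two general procedures reproduce the molar ratios used in the optimization and scope studies:

\[
\text{THF: } 10 : 4 : 25\ \text{mmol} = 1 : 0.4 : 2.5, \qquad
\text{THF (tetrasubstituted): } 10 : 40 : 100\ \text{mmol} = 1 : 4 : 10,
\]
\[
\mathrm{CH_2Cl_2\text{-}Et_2O:}\ \ 2 : 8 : 20\ \text{mmol} = 1 : 4 : 10 \qquad (\text{hydrazine} : \mathrm{TiCl_4} : \mathrm{Mg}).
\]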
Table 1. Reduction of phenylhydrazine to aniline with the low-valent titanium reagent. a 10 mmol of phenylhydrazine was used. b Average isolated yield of two runs.
"Chemistry"
] |
Contact Semi-Riemannian Structures in CR Geometry : Some Aspects
There is a one-to-one correspondence between contact semi-Riemannian structures (η, ξ, φ, g) and non-degenerate almost CR structures (H, θ, J). In general, a non-degenerate almost CR structure is not a CR structure, that is, in general the integrability condition for H 1,0 := {X − iJX, X ∈ H} is not satisfied. In this paper we give a survey of some known results, together with some new results, on the geometry of contact semi-Riemannian manifolds, also in the context of the geometry of Levi non-degenerate almost CR manifolds of hypersurface type, emphasizing similarities and differences with respect to the Riemannian case.
Introduction
Contact (semi-)Riemannian geometry and (almost) CR geometry are two fields of research that have been developed independently of each other, and with different motivations.However, the two theories are quite related to each other.We note that there is not a monograph dedicated to contact semi-Riemannian structures which emphasizes its connection with the non-degenerate almost CR structures.
We can say that the contact geometry begins with Sophus Lie (1872) when he introduced the notion of a contact transformation as a geometric tool to study systems of differential equations (we refer to H. Geiges [1] for an overview of the historical origins of contact geometry).
The study of contact manifolds from the Riemannian point of view was introduced in the 60's of the last century by the Japanese school, with S. Sasaki as leader.From then, contact manifolds equipped with Riemannian metrics have been intensively studied.The odd dimensional spheres S 2n+1 and the unit tangent sphere bundles T 1 M of Riemannian manifolds are the most known examples of contact Riemannian manifolds.
The monograph of D.E.Blair [2] and the monograph of C. Boyer and K. Galicki [3] give a wide and detailed overview of the results obtained in this framework.Contact manifolds equipped with semi-Riemannian metrics were first introduced and studied by T. Takahashi [4], who focused on the Sasakian case.In particular, the author discussed the classification of Sasakian semi-Riemannian manifolds of constant ϕ-sectional curvature κ = −3.The relevance in physics of contact semi-Riemannian structures was pointed out in K.L. Duggal [5] (see also H. Baum [6]).A systematic study of contact semi-Riemannian manifolds started with the paper of G. Calvaruso and D. Perrone [7] (see also [8]).
The paper of S.S. Chern and J. Moser [9] on the real hypersurfaces in complex manifolds, and the works by Tanaka [10] and S. Webster [11], have made an important contribution to the development of CR geometry (also in terms of pseudohermitian geometry).Then, (almost) CR structures have drawn a great amount of interest for their connection with several different research areas in both analysis and geometry (see the monograph of S. Dragomir and G. Tomassini [12] for a wide and detailed overview of CR structures).
If θ is a contact 1-form on an odd dimensional manifold and J is an almost complex structure, i.e., J 2 = −I, defined on the contact distribution H = ker θ, such that the Levi form L θ = dθ(•, J•) is a non-degenerate Hermitian form H, then (θ, J) is said to be a non-degenerate almost CR structure.Different signatures of the Levi form L θ correspond to different kind of geometries.There is one-to-one correspondence between contact semi-Riemannian structures and non-degenerate almost CR structures.In general, a non-degenerate almost CR structure is not a CR structure, that is, in general the integrability condition for H 1,0 := {X − iJX, X ∈ H} is not satisfied.CR structures are considered mainly from a complex analytical point of view.
In this paper (which reflects the interests and knowledge of the author) we give a survey on some known results, with additions of some new result, on the geometry of contact semi-Riemannian manifolds, also in the context of the geometry of Levi non-degenerate almost CR manifolds of hypersurface type, emphasizing similarities and differences with respect to the Riemannian case.In particular, we explain the relationship between contact semi-Riemannian structures and non-degenerate pseudohermitian structures, describing also in some detail several important examples, like hypersurfaces of indefinite Kähler manifolds, and tangent hyperquadric bundles over semi-Riemannian manifolds.
The author believes that this paper will be useful especially to mathematician interested in contact Riemannian geometry, as developed for instance in D. Blair's book [2], who want to have a comprehensive look at the main differences between the strictly pseudo-convex setting and the semi-Riemannian setting.
Generality on Contact Semi-Riemannian Manifolds
A (2n + 1)-dimensional manifold M is said to be a contact manifold if it admits a contact form, that is, a global 1-form η such that η ∧ (dη)^n ≠ 0. Given a contact form η, there exists a unique vector field ξ, called the characteristic vector field or the Reeb vector field, such that η(ξ) = 1 and dη(ξ, ·) = 0. Furthermore, a semi-Riemannian metric g is said to be an associated metric (for the contact form η) if there exists a tensor ϕ of type (1,1) satisfying the compatibility conditions with η, ξ and g recalled below, and in this case g(ξ, ξ) = ε = ±1. In such a case, (η, ξ, ϕ, g), or (η, g), is called a contact semi-Riemannian structure, or contact pseudo-metric structure.
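For concreteness, one standard way to state the compatibility conditions of an associated metric (following the conventions of [4,7]; signs and the placement of ε vary slightly in the literature) is:

\[
\varphi^2 = -I + \eta \otimes \xi, \qquad
d\eta(X,Y) = g(X,\varphi Y), \qquad
g(\varphi X,\varphi Y) = g(X,Y) - \varepsilon\,\eta(X)\eta(Y),
\]
with $\eta(X) = \varepsilon\, g(X,\xi)$ and $\varepsilon = g(\xi,\xi) = \pm 1$.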
Special contact semi-Riemannian manifolds are the following.
Since Ω is closed, (M̄, J, ḡ) is an almost pseudo-Kaehler structure. By using also the Sasakian condition, one can show that J is parallel, that is, the structure on the cone is pseudo-Kaehler. Besides, the converse statement also holds. In other words, there is a one-to-one correspondence between pseudo-Sasakian structures (η, ξ, ϕ, g), with g(ξ, ξ) = ε, on M, and pseudo-Kaehler structures (J, ḡ) on the ε-cone M̄. Moreover, the pseudo-Sasakian manifold is Einstein (respectively, of constant sectional curvature) if and only if the corresponding ε-cone M̄ is Ricci-flat (respectively, flat).
• K-contact manifolds are contact semi-Riemannian manifolds (M, η, ξ, ϕ, g) whose Reeb vector field ξ is a Killing vector field, or equivalently, h = 0. Any Sasakian semi-Riemannian manifold is K-contact and the converse also holds when n = 1.
•
H-contact manifolds.The condition that ξ be an eigenvector of the Ricci operator is a very natural condition in contact Riemannian geometry.Sasakian manifolds, K-contact manifolds, (κ, µ)-spaces and locally ϕ-symmetric spaces satisfy this curvature condition.One of the more important interpretations of this condition is that of an H-contact manifold as introduced by the present author in [15].Recall that on a Riemannian manifold (M, g), a unit vector field V is said to be a harmonic vector field if V : (M, g) → (T 1 M, G), where G is the Sasaki metric (cf.Section 5.1), is a critical point for the energy functional restricted to maps defined by unit vector fields (see the recent monograph [16], and references therein).If (M, g) is a semi-Riemannian manifold the same argument applies for vector fields of constant length (if is not light-like).The critical point condition which defines a harmonic vector field is: " ∆V is collinear to V", where ∆V is the so called rough Laplacian of V. H-contact semi-Riemannian manifolds are contact semi-Riemannian manifolds whose Reeb vector field ξ is harmonic, besides we have that (see [15,17]): a contact semi-Riemannian manifold is H-contact if and only if ξ is a Ricci eigenvector.The class of H-contact semi-Riemannian manifolds extends the classes of Sasakian and K-contact semi-Riemannian manifolds.Results about the classification of H-contact Riemannian three-manifolds are given in [18] and in the recent paper of Cho [19].
Remark 1. Sasakian structures, K-contact structures, and H-contact structures are preserved by the transformation Equation (8).In fact, the normality condition and the tensor h = 1 2 L ξ ϕ do not depend on the metric, so that (M, η, g) is Sasakian (respectively K-contact) if and only if (M, η, ḡ) is.Moreover, by using Equation (10), A difference between the Riemannian case and the general semi-Riemannian one is the following: in both cases, from Equation (6), trh 2 = 0 implies Ric(ξ, ξ) = 2n.But,
•
K-contact Riemannian manifolds are characterized by the condition Ric(ξ, ξ) = 2n, since it implies tr h 2 = 0 and so h = 0 (because in the Riemannian case h is diagonalizable); • in the semi-Riemannian case the condition tr h 2 = 0 does not imply h = 0. On the contrary, there exist contact semi-Riemannian manifolds for which tr h 2 = 0 but h ≠ 0, and contact semi-Riemannian manifolds for which h 2 = 0 but h ≠ 0 (see Examples 3 and 5).
Recall that there is a canonical way to associate a contact Riemannian structure to a contact Lorentzian structure (and conversely). Let (η, ξ, ϕ, g_L) be a contact Lorentzian structure on a smooth manifold M, where the Reeb vector field ξ is time-like. Then, g = g_L + 2η ⊗ η is a Riemannian metric, and is still compatible with the same contact structure (η, ξ, ϕ). Moreover, in such a case g(ξ, ξ) = −g_L(ξ, ξ) = +1. Hence, (η, ξ, ϕ, g) is a contact Riemannian structure on M. We remark that g_L is related to g_{−1}, where g_{−1} is obtained by the D-homothetic deformation of g for t = −1 (see the sketch after this paragraph). Consequently, the Levi-Civita connection and curvature of g_L can easily be deduced from the formulae valid for a general D-homothetic deformation. In particular, if ∇^L is the Levi-Civita connection of g_L and ∇ is the Levi-Civita connection of g, the two connections are related by explicit formulae. Taking into account that in the Lorentzian case the tensor h is diagonalizable, for a unit vector field X ∈ ker η with hX = λX, Equations (18) and (19) yield the corresponding curvature formulae. Moreover, a contact Lorentzian manifold is Sasakian (respectively K-contact, H-contact) if and only if the corresponding contact Riemannian manifold is so.
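For the reader's convenience, a standard form of the D_t-homothetic deformation used above (for a real constant t ≠ 0; the ε-factor below is one common convention in the semi-Riemannian case) is:

\[
\eta_t = t\,\eta, \qquad \xi_t = \tfrac{1}{t}\,\xi, \qquad \varphi_t = \varphi, \qquad
g_t = t\,g + \varepsilon\, t(t-1)\,\eta\otimes\eta ,
\]
so that $g_t(\xi_t,\xi_t) = \varepsilon$. In particular, for a Riemannian associated metric $g$ (so $\varepsilon = +1$) and $t = -1$, one gets $g_{-1} = -g + 2\,\eta\otimes\eta = -g_L$, which relates the Lorentzian metric $g_L = g - 2\,\eta\otimes\eta$ above to a D-homothetic deformation, up to an overall sign depending on conventions.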
A contact semi-Riemannian manifold is called η-Einstein if its Ricci tensor has the form Ric = a g + b η ⊗ η for smooth functions a and b (see the display below). In particular, the Ricci tensor of an η-Einstein K-contact Riemannian structure (η, g) is of this form with coefficients determined by the scalar curvature r, which is constant when n > 1, and g is Einstein if and only if r = 2n(2n + 1).
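Explicitly, one standard way to write the η-Einstein condition, and its K-contact Riemannian specialization consistent with the Einstein criterion r = 2n(2n + 1), is:

\[
\mathrm{Ric} = a\,g + b\,\eta\otimes\eta \quad (a,b \in C^\infty(M)), \qquad
\mathrm{Ric} = \Big(\frac{r}{2n}-1\Big)\, g + \Big(2n+1-\frac{r}{2n}\Big)\,\eta\otimes\eta ,
\]
the second expression being the η-Einstein K-contact Riemannian case (it uses Ric(ξ, ξ) = 2n); indeed, the second coefficient vanishes exactly when r = 2n(2n + 1).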
Then, from Equations ( 20) and ( 21), the Ricci tensor of the corresponding Lorentzian K-contact structure (η, g L ) is given by where the scalar curvature r L = r + 4n is a constant when n > 1. Hence (η, g L ) is η-Einstein K-contact, and g L is Einstein if and only if r L = −2n(2n + 1).
In dimension three, every K-contact structure (η, g) is automatically Sasakian and η-Einstein, and thus by Equation ( 22) also every K-contact Lorentzian three-manifold is automatically Sasakian and η-Einstein.Moreover, for a K-contact Lorentzian three-manifold, the scalar curvature r L and the ϕ-sectional curvature H L are related by r L = 2H L − 4.
Recall that a Lorentzian Sasakian manifold (M, g, η) is Einsteinian if and only if the cone M is Ricci-flat.Moreover, geometries of this type are interesting because they provide examples of twistor spinors on Lorentzian manifolds (see, for example, Ref. [6,14]).In particular, in [6] a twistorial characterization of Einstein Lorentzian Sasakian manifolds is given .
If (η, g) (resp. (η, g_L)) is Einstein K-contact, then (η, g_L) (resp. (η, g)) is η-Einstein K-contact. Now, we see how the η-Einstein Lorentzian K-contact structures are related to the Einstein Lorentzian Sasakian structures. Let (η, g_L) be a Lorentzian K-contact structure on M with ξ time-like and dim M = 2n + 1; from Equations (18) and (19) one computes the Ricci tensor of the deformed structure. If in addition (η, g_L) is η-Einstein, since n > 1, then the scalar curvature r_L is a constant and the Ricci tensor of the new Lorentzian K-contact structure (η̃, g̃_L) is again of η-Einstein type. So, for any t > 0, the Lorentzian K-contact structure (η̃, g̃_L) is η-Einstein. Recall that the function r̃ = r_L − 2n = r + 2n is the so-called Webster scalar curvature of (η, g) and (η, g_L) (see Section 3.3). Now, if the scalar curvature r_L of the η-Einstein Lorentzian K-contact manifold (η, g_L) satisfies r_L < 2n, i.e., r̃ < 0, then for a suitable value of t the Lorentzian K-contact structure (η̃ = η_t, g̃_L = (g_L)_t) has constant Webster scalar curvature r̃, and we have the following.
From this Theorem and Proposition 6.2 of [6], we get the following Theorem 2. ([22]) Let (M, η, ξ, g L ) be a simply connected η-Einstein Lorentzian Sasakian manifold of dimension 2n + 1 > 3 and with scalar curvature r L < 2n, i.e., r < 0.Then, there exists a transverse homothety whose resulting Lorentzian manifold (M, gL ) is a spin manifold.Moreover, there exists a twistor spinor φ which is an imaginary Killing spinor and the associated vector field V φ (the Dirac current) is ξ.
Any connected sum of copies of S 2 × S 3 admits a Lorentzian Sasaki-Einstein structure [23]. Now, we give the following.
Curvature of K-Contact (and Sasakian) Semi-Riemannian Manifolds
In the contact Riemannian case, the following curvature condition characterizes the Sasakian structures. In the semi-Riemannian case any Sasakian manifold satisfies Equation (23), but there is no proof of the converse, and we do not know examples of non-Sasakian contact semi-Riemannian manifolds satisfying Equation (23). A partial result for this problem is given by the following (cf. [22]).
Theorem 4. If a semi-Riemannian manifold (M, g) admits a Killing vector field ξ, g(ξ, ξ) = ε = ±1, such that the sectional curvature of all non-degenerate plane sections containing ξ equals ε, then M carries a K-contact semi-Riemannian structure.
In the same paper [22], we proved Theorem 5. Any conformally flat K-contact semi-Riemannian manifold is Sasakian and of constant sectional curvature κ = ε.
Proof. For a K-contact semi-Riemannian manifold, from Equations (4) and (5) we obtain the corresponding curvature identity. Moreover, for a locally symmetric semi-Riemannian manifold ∇R = 0. Combining these, replacing X by ϕX, then replacing Z by ϕZ in the resulting identity, and finally using Equation (24), we obtain Equation (25). Now, let p be an arbitrary point and span(X_p, Y_p) an arbitrary non-degenerate plane. Then, from Equation (25), the sectional curvature of this plane equals ε. Therefore (M, η, ξ, ϕ, g) is a K-contact semi-Riemannian manifold of constant curvature ε. Then, by using Theorem 5, we conclude that the manifold is also Sasakian. Remark 2. Theorems 4-6, which include in particular the Lorentzian case, give results analogous to the Riemannian case (see [2]).
If (M, g) is a semi-Riemannian manifold which admits a Killing vector field X_0 of constant length, g(X_0, X_0) = c ≠ 0, such that the sectional curvature of all non-degenerate plane sections containing X_0 equals c, then ξ = (1/(εc))X_0 and g̃ = εc g, where ε = +1 if c > 0 and ε = −1 if c < 0, satisfy the conditions of Theorem 4. Then, by using Theorems 4-6, we get the following (which extends Corollary 4.3 of [22]).
Theorem 7. Let (M, g) be a semi-Riemannian manifold which admits a Killing vector field X_0 of constant length, g(X_0, X_0) = c ≠ 0, such that the sectional curvature of all non-degenerate plane sections containing X_0 equals c. Then, the following properties are equivalent.
Geometry of H-Contact Semi-Riemannian Manifolds
H-contact semi-Riemannian manifolds are related to the contact semi-Riemannian manifolds whose Reeb vector field is an infinitesimal harmonic transformation.Recall that a vector field V on a semi-Riemannian manifold (M, g) is called an infinitesimal harmonic transformation (in short i.h.t.) if the one-parameter group of local transformations generated by V are local harmonic diffeomorphisms.Moreover, V is an i.h.t.if and only if tr(L V ∇) = 0 (see [24,25]), where for all tangent vector fields X, Y.With respect to a pseudo-orthonormal basis {E 1 , . . ., E m } of (M, g), we have tr(L V ∇) where ∆ is the rough Laplacian.Thus, a vector field V is an i.h.t.if and only if ∆V = QV.Now, let (M, η, g, ξ, ϕ) be an arbitrary contact semi-Riemannian manifold.Then, we have (cf.[15,17]) Besides, by using Equation (6), we get from which we get the following (cf.[17]).
In the Riemannian case, ξ is an i.h.t. if and only if M is K-contact. In the semi-Riemannian case this equivalence no longer holds: in fact, the following is an example of a contact semi-Riemannian manifold where ξ is an i.h.t. but is not Killing.
Example 3. ([17])
We consider the 5-dimensional connected Lie group G whose Lie algebra g admits a suitable basis {E_0, E_i, V_i} with prescribed brackets. Consider the semi-Riemannian left-invariant metric g for which {E_0, E_i, V_i} is a pseudo-orthonormal basis, and define the left-invariant tensors ξ, η, and ϕ on G accordingly. Then, the metric g described in Equation (28), together with the tensors described in Equation (29), defines a left-invariant contact semi-Riemannian structure (η, g, ξ, ϕ) on G. This contact semi-Riemannian structure is H-contact and satisfies tr h 2 = 0 with h ≠ 0. Remark 3. The class of contact semi-Riemannian manifolds with ξ an i.h.t. is invariant under D-deformations. In fact, the class of H-contact semi-Riemannian manifolds is invariant, as is the vanishing of tr h 2. The Lorentzian case. Let (M, η, ξ, ϕ, g) be a contact semi-Riemannian manifold, and ḡ the metric associated to (η, ξ, ϕ) described by Equation (8). Then, as remarked in Section 2, (M, η, ξ, ϕ, ḡ) is H-contact if and only if (M, η, ξ, ϕ, g) is H-contact. In particular, there exists a one-to-one correspondence between H-contact Riemannian manifolds and H-contact Lorentzian manifolds. It follows that the class of H-contact Lorentzian manifolds is very large. Note that, just like in the Riemannian case, for a contact Lorentzian manifold with ξ time-like one has tr h 2 = 0 if and only if h = 0 [7]. Hence, using Equation (8) and the corresponding result valid in the Riemannian case ([26], Theorem 4.1), we have the following result. Proposition 2. Let (M, η, ξ, ϕ, g) be a contact Lorentzian manifold with ξ time-like. Then, the following properties are equivalent. Remark 4. Let (M, g) be a Lorentzian manifold and V a unit time-like vector field on M. The space-like energy of V is defined as the integral of the square norm of the restriction of ∇V to the space-like distribution V⊥. A unit time-like vector field V which is a critical point of the space-like energy is called a spatially harmonic vector field. If V is a time-like unit geodesic vector field, then it is spatially harmonic if and only if it is a harmonic vector field ([16], Chapter 8, and [27]). On the other hand, the Reeb vector field of a contact semi-Riemannian manifold is geodesic. Thus, we have the following result [17]: a contact Lorentzian manifold with ξ time-like is H-contact if and only if ξ is spatially harmonic.
Remark 5. We note that the Reeb vector field of a three-dimensional contact Riemannian manifold (M 3 , η, ξ, ϕ, g) defines a harmonic map from M to T 1 M if and only if it is H-contact and ξ(λ) = 0, where λ, −λ are the nontrivial eigenvalues of tensor h [18].The same characterization holds in the contact Lorentzian case (in fact, for the corresponding contact Lorentzian manifold we have h = h).Then, it is natural to ask which are the H-contact Lorentzian three-manifolds for which λ is a constant (equivalently, the Ricci eigenvalue related to ξ is constant).In the Riemannian case, it follows from the proof of Theorem 1.2 in [18] that a three-dimensional contact Riemannian manifold is H-contact with constant Ricci eigenvalue if and only if either it is Sasakian or is locally isometric to a unimodular Lie group G equipped with a non-Sasakian left-invariant contact metric structure.Then, a contact Lorentzian three-manifold is H-contact with constant Ricci eigenvalue (related to ξ) if and only if either it is Sasakian or is locally isometric to a unimodular Lie group G equipped with a non-Sasakian left-invariant contact Lorentzian structure.A complete classification of simply connected homogeneous contact Lorentzian three-manifolds will be given in Section 4.
Since the work of Hamilton, and especially Perelman's proof of the Poincaré conjecture, there has been considerable interest in the Ricci flow and its applications. For an introduction to the Ricci flow we refer to the book of B. Chow and D. Knopf [28]. Ricci solitons have been intensively studied in recent years, particularly because of their relationship with the Ricci flow. For examples and more details on Ricci solitons in semi-Riemannian settings, we refer for example to [29] and the references therein. A Ricci soliton is a semi-Riemannian manifold (M, g) admitting a vector field V such that Equation (30) holds for some real constant µ (a standard form of this equation is recalled below). A Ricci soliton is said to be shrinking, steady, or expanding, according to whether µ > 0, µ = 0 or µ < 0, respectively. Clearly, an Einstein manifold, together with a Killing vector field, is a trivial solution of Equation (30). As proved in the paper [30], any Riemannian Ricci soliton is an infinitesimal harmonic transformation, and it is easily seen that the same argument applies to the semi-Riemannian case. By definition, a contact (semi-Riemannian) Ricci soliton is a contact semi-Riemannian manifold (M, η, ξ, ϕ, g) for which Equation (30) is satisfied by V = ξ. Since (L_ξ g)(ξ, X) = 0, from Equation (30) with V = ξ we have that the Reeb vector field of a contact semi-Riemannian manifold satisfies Qξ = µξ.
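A commonly used form of the Ricci soliton equation (Equation (30) in the text; sign conventions vary in the literature) is:

\[
\mathrm{Ric} + \tfrac{1}{2}\,\mathcal{L}_V g = \mu\, g ,
\]
so that an Einstein metric together with a Killing vector field V is a trivial solution, and the soliton is shrinking, steady or expanding according to whether µ > 0, µ = 0 or µ < 0.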
So, a contact semi-Riemannian Ricci soliton is H-contact with constant Ricci eigenvalue.On the other hand, if (M, η, ξ, ϕ, g) is a contact semi-Riemannian Ricci soliton, then ξ is an infinitesimal harmonic transformation.Hence, by Theorem 8, Qξ = µξ with µ = 2nε = ±2n and we get the following result [17]: Theorem 9. A (2n + 1)-dimensional contact semi-Riemannian Ricci soliton is H-contact: Qξ = ±2nξ, and it is either shrinking or expanding, according to the causal character of the Reeb vector field.
In Riemannian setting, the above Theorem yields a much stronger rigidity result.In fact, by Theorem 8 we have trh 2 = 0, that is h = 0. So, by using Equation ( 30), we have the following result (see [31] and also [22]).
Corollary 1. A contact Riemannian manifold is a contact Ricci soliton if and only if it is K-contact and Einstein.
Recall the following result of C. Boyer and K. Galicki (see [21]): A compact K-contact Einstein manifold is Sasakian Einstein.Therefore, from Corollary 1 we get the following Theorem 10.A compact, contact Riemannian Ricci soliton is Sasakian Einstein.
Moreover, by Theorem 9 and Proposition 2, we deduce the following Lorentzian analogue of Corollary 1.
Corollary 2. Let (M, η, ξ, g, ϕ) be a contact Lorentzian manifold with ξ time-like.Then, (M, η, ξ, ϕ, g) is a contact Ricci soliton if and only if it is Einstein and K-contact.
By Corollary 1, only trivial contact Ricci solitons occur in Riemannian settings.On the other hand, the above Theorem 9 specifies that semi-Riemannian Ricci solitons must be found among H-contact manifolds, but this does not exclude the existence of nontrivial contact semi-Riemannian Ricci solitons.As explicitly remarked in [17], the left-invariant contact semi-Riemannian structure described in Example 3 is H-contact (and ξ is also an infinitesimal harmonic transformation), but not a contact Ricci soliton.Hence, the class of semi-Riemannian contact Ricci solitons is strictly included in the one of H-contact semi-Riemannian manifolds satisfying Qξ = ±2nξ.
Non-Degenerate Almost CR Structures
Almost CR structures have drawn a great amount of interest for their connection with several different research areas in both analysis and geometry (Dragomir-Tomassini [12]). In this section we will emphasize some aspects of their connection with contact semi-Riemannian structures.
Generality on Almost CR Structures
Let M be a (2n + 1)-dimensional manifold. An almost CR structure (of hypersurface type) on M is a pair (H = H(M), J), where H is a smooth real subbundle of rank 2n of the tangent bundle TM (also called the Levi distribution), and J : H → H is an almost complex structure, i.e., J 2 = −I. Starting from an almost contact structure (η, ξ, ϕ), the pair (H = ker η, J = ϕ |H) defines a corresponding almost CR structure on M. It is a natural question to ask when an almost CR structure (H, J) permits one to reconstruct an almost contact structure (η, ξ, ϕ) such that (H = ker η, J = ϕ |H). The answer is given by the following result. Proposition 3. Let M denote an odd-dimensional manifold. An almost CR structure (H, J) on M is induced by an almost contact structure (η, ξ, ϕ) if and only if M admits a (globally defined) vector field X_0, transversal to H at any point.
Conversely, suppose that (H, J) is an almost CR structure, admitting a global vector field X 0 transversal to it.Then, it is enough to define ξ, η and ϕ by for any vector field X ∈ H.It is then easy to check that (η, ξ, ϕ) is an almost contact structure, and (H, J) = (kerη, ϕ |kerη ).
Given an almost CR structure, let E x ⊂ T * x (M) be the subspace consisting of all pseudohermitian structures on M at x ∈ M. Then E = x∈M E x is (the total space of) a real line subbundle of the cotangent bundle T * (M) and the pseudohermitian structures are the globally defined nowhere zero C ∞ sections in E. If M is oriented then E is trivial i.e., E ≈ M × R (a vector bundle isomorphism).Therefore E admits globally defined nowhere vanishing sections, equivalently any orientable almost CR manifold admits a 1-form θ such that kerθ = H (cf., for example, Ref. [12] Section 1.1.2).On the other hand the existence of a (globally defined) vector field X 0 , transversal to H at any point is equivalent to the existence of a 1-form θ such that kerθ = H.In fact, given θ with kerθ = H, we consider a Riemannian metric g on M (which is paracompact) and then define X 0 by g(X 0 , X) = θ(X) for any vector field X.As such, X 0 is transversal to H at any point because kerθ = H.Hence, by using Proposition 3, we get the following Proposition 4. On an orientable odd-dimensional manifold, the existence of an almost CR structure is equivalent to the existence of an almost contact structure.
Let (M, H, J) be an almost CR manifold. Put H 1,0 := {X − iJX : X ∈ H} and H 0,1 := {X + iJX : X ∈ H}; that is, H 1,0 (resp. H 0,1) is the eigenbundle of J^C (the C-linear extension of J to H^C = H ⊗ C) corresponding to the eigenvalue i (resp. −i). Then the complexification H^C can be decomposed into the direct sum of the (±i)-eigendistributions of J^C, as displayed below. Definition 1. An almost CR structure (M, H, J) is said to be a CR structure on M if H 1,0 (and hence also H 0,1) is (formally) integrable, in the sense of the bracket condition recalled below, for any open set U ⊂ M.
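In symbols, the decomposition and the formal integrability condition in Definition 1 read (a standard formulation):

\[
H\otimes\mathbb{C} = H^{1,0}\oplus H^{0,1}, \qquad
H^{1,0} = \{X - iJX : X\in H\}, \qquad H^{0,1} = \overline{H^{1,0}},
\]
\[
\big[\,\Gamma(U,H^{1,0}),\,\Gamma(U,H^{1,0})\,\big] \subseteq \Gamma(U,H^{1,0})
\quad\text{for any open set } U\subset M .
\]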
CR structures are considered mainly from a complex analytical point of view. It is easy to see that an almost CR structure (M, H, J) is a CR structure if and only if the two conditions in Equations (31) and (32) hold. Of course, if dim M = 3, any almost CR structure is integrable (in dimension three, the integrability conditions are trivially satisfied). Moreover, if M is a real hypersurface of a complex manifold then the induced almost CR structure is CR (cf. [12], Proposition 1.1). Therefore, an integrable (codimension one) CR structure (H, J) is often called a CR structure of hypersurface type. Remark 6. Another way to define an almost CR structure is the following. Let M be a real (2n + 1)-dimensional manifold. An almost CR structure on M is a complex subbundle T 1,0 (M), of complex rank n, of the complexified tangent bundle T(M) ⊗ C such that T 1,0 (M) ∩ T 0,1 (M) = (0), where T 0,1 (M) is the complex conjugate of T 1,0 (M) (overbars denote complex conjugates). The integer n is the CR dimension. An almost CR structure T 1,0 (M) is integrable, and is then a CR structure, if the bracket of sections of T 1,0 (M) remains in T 1,0 (M). Then, T 1,0 (M) = {X − iJX : X ∈ H} = H 1,0, i.e., T 1,0 (M) is the eigenbundle of J^C (the C-linear extension of J to H ⊗ C) corresponding to the eigenvalue i. The pair (H, J) is the real manifestation of T 1,0 (M).
Proof.(1) We have to show that the condition S = 0 is equivalent to the integrability conditions Equations ( 31) and (32).Since (31).Replacing X by JX, we get ([X, JY] + [JX, Y]) ∈ H, and so Equation (33) becomes Thus S(X, Y) = 0 implies the condition Equation (32).Conversely, if Equations ( 31) and ( 32) are satisfied, then S = 0 is trivial. ( (3) In such a case, the condition L ξ η = 0 is equivalent to the condition that ξ is geodesic with respect to the Levi-Civita connection.In fact, for any X ∈ H: Remark 7.An almost contact structure (η, ξ, ϕ) satisfying the condition ξ ∈ ker dη is called a natural almost contact structure.This class of almost contact structures has been introduced and studied in the paper [32].We note that 2dη(ξ, X) = L ξ η, so the condition that defines this structure is equivalent to the condition L ξ η = 0 considered in Proposition 5.In particular, any contact semi-Riemannian manifold satisfies the condition L ξ η = 0, equivalently ∇ ξ ξ = 0.
S. Ianus (cf.[2], Theorem 6.6 p. 92) proved that a normal almost contact manifold is a CR-manifold.The following Theorem completes this result.Theorem 11.Let (H, J) be an almost CR structure on an odd-dimensional manifold M induced by an almost contact structure (η, ξ, ϕ).Then, the almost contact structure (η, ξ, ϕ) is normal if and only if almost CR structure (H, J) is integrable and the tensor h = 0.In particular, if L ξ η = 0, (η, ξ, ϕ) is normal if and only if (H, J) is integrable and L ξ J = 0.
Proof.By Proposition 5 we know that (H, J) is a CR structure if and only the tensor S defined on H by Equation (33) vanishes.For X, Y ∈ H: Moreover, for Therefore, from Equations ( 14), (34), and ( 35) we obtain that (η, ξ, ϕ) is normal if and only if almost CR structure (H, J) is integrable (i.e., S = 0) and the tensor h = 0.The second part follows from (3) of Proposition 5.
Non-Degenerate Almost CR Structures and Contact Semi-Riemannian Structures
We have already observed that the existence of an almost CR structure (H, J) on an (2n + 1)-dimensional manifold M induced by an almost contact structure is related to the existence of a 1-form θ such that kerθ = H (cf. Proposition 3).Definition 2. A pseudohermitian structure on an almost CR manifold (M, H, J) is a 1-form θ such that kerθ = H and the Levi form L θ , defined by It should be observed that, for any X, Y ∈ H, the following are equivalent: Then, we have Proposition 6.Let (H, J) be an almost CR structure and θ an 1-form such that kerθ = H.Then, the following properties are equivalent: In the case of an almost CR structure induced by an almost contact semi-Riemanian structure, we have: Proposition 7. Let (H = ker θ, J) be an almost CR structure induced by an almost contact semi-Riemannian structure (η = θ, ξ, ϕ, g).Then, (H, J, θ) is a pseudohermitian almost CR structure if and only if the tensor q := ϕ • ∇ξ − ∇ξ • ϕ is symmetric on H, where ∇ is the Levi-Civita connection of g.In particular, if (η = θ, ξ, ϕ, g) is a contact semi-Riemannian structure, or an almost α-coKähler structure, then q = 2h = L ξ ϕ and so it is symmetric.
Proof.We show that the partial integrability condition Equation ( 31) is satisfied if and only if q is symmetric on H.
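Referring back to Definition 2, the Levi form can be written, with the convention already used in the Introduction, as

\[
L_\theta(X,Y) = d\theta(X, JY), \qquad X, Y \in H ,
\]
and its symmetry, $L_\theta(X,Y) = L_\theta(Y,X)$, amounts to $[X, JY] + [JX, Y] \in H$ for all $X, Y \in H$, which is (one form of) the partial integrability condition Equation (31).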
Definition 3. A pseudohermitian almost CR structure (H, J, ϑ) is said to be a non-degenerate (pseudohermitian) almost CR structure if the Levi form L θ is, in addition, non-degenerate (equivalently, θ is a contact form, i.e., θ ∧ (dθ) n is a volume form).
In the sequel by a non-degenerate almost CR structure we will mean a non-degenerate pseudohermitian almost CR structure.So, a nondegenerate almost CR structure satisfies the partial integrability condition Equation (31).We remark that two pseudohermitian structures θ and θ on the same almost CR manifold, are related by θ = λθ for some C ∞ function λ : M → R \ {0}.
, it is a non-degenerate CR structure, if and only if (M, g) has constant sectional curvature.
Let (M, H, J, θ) be a non-degenerate almost CR manifold.Let us extend J to an endomorphism ϕ of the tangent bundle by requesting that ϕ = J on H and ϕ(T) = 0 (T is the Reeb vector field of θ).Then and (θ, T, ϕ) is an almost contact structure.In particular, θ • ϕ = 0.The Webster metric is the semi-Riemannian metric g θ defined by for any X, Y ∈ H, where ε = ±1.Equivalently, is a contact semi-Riemannian structure on M. If we denote by g + θ the Webster metric with T space-like and by g − θ the Webster metric with T time-like, then This fact agrees with the change of the causal character of the Reeb vector field (cf.Equation ( 8)).In particular, if g + θ is Riemannian, then g − θ is Lorentzian with T time-like.Conversely, a contact semi-Riemannian structure (η, ξ, ϕ, g) defines a nondegenerate pseudohermitian almost CR structure given by and L θ = g |H is the corresponding Levi form which is nondegenerate and Hermitian, that is, Equation ( 31) is satisfied.
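One standard way to define the Webster metric appearing above, following the strictly pseudo-convex case in [12] and allowing ε = ±1 as in this survey, is:

\[
g_\theta(X,Y) = L_\theta(X,Y), \qquad g_\theta(X,T) = 0, \qquad g_\theta(T,T) = \varepsilon, \qquad X, Y \in H .
\]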
If the Levi-form L θ is positive definite, the Webster metric g θ (with ε = 1) is a Riemannian metric and "non-degenerate" is replaced by "strictly pseudo-convex".
• Some remarks. 1. We note that non-degeneracy is more natural in CR geometry than strict pseudo-convexity. In fact, non-degeneracy is a CR invariant property, i.e., it is invariant under a transformation θ̃ = λθ, where λ : M → R − {0} is a smooth function, while strict pseudo-convexity is not a CR invariant property (if L_θ is positive definite and θ̃ = −θ, then L_θ̃ is negative definite). In particular, if (H, θ, J) is a non-degenerate almost CR structure, then for any real constant t ≠ 0, (H, θ̃ = tθ, J) is a non-degenerate almost CR structure. Moreover, the Webster metrics g_θ and g_θ̃ are related, taking into account that ϕ̃ = ϕ, by an explicit formula; this is related to the deformation Equation (16). 2. Let (H(M), J, θ) be a non-degenerate almost CR structure and (η = −θ, ξ = −T, ϕ, g = g_θ) the corresponding contact semi-Riemannian structure. One then checks that (θ, ξ = T, ϕ, ḡ = −g_θ) is still a contact semi-Riemannian structure with ε̄ = −ε. This second structure is obtained by Equation (13), i.e., by reversing the first contact semi-Riemannian structure.
3. Let η be a contact 1-form.Then, there exists an associated metric for η if and only if there exists an almost complex structure J on H =kerη such that the Levi form L η = dη(•, J•) is Hermitian.
A generalization of the basic results in pseudohermitian geometry to the case of a contact Riemannian manifold whose almost CR structure is not integrable was started by S. Tanno [20].Results in this direction are given also in [35][36][37].
•
Hypersurface of an indefinite Kaehler manifold.
The property (1) in Proposition 5 suggests to look the almost contact structure of a hypersurface of an indefinite Kaehler manifold.Let ( M, J, ḡ) be an indefinite (2n + 2)-Kaehler manifold (cf.[38] for definitions and examples).Suppose that M is an orientable non-degenerate real hypersurface of M. Let N be a normal vector field, ḡ(N, N) = ε, that defines the orientation of M.Then, the tensors define an almost contact semi-Riemannian structure.Moreover, we have (see, for example, Ref. [39]) where A = − ∇N is the shape operator.Now, consider the almost CR structure induced On the other hand, by Equation ( 38)a, we have (∇ X ϕ)Y = −εg(AX, Y)ξ for any X, Y ∈ H, and so we get S(X, Y) = 0 for any X, Y ∈ H. Therefore, by 1) of Proposition 5, the almost CR structure (H, J) is integrable.So, we proved the following Proposition 9. Let ( M, J, ḡ) be an indefinite Kaehler manifold.Suppose that M is an orientable non-degenerate real hypersurface of M.Then, the almost contact semi-Riemannian structure on M given by (37) defines a CR structure (H, J) on M. Now, we see when the almost contact semi-Riemannian structure defined by (37) is Sasakian.Suppose that this structure is Sasakian.Then, comparing Equation (15) with Equation (38)a, we get and taking Y = ξ, we have In particular, Aξ = η(Aξ)ξ.Then, η(AX) = εg(AX, ξ) = εg(X, Aξ) = εg(X, η(Aξ)ξ) = η(Aξ)η(X), and so from Equation (39) we obtain Conversely, if A is given by Equation (40), by Equation (38)a we get Equation (15).Then we get the following (cf.[39], and [2] Theorem 6.15 in the Riemannian case).
Theorem 12. Let ( M, J, ḡ) be an indefinite Kaehler manifold.Suppose that M is an orientable non-degenerate hypersurface of M.Then, the almost contact semi-Riemannian structure on M given by Equation (37) is Sasakian if and only if the shape operator is given by Equation (40).
By Proposition 9, the standard pseudohermitian almost CR structure (H, θ, J) of an orientable non-degenerate real hypersurface, that is, the one induced by Equation (37), is integrable.Then, Proposition 6 gives that (H, θ, J) is always a pseudohermitian CR structure.Moreover, by using Equation (38)b, i.e., ∇ξ = ϕA, we have Consequently, the condition dη = g(•, ϕ) is satisfied if and only if Then, we have the following (cf.[2], Theorem 4.12, for the Riemannian case) Theorem 13.Let M be an orientable non-degenerate real hypersurface of an indefinite Kaehler manifold M.
Then, the almost CR structure (H, θ, J) induced on M is always a pseudohermitian CR structure.Moreover, it is a non-degenerate CR structure if and only if the shape operator satisfies Equation (42).
• Levi-flatness
The "opposite"of Levi non-degenerate is the following definition.
Definition 4. A pseudohermitian almost CR structure (H, J, θ) is said to be Levi-flat, or Levi-degenerate, if the Levi form L θ vanishes.
In the case of an orientable non-degenerate real hypersurface of an indefinite Kaehler manifold M, by using Equation (41), the standard pseudohermitian CR structure (H, θ, J) of M is Levi-flat, i.e., L θ = 0, if and only if ϕA = −Aϕ on H. On the other hand, if we consider the fundamental 2-form Φ, we have (dΦ)(X, Y, Z) = 0 (cf.[39]).Hence, an orientable non-degenerate real hypersurface of an indefinite Kähler manifold M is almost coKähler if and only if ϕA = −Aϕ.
Recently, in the paper [33] (see also [41]), we proved that an orientable Riemannian three-manifold (M, g) admits an almost α-coKähler structure with g as a compatible metric if and only if M admits a foliation, defined by a unit closed 1-form, of constant mean curvature. Then, in the same paper we showed that a simply connected homogeneous almost α-coKähler three-manifold is either a Riemannian product of type R × S 2 (k 2 ), equipped with its standard coKähler structure, or a semidirect product Lie group G = R 2 ⋊_A R equipped with a left-invariant almost α-coKähler structure. All the three-manifolds listed in this classification are examples of Levi-flat pseudohermitian CR three-manifolds.
• The embeddability
A natural difference between the class of CR manifolds and the class of almost CR manifolds is the question of embeddability.In fact, a question of principal interest in the theory of compact, (2n + 1)-dimensional CR-manifolds is to understand when a given strictly pseudo-convex CR-structure can be realized by an embedding in C m .This question is only of interest in the three-dimensional case because a theorem of Boutet de Monvel [42] states that any strictly pseudo-convex CR-structure, on a compact (2n + 1)-manifold, is realizable as an embedding in some C m , provided n > 1.
The global embedding problem in CR geometry in dimension 3 has received a lot of attention.In [43], Burns and Epstein considered perturbations of the standard CR structure on the three-sphere S 3 .They showed that a generic perturbation is non-embeddable and gave a sufficient condition for embeddability ( [43], Theorem 5.3).In the same paper, they introduced the notion of stability for CR embeddings.Then Lempert [44] considered the problem of stability of CR embeddings of a compact three-dimensional CR manifold into C 2 , and proved that if a compact strictly pseudo-convex CR manifold admits a CR embedding into C 2 then this embedding is stable.
S. Chanillo, H. Chiu and P. Yang ([45,46]) discussed the relationship between the embeddability of three-dimensional closed strictly pseudo-convex CR manifolds and the positivity of the CR Paneitz operator and the CR Yamabe constant.In particular, they proved the embeddability into C n for some n when the CR Paneitz operator is non-negative and the CR Yamabe constant is positive.
The (Generalized) Tanaka-Webster Connection and the Pseudohermitian Torsion
Let (M, H, J, θ) be a non-degenerate almost CR manifold and (η = −θ, ξ = −T, ϕ, g = g θ ) the associated contact semi-Riemannian structure.The most convenient linear connection for studying (M, H, J, θ) is the so-called (generalized) Tanaka-Webster connection ∇.This is the linear connection given by ∇X for any X, Y ∈ X(M), where ∇ is the Levi-Civita connection of g θ .Equivalently, ∇ is defined by where π is the usual projection π : TM → H.The generalized Tanaka-Webster connection ∇ is due to Tanno [20] (though confined to the positive definite case).For a nondegenerate almost CR manifold, ∇ was considered in [47,48].∇ admits an axiomatic description similar to that of the ordinary Tanaka-Webster connection (cf.Tanaka [10]) except for the property ∇ϕ = 0.More precisely, ∇ is the unique linear connection obeying to the axioms ∇η = 0, ∇ξ = 0, ∇g = 0, T(ξ, ϕX) Here T(X, Y) = ∇X Y − ∇Y X − [X, Y] is the torsion tensor field of ∇, and Q is the Tanno tensor, i.e., We note that Q(ξ, X) = Q(Y, ξ) = 0 and Q(Y, X) = (∇ X ϕ)Y − η ∇ X ϕY ξ for any X, Y ∈ H.Then, by the same proof given in [20], Q = 0 if and only if (H, J) is integrable, that is: ∇ϕ = 0 ⇐⇒ (H, J) is a CR structure, and then ∇ is the ordinary Tanaka-Webster connection.
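One common expression for the (generalized) Tanaka-Webster connection, written here as ∇̂ to distinguish it from the Levi-Civita connection ∇ of the Webster metric, follows Tanno [20] in the Riemannian case (in the semi-Riemannian setting, ε-factors may enter depending on conventions):

\[
\hat\nabla_X Y = \nabla_X Y + \eta(X)\,\varphi Y - \eta(Y)\,\nabla_X \xi + \big((\nabla_X\eta)Y\big)\,\xi ,
\]
for all vector fields X, Y on M.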
The pseudohermitian torsion of ∇ (introduced by Webster in the integrable case [11], see also [12], p. 26) is the vector valued 1-form τ on M defined by τX := T(T, X), and thus Then, by using Next, we recall that given a semi-Riemannian manifold ( M, ḡ), with ∇ the Levi-Civita connection, and a smooth nondegenerate distribution D : and ( ∇X Y) ⊥ is the natural projection on D ⊥ .Moreover, the distribution D is called totally geodesic if the symmetrized second fundamental form B s (X, Y) := (1/2) B(X, Y) + B(Y, X) vanishes.Consider the non-degenerate almost CR manifold (M, H(M), J, θ).For the Levi distribution H(M), the second fundamental form B(X, Y) is given by B(X, Y) = ε g(∇ X Y, ξ)ξ , and by using Equation (4) we get where E i is a local orthonormal basis.Since trace g (ϕh) = 0, we get trace g (B) = 0.Moreover, by using Equation ( 4), the symmetrized second fundamental form is given by Then, we get Proposition 10. ( [34]) For any non-degenerate almost CR manifold (M, H, J, θ), the Levi distribution H(M) is minimal in (M, g θ ), and it is totally geodesic if and only if the pseudohermitian torsion τ vanishes.Now, we give some properties related to the pseudohermitian curvature of a non-degenerate almost CR manifold (M, H, J, θ).Denote by R the pseudohermitian curvature tensor, that is, the curvature tensor associated to the generalized Tanaka-Webster connection ∇.Then, following Tanno [20,49], the pseudohermitian Ricci tensor Ric and the pseudohermitian scalar curvature r are defined by Ric(X, Z) = Tr g R(X, •)Z, r = Tr g Ric.
In the case of a non-degenerate CR manifold with vanishing pseudohermitian torsion our definition of pseudo-Einstein structure coincides with the definition of J.M. Lee [50].In general, the pseudo-Einstein condition does not imply that the pseudohermitian scalar curvature is constant, so such a structure is less rigid than an Einstein structure on a semi-Riemannian manifold [50].Next, we show that this notion is related to the notion of η-Einstein contact semi-Riemannian manifold given in Section 2.2.
Consider a non-degenerate almost CR manifold (M, H, J, θ), dim M = (2n + 1), with pseudohermitian torsion τ = 0.In this case, Equation (43) gives and in particular Then, the pseudohermitian Ricci tensor Ric and the Tanaka-Webster scalar curvature r are given by where Ric and r denote the Ricci tensor and the scalar curvature of the Webster metric g θ .So, we get Proposition 11. ( [48]) Let (M, H, J, θ) be a non-degenerate almost CR manifold with pseudohermitian torsion τ = 0.Then, the structure (H, J, θ) is pseudo-Einstein if and only if the corresponding semi-Riemannian contact structure is η-Einstein.
When the pseudohermitian torsion τ = 0, there are other conditions on τ with an interesting meaning.Given an oriented, compact, contact manifold (M, η), denote by M(η) the set of all Riemannian metrics associated to the contact form η and by A(η) the set of all almost CR structures J for which the Levi form is positive definite.By Proposition 8, the sets M(η) and A(η) can be identified.
• The condition ∇ξ τ = 0. Tanno [20] considered the Dirichlet energy defined for any g ∈ M(η).Then, he found the critical point condition ( [20], Theorem 5.1) We note that this condition has a tensorial character, so it holds also in the non compact case.The Dirichlet energy Equation ( 50) was studied by Chern and Hamilton [51] for compact contact three-manifolds as a functional defined on the set A(η) (there was an error in their calculation of the critical point condition, as was pointed out by Tanno).Moreover, since Ric(ξ, ξ) = 2n − trh 2 = 2n − L ξ g 2 /4, the functional Equation ( 50) is equivalent to the functional L(g) = M Ric(ξ, ξ)dv studied in general dimension, for compact regular contact manifold, by Blair ([2], Section 10.3).Now, since L ξ g = 2g(ϕ•, h•) = −2g(τ•, •), where g = g θ and τ = ϕh is the pseudohermitian torsion, we have Then, to consider the Dirichlet energy Equation ( 50) is equivalent to consider the following defined on the set A(η).Moreover, using the Tanaka-Webster connection given by Equation ( 43), we get Thus, the critical point condition Equation ( 51) becomes ∇ξ τ = 0.
Recall that if M is an oriented compact manifold, by a classical result of Hilbert (see also Nagano [55]), a Riemannian metric g on M is a critical point of the integral of the scalar curvature, I(g) = M rdv, as a functional on the set of all Riemannian metrics of the same total volume on M, if and only if g is an Einstein metric.Now, by using a result of [54], we get that a contact Riemannian three-manifold is η-Einstein if and only if it is H-contact and satisfies the critical point condition ∇ξ τ = 2ϕτ (equivalently, ∇ ξ τ = 0).
•
The Chern-Hamilton functional. In CR geometry a natural functional is the integral of the generalized Tanaka-Webster scalar curvature. For a strictly pseudo-convex almost CR manifold, i.e., for a contact Riemannian manifold, the generalized Tanaka-Webster scalar curvature r̃ is given by Equation (53) (cf. [20]). This is eight times the Webster scalar curvature W as defined by Chern and Hamilton [51] on three-dimensional contact manifolds. In the same paper, Chern and Hamilton proved, in dimension three, that the critical point condition for the functional I_w(g) = ∫_M r̃ dv, defined on the set A(η), is the vanishing of the pseudohermitian torsion τ. An alternate proof of this important result was given by the present author [54]. Tanno [20] studied the functional I_w(g) in arbitrary dimension.
•
An interpretation of the Tanaka-Webster scalar curvature.Recall that a contact form η on a compact manifold M is called regular if its Reeb vector field ξ is regular, i.e., any point of M has a neighborhood such that any integral curve of ξ passing through the neighborhood passes through only once.In this case M is a principal S 1 -bundle over a symplectic manifold B whose fundamental 2-form Ω has integral periods (a Hodge manifold).The corresponding fibration p : M → B = M is known as the Boothby-Wang fibration [56].Now, let (M, η, g) be a compact simply connected regular Sasakian, (2n + 1)-manifold.Then, the base of the Boothby-Wang fibration is a compact Kähler manifold of complex dimension n, with Kähler metric g and fundamental 2-form Ω satisfying (cf., for example, Ref. [57,58]) Moreover, the scalar curvatures r,r of (M, g) and (B, g), respectively, are related by r = r + 2n.
On the other hand, in the Sasakian case, Equation (53) becomes r̂ = r + 2n.
So, in this case, the Tanaka-Webster scalar curvature r̂ coincides with the scalar curvature r̄ of the Kähler manifold (B, ḡ), the base of the Boothby-Wang fibration. We note that a compact simply connected homogeneous Sasakian manifold is regular [57].
Contact Geometry of CR Manifolds
In this subsection we present some results on CR manifolds, i.e., the CR-integrable case, from the point of view of contact geometry.
• Olszak's result in the semi-Riemannian setting. First rigidity results concerning non-degenerate almost CR manifolds with Webster metric of constant curvature were obtained in the Riemannian case by D. E. Blair and Z. Olszak. Blair [59] showed that a contact form does not admit any flat associated Riemannian metric in dimension ≥ 5. Then Olszak [60], generalizing this result, proved that if a contact Riemannian (2n + 1)-manifold, n ≥ 2, is of constant curvature κ, then the manifold is Sasakian and κ = 1. In the semi-Riemannian case, we have the following ([7,8]).
Theorem 16. Let (M, η, g) be a contact semi-Riemannian (2n + 1)-manifold. If n ≥ 2 and (M, g) is of constant sectional curvature κ, then κ = ε = g(ξ, ξ) and h² = 0.
In particular, since ε = ±1, a non-degenerate almost CR structure does not admit any flat semi-Riemannian Webster metric in dimension ≥ 5, so Blair's result also holds in the semi-Riemannian setting. However, there are examples of non-degenerate almost CR manifolds with τ² = 0 and τ ≠ 0. In fact, we have the following (see [34] for details).
Example 5. Consider the space M = R⁵(x₁, x₂, x₃, x₄, z) and two smooth functions α, β ∈ C∞(R⁵), in terms of which one defines suitable vector fields X₁, ..., X₅ on R⁵. Moreover, we define the 1-form η = 2x₁ dx₃ + 2x₂ dx₄ + dz, the vector field ξ = X₅ = ∂_z, a tensor ϕ, and a semi-Riemannian metric g of signature (−, −, ...). Then (ξ, ϕ, η, g) defines a contact semi-Riemannian structure, and so a non-degenerate almost CR structure on M, with Levi distribution H = ker η = span(X₁, X₂, X₃, X₄). Moreover, we can construct a frame {E₁, E₂, E₃, E₄, ξ} of vector fields on R⁵, with the E_i ∈ H null vector fields, satisfying relations which give τ² = 0. Moreover, τ = 0 if and only if ∂_z(β − α) = 0. So, taking the functions α, β such that ∂_z(β − α) ≠ 0, we obtain a non-degenerate almost CR structure with τ² = 0 and τ ≠ 0. Moreover, this structure in general is not a CR structure: taking for example X = E₁ and Y = E₃, one gets that the integrability condition Equation (32) is satisfied if and only if a certain relation between α and β holds. If the almost CR structure is integrable, we recover Olszak's result in the semi-Riemannian setting. In fact, we have the following.
Theorem 18. ([34]) Let (M, H, J, θ) be a non-degenerate CR manifold, dim M = 2n + 1, n ≥ 2. If the Webster metric g_θ is of constant sectional curvature κ, then κ = ε = g_θ(ξ, ξ) and the pseudohermitian torsion τ = 0, i.e., the Webster metric is Sasakian.
Remark 9. In dimension three, a non-degenerate CR manifold with locally symmetric Webster metric g_θ (in particular, of constant sectional curvature) is either flat or of constant sectional curvature κ = ε = g(ξ, ξ), and in the second case the metric is Sasakian [7].
In particular, the Ricci operator Q and the scalar curvature r of a (2n + 1)-dimensional (κ, µ)-space M, κ < 1, are given by explicit formulas. Then, (κ, µ)-spaces are examples of H-contact manifolds. For a non-Sasakian (κ, µ)-space, Boeckx [62] introduced an invariant I_M and showed that, for two non-Sasakian (κ, µ)-spaces M and M′, we have I_M = I_{M′} if and only if, up to a D-homothetic deformation, the two spaces are locally isometric as contact metric manifolds.
• Sasakian geometry via a variational theory
In the paper [63], Barletta and Dragomir built a variational theory of geodesics of the Tanaka-Webster connection ∇̂ on a strictly pseudoconvex CR manifold M. They obtained the first and second variation formulae for the Riemannian length of a curve in M and showed, in particular, that in general geodesics of ∇̂ admitting horizontally conjugate points do not realize the Riemannian distance. The paper also contains interesting results concerning the pseudohermitian sectional curvature K_θ, that is, the sectional curvature defined by the tensor built from the pseudohermitian curvature tensor R̂(X, Y)Z associated to the Tanaka-Webster connection ∇̂ and the Webster metric g_θ. For example, they proved (cf. Theorems 4 and 5 of [63]) the following.
Theorem 20. Let M be a strictly pseudoconvex CR manifold. (1) If M has non-positive pseudohermitian sectional curvature, then it has no horizontally conjugate points. (2) If M, of CR dimension n > 1, has constant pseudohermitian sectional curvature, then it has vanishing pseudohermitian torsion (τ = 0) if and only if the Tanaka-Webster connection of M is flat.
• Almost contact structures belonging to a CR structure. Let (H, J) be a CR structure on an odd-dimensional manifold M. We say that an almost contact structure (θ, ξ, ϕ) belongs to the CR structure (H, J) if ker θ = H and J = ϕ|_H. Then, by Lemma 1.1 of [64], two almost contact structures (θ, ξ, ϕ) and (θ′, ξ′, ϕ′) belong to the same CR structure (H, J) if and only if they are related by explicit formulas involving some smooth function λ and a vector field X₀ ∈ H, where ε = ±1.
Denote by (θ, ξ, ϕ)* an almost contact structure belonging to a non-degenerate CR structure (H, J) and satisfying the condition [ξ, H] ⊂ H. Then, K. Sakamoto and Y. Takemura [64] proved the existence of a unique linear connection associated to (θ, ξ, ϕ)*. Moreover, in [65], they obtained a curvature invariant of pseudo-conformal geometry, that is, a tensor field invariant under the change of almost contact structures belonging to the same non-degenerate CR structure. For the case of a normal almost contact structure the invariant tensor field is just the Bochner curvature tensor.
Homogeneous Non-Degenerate CR Three-Manifolds
The main purpose of this Section is to present some results on homogeneous non-degenerate CR three-manifolds.
The Classification Theorem
We recall some definitions concerning homogeneity. A contact manifold (M, η) is said to be homogeneous if there exists a (connected) Lie group G of diffeomorphisms acting transitively on M and leaving η invariant. A contact semi-Riemannian manifold (M, η, g) is said to be homogeneous if there exists a (connected) Lie group G of isometries acting transitively on M and leaving η invariant, that is, for any x, y ∈ M there exists f ∈ G such that y = f(x), f*η = η and f*g = g. In particular, a CR transformation is a diffeomorphism f preserving the CR structure.
Remark 10. Typical examples of CR maps are obtained as traces of holomorphic maps of Kaehlerian manifolds on real hypersurfaces. Precisely, let M̄ be a Kaehlerian manifold. Any orientable real hypersurface M ⊂ M̄ admits a natural CR structure (cf. Proposition 9). If M′ ⊂ M̄′ is another oriented real hypersurface in a Kaehlerian manifold M̄′ and F : M̄ → M̄′ is a holomorphic map such that F(M) ⊂ M′, then f ≡ F|_M : M → M′ is a CR map. The statements above hold true for traces of holomorphic maps among indefinite Kaehlerian manifolds [47].
A characterization of K-contact structures in terms of CR maps is presented in Theorem 32 of this paper.
Let θ and θ′ be pseudohermitian structures on the almost CR manifolds M and M′, respectively.
Let (M, H, θ, J) be a pseudohermitian almost CR manifold. Denote by P_sh(M, θ) the group of all CR automorphisms f : M → M such that f*θ = θ.
In other words, P_sh(M, θ) consists of the pseudohermitian transformations of (M, θ). Recall that there is a canonical way to associate a contact Riemannian structure to a contact Lorentzian structure (and conversely). If (η, ξ, ϕ, g_L) is a contact Lorentzian structure on a smooth manifold M, dim M = 2n + 1, where the Reeb vector field ξ is time-like, then (cf. Section 2.2 and also Equation (36)) g = g_L + 2η ⊗ η is a contact Riemannian metric on M associated to the same (η, ξ, ϕ). The scalar curvatures r_R and r_L of g and g_L are related by Equation (20). Now, let (M, H, θ, J) be a non-degenerate CR three-manifold. Then, the Levi form L_θ is definite, and we can assume L_θ positive definite (if necessary, we change θ with −θ). Therefore, without loss of generality, in dimension three, we can consider either g_θ Lorentzian with T time-like or g_θ Riemannian.
In particular, the Sasakian condition τ = 0 does not depend on the causal character of the Reeb vector field T. Moreover, for a non-degenerate CR three-manifold, the Tanaka-Webster scalar curvature is given by r̂ = r + ε(2 + tr h²), where ε = g_θ(T, T).
So, the Tanaka-Webster scalar curvature r̂ does not depend on the causal character of the Reeb vector field T, i.e., r̂_L = r̂_R. If we consider the scalar torsion ‖L_ξ g_θ‖ introduced by Chern and Hamilton in [51] in their study of contact Riemannian three-manifolds, then, since ‖L_ξ g_θ‖² = 4 tr h², the Tanaka-Webster scalar curvature r̂ can be expressed in terms of r and the scalar torsion, and r̂ = 8W, where W is the Webster scalar curvature as defined by Chern and Hamilton [51]. Since the Webster scalar curvature W and the scalar pseudohermitian torsion do not depend on the causal character of the Reeb vector field T, that is, they depend only on the Levi form L_θ, it is natural to consider these invariants in order to classify the homogeneous non-degenerate CR three-manifolds. More precisely, we consider the invariant W in the Sasakian case, and another invariant in the non-Sasakian case. Then, the classification theorem of [67] can be reformulated in the following form.
Theorem 21. A simply connected, homogeneous, non-degenerate CR three-manifold (M, H, θ, J) is a Lie group G equipped with a left-invariant non-degenerate CR structure; more precisely, one of the cases in the explicit list given in [67] occurs.
Proposition 13. The Lie group E(2) is the only simply connected 3-manifold which admits a homogeneous non-degenerate CR structure with flat Riemannian Webster metric. In such a case W = +1/2.
In the Lorentzian case, one gets the following.
Proposition 14. ([7]) The Lie group E(1, 1) is the only simply connected 3-manifold which admits a homogeneous non-degenerate CR structure with flat Lorentzian Webster metric. In such a case W = −1/2.
H. Geiges [68] proved that a compact 3-manifold admits a Sasakian structure if and only if it is diffeomorphic to a left quotient of SU(2), the Heisenberg group H₃, or SL(2, R) by a discrete group. As a consequence of Theorem 21 we have the following.
Proposition 15. The unimodular Lie groups SU(2), the Heisenberg group H₃, SL(2, R), and the non-unimodular Lie group with Lie algebra defined by Equation (58) are the only simply connected three-manifolds which admit a homogeneous Sasakian structure.
Now, let (η, g) be a homogeneous Sasakian structure on the sphere S³, with Webster scalar curvature W > 0. Since W = (r + 2)/8 > 0, the D-homothetic deformation η̃ = tη and g̃ = tg + t(t − 1)η ⊗ η, for t = W, defines a Sasakian structure on S³ with g̃ of constant sectional curvature +1 (cf. [52], Section 3). In particular, (η̃, g̃) is isomorphic to the standard Sasakian structure (η₀, g₀) [4]. Then, we can assume (η̃, g̃) = (η₀, g₀), and consequently g is homothetic to g_a = g₀ + (a − 1)η₀ ⊗ η₀, a = 1/W > 0, which is a Berger metric, that is, a metric defined as the canonical variation g_a, a > 0, of the standard metric g₀ on S³, obtained by deforming g₀ along the fibres of the Hopf fibration: g_a(ξ₀, •) = a η₀ and g_a = g₀ on the distribution orthogonal to the standard Hopf vector field ξ₀ on S³. Therefore we get the following.
Proposition 16. In the second part of Corollary 3, the Sasakian metric on S³ is homothetic to a Berger metric.
Remark 12. The main result of [51] says that any contact structure on a compact and orientable three-manifold has a contact form and an associated contact Riemannian metric whose Webster scalar curvature W is either a constant ≤ 0 or is everywhere strictly positive. Now, if M is a compact Sasakian 3-manifold with Webster scalar curvature W > 0, then M admits a contact Riemannian structure of positive Ricci curvature [69]. If, in addition, M is simply connected, by a deep result of R. S. Hamilton [70], M is diffeomorphic to the sphere S³. However, this fact is not too surprising, since a compact simply connected three-manifold which admits a nonsingular Killing vector field is diffeomorphic to S³ (cf. [52], Section 4).
Corollary 4. The Heisenberg group H₃, SL(2, R), and the non-unimodular Lie group with Lie algebra defined by Equation (59) are the only simply connected 3-manifolds which admit a homogeneous non-degenerate CR structure with Webster scalar curvature W = 0. In particular, the Heisenberg group H₃ is the only simply connected three-manifold which admits a non-degenerate CR structure with pseudohermitian torsion τ = 0 and Webster scalar curvature W = 0.
In Theorem 21, if we consider g_θ Lorentzian and denote by r_L the corresponding scalar curvature, then in the Sasakian case (i.e., when τ = 0) the conditions W = 0, W > 0, W < 0, and W = −α²/4 are equivalent to r_L = 2, r_L > 2, r_L < 2, and r_L = −2α² + 2 < 2, respectively. On the other hand, for a Lorentzian Sasakian three-manifold with r_L < 2, the Lorentzian K-contact structure (η̃, g̃) obtained by a D-homothetic deformation in correspondence to t = (2 − r_L)/8 = −W is Einstein (see Section 2.2), and so of constant sectional curvature −1. Therefore, we get the following corollary, which does not have a Riemannian counterpart.
Corollary 5. The unimodular Lie group SL(2, R) and the non-unimodular Lie group with Lie algebra defined by Equation (58) are the only simply connected three-manifolds which admit a homogeneous Lorentzian-Sasakian structure of constant sectional curvature κ = −1.
• Homogeneous bi-contact metric three-manifolds. We close this subsection with a very short presentation of a recent notion introduced by the present author in [71]. H. Geiges and J. Gonzalo ([72,73]) introduced and studied the notion of a taut contact circle on a three-manifold, that is, a pair of contact forms (η₁, η₂) such that the 1-forms η_a = a₁η₁ + a₂η₂ are contact forms with the same volume form for all a = (a₁, a₂) ∈ S¹. In the paper [71] we introduce a Riemannian approach to the study of taut contact circles on three-manifolds. A natural related notion that we introduce is that of a taut contact metric circle (η₁, η₂, g), that is, (η₁, η₂) is a taut contact circle and g is a Riemannian metric associated to both the contact forms η₁ and η₂. More generally, we introduce the notion of a bi-contact metric structure (η₁, η₂, g), where (η₁, η₂) is a pair of arbitrary contact forms and g is a Riemannian metric associated to both contact forms and such that the two contact forms are orthogonal with respect to g, i.e., the corresponding Reeb vector fields ξ₁, ξ₂ are orthogonal. On the other hand, in the classical definition of a three-contact metric structure (η₁, η₂, η₃, g), also called a contact metric three-structure, we have three contact forms and a Riemannian metric g associated to the three contact forms, satisfying additional conditions that imply the orthogonality of the three forms with respect to g (see, for example, Ref. [2] Chapter 14 and [3] Chapter 13).
Moreover, a three-contact metric structure is three-Sasakian (see, for example, Ref. [2] p. 293, Theorem 14.1), and a three-Sasakian three-manifold is of constant sectional curvature +1 (see, for example, Ref. [2] p. 294, Theorem 14.3). So, our definition of bi-contact metric structure seems more appropriate, at least in dimension three, in the sense that it is far less rigid. In particular, we characterize the existence of a taut contact metric circle and of a bi-contact metric structure on a three-manifold. Note that a taut contact metric circle is a bi-contact metric structure, but the converse is not true. Then, we give a complete classification of simply connected three-manifolds which admit a bi-H-contact metric structure. In particular, by using the classification given in Theorem 21, we get the following (cf. [71], Corollary 4.7).
Theorem 22. A simply connected three-manifold admits a homogeneous bi-contact metric structure if and only if it is diffeomorphic to one of the following Lie groups: SU(2), SL(2, R), E(2), E(1, 1).
Some Results in Arbitrary Dimension
Now we briefly recall some results, in arbitrary dimension, about contact homogeneity and spherical CR manifolds.
• D. E. Blair (see [2], p. 120) conjectured the non-existence of contact Riemannian manifolds having non-positive sectional curvature, with the exception of the flat 3-dimensional case. In this direction, A. Lotta [74] obtained the following (as a consequence of a more general theorem and by using the classification given in Theorem 21).
Theorem 23. The only simply connected homogeneous contact Riemannian (2n + 1)-manifold having non-positive sectional curvature is the Lie group E(2) endowed with a flat left-invariant contact Riemannian structure.
• A contact Riemannian manifold is said to be a strongly locally ϕ-symmetric space if the reflections in the integral curves of the Reeb vector field are isometries. Examples of strongly locally ϕ-symmetric spaces include the non-Sasakian (κ, µ)-manifolds (see [2], p. 146; more generally we refer to [2] Section 7.9 for a discussion on weakly and strongly locally ϕ-symmetric spaces). Boeckx and Cho in the paper [75] proved the following.
Theorem 24. Let M be a locally homogeneous contact Riemannian (2n + 1)-manifold. If M is strongly locally ϕ-symmetric, then it is a (κ, µ)-space.
• Recently, E. M. Correa [76] gave a new study on compact, (2n + 1)-dimensional, homogeneous contact manifolds. More precisely, this paper contains: a description of the contact structure of any compact homogeneous contact manifold; a description of the G-invariant Sasaki-Einstein structure of any compact homogeneous contact manifold; a description of Calabi-Yau metrics on cones having compact homogeneous Sasaki-Einstein manifolds as links of isolated singularities; and a description of crepant resolutions of Calabi-Yau cones having certain compact homogeneous Sasaki-Einstein manifolds as links of isolated singularities.
This study of homogeneous contact manifolds is based on the Kähler geometry of complex flag manifolds.
• The present author and L. Vanhecke [77] proved that a compact, simply connected, five-dimensional, homogeneous contact manifold M is diffeomorphic to S⁵ or S² × S³. In both cases the underlying homogeneous contact metric structure is Sasakian (and hence is a CR structure). This result is based on the fact that the contact structure is regular and the base B of the Boothby-Wang fibration π : M → B is a compact simply connected homogeneous Kähler manifold of complex dimension two. In general, we note that every compact simply connected homogeneous contact manifold is a homogeneous Sasaki-Einstein manifold (Ref. [76], Remark 2.17).
• D. V. Alekseevsky and A. Spiro [66,78] gave a classification of all compact, simply connected, (2n + 1)-dimensional homogeneous non-degenerate CR manifolds. We note that domain (5) does not admit any compact quotients ([79], Proposition 5.5).
• R. Lehmann and D. Feldmueller [80] proved that the only CR structure (of hypersurface type) on S^{2n+1}, n > 1, which admits a transitive action of a Lie group of CR transformations is the standard CR structure. For S³ all possible homogeneous CR structures of hypersurface type are classified in [81] (cf. also [80], p. 524).
• G. Dileo and A. Lotta [82] studied spherical symmetric CR manifolds. A strictly pseudoconvex CR manifold M is said to be CR-symmetric if for each point x ∈ M there exists a CR-isometry σ : M → M such that σ(x) = x and (dσ)_x|_{H_x} = −Id. In particular, they proved the following. Let M be a strictly pseudoconvex CR manifold, dim M > 3, with pseudohermitian torsion τ = 0. Then, M is locally CR-symmetric if and only if the underlying contact metric structure (η, ξ, ϕ, g) satisfies the (κ, µ)-nullity condition, that is, the curvature tensor satisfies Equation (54). In such a case M is spherical if and only if the Webster scalar curvature vanishes.
Geometry of Tangent Hyperquadric Bundles
The geometry of the unit tangent sphere bundle T₁M of a Riemannian manifold (M, g), equipped with the Sasaki metric and in particular with the standard contact Riemannian structure, has been studied by many authors. A motivation of this study is the fact that properties of T₁M often characterize the base manifold (see, for example, Blair's book [2], Chapter 9, and, from the point of view of CR geometry, Tanno [83]).
If (M, g) is a semi-Riemannian manifold of index ν > 0, the Sasaki metric induced on the tangent hyperquadric bundle T_εM, ε = ±1, is a semi-Riemannian metric of index 2ν − 1 if ε = −1 and of index 2ν if ε = +1. In such a case we have few results about the geometry of T_εM (see [84], Ref. [85] and, more recently, [48]). In this Section we discuss some results of [48] on the geometry of T_εM equipped with the standard non-degenerate almost CR structure.
The Standard Non-Degenerate Almost CR Structure on T_εM
Let (M, g) be a semi-Riemannian manifold of index ν, 0 ≤ ν ≤ n = dim M. At any point (x, u) of its tangent bundle TM, the tangent space of TM splits into the horizontal and vertical subspaces: (TM)_{(x,u)} = H_{(x,u)} ⊕ V_{(x,u)}. Each tangent vector Z ∈ (TM)_{(x,u)} can be written in the form Z = X^h + Y^v, where X, Y ∈ M_x are uniquely determined vectors.
The tangent bundle TM can be endowed in a natural way with a semi-Riemannian metric, the Sasaki metric G̃, depending only on the semi-Riemannian metric g. It is determined by its values on horizontal and vertical lifts, for any z = (x, u) ∈ TM and for any X, Y ∈ M_x. G̃ is a semi-Riemannian metric of signature (2ν, 2n − 2ν), and both H_z and V_z have index ν. There is also an almost complex structure J on TM, interchanging horizontal and vertical lifts, with respect to which the Sasaki metric G̃ is Hermitian. We denote by N_{(x,u)} the canonical vertical vector field on TM and by ζ_{(x,u)} the geodesic flow on TM; they are the vertical and horizontal lifts of u at (x, u). The Liouville form β on TM, defined by β(X̃)_z = G̃(X̃_z, ζ_z) = g_x(π_{*z}X̃_z, u), satisfies (see Prop. 2 of [84], and [2]) the property that 2(dβ) is the fundamental 2-form, and so (TM, J, G̃) is an indefinite almost Kaehler manifold. Besides (see [84], Proposition 3), J is integrable if and only if the semi-Riemannian manifold (M, g) is locally isometric to the semi-Euclidean space R^n_ν. Consider the tangent hyperquadric bundle T_ε(M, g) = {(x, u) ∈ TM : g_x(u, u) = ε}; the geodesic flow ζ is tangent to T_ε(M, g). Any horizontal vector X^h is tangent to T_ε(M, g), and a vertical vector X^v is tangent to T_ε(M, g) if and only if X^v is orthogonal to N_z. Consequently, the tangent space of T_ε(M, g) at a point z = (x, u) ∈ T_ε(M, g) is the sum of the horizontal subspace and of the vertical vectors orthogonal to N_z. In general, the tangential lift of a vector field X is the vector field on T_ε(M, g) obtained by projecting the vertical lift X^v onto this tangent space. The Sasaki metric on T_ε(M, g) is the semi-Riemannian metric G induced from G̃; it is completely determined by its values on horizontal and tangential lifts, for all z = (x, u) ∈ T_ε(M, g) and X, Y ∈ M_x. Since the Sasaki metric on the tangent bundle TM is of signature (2ν, 2n − 2ν), G̃(N, N) = ε, and T_ε(M, g) is an orientable semi-Riemannian hypersurface of (TM, G̃) of sign ε, the index of T_{−1}(M, g) is 2ν − 1 and the index of T_1(M, g) is 2ν.
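For the reader's convenience, the standard formulas behind the preceding construction are recalled below; this is a sketch using the usual conventions for the Sasaki metric and the tangential lift, and signs or normalizations may differ slightly from the source.

```latex
% Sasaki metric on TM, on horizontal/vertical lifts (standard convention)
\tilde G_z(X^h, Y^h) = \tilde G_z(X^v, Y^v) = g_x(X, Y), \qquad \tilde G_z(X^h, Y^v) = 0,
% almost complex structure, canonical vertical field and geodesic flow
J X^h = X^v, \quad J X^v = -X^h, \qquad N_{(x,u)} = u^v, \quad \zeta_{(x,u)} = u^h,
% tangent hyperquadric bundle and tangential lift
T_\varepsilon(M,g) = \{(x,u) \in TM : g_x(u,u) = \varepsilon\}, \qquad
X^t = X^v - \varepsilon\, g_x(X,u)\, N_{(x,u)} .
```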
We now construct the standard non-degenerate almost CR structure on T_ε(M, g). The tangent hyperquadric bundle T_ε(M, g) is an orientable non-degenerate hypersurface of the indefinite almost Kaehler manifold (TM, J, G̃). Then, by the usual procedure, we construct the almost contact semi-Riemannian structure (ξ′, η′, ϕ′, G) induced on T_ε(M, g), defined for z = (x, u) ∈ T_ε(M, g) and X̃ a vector field on T_ε(M, g). Since 2ε(dη′)(X̃, Ỹ) = G(X̃, ϕ′Ỹ) for any vector fields X̃, Ỹ on T_ε(M, g), if we rescale the structure tensors appropriately by η = (1/2ε)η′, ξ = 2εξ′, ϕ = ϕ′ and ḡ = (1/4)G, we obtain a contact semi-Riemannian structure (η, ξ, ϕ, ḡ) on T_ε(M, g). Recall that, in the Riemannian case, the standard contact Riemannian structure on T₁M is K-contact if and only if the Riemannian manifold (M, g) has constant sectional curvature +1, and in such a case the standard contact Riemannian structure on T₁M is Sasakian. Now, we consider the same question in the semi-Riemannian setting and in terms of CR geometry.
By using the formulas for the pseudohermitian torsion of T_ε(M, g), one gets the following. (i) The standard non-degenerate almost CR structure (H, θ, J) on T_ε(M, g) has vanishing pseudohermitian torsion if and only if (M, g) has constant sectional curvature c = ε. In such a case (H, θ, J) is a pseudo-Einstein CR structure, which is Sasakian, and the Ricci tensor and the pseudohermitian Ricci tensor can be written down explicitly. (ii) If 0 < ν < n, the pseudo-Einstein CR structure of (i) is Einstein, i.e., the Webster metric is Einstein, if and only if (M, g) is a Lorentzian surface of constant curvature c = ε. In such a case, T_ε(M, g) has constant sectional curvature c = ε.
Corollary 7. Let (M, g) be a semi-Riemannian manifold of index ν, 0 ≤ ν ≤ n = dim M. Then, the geodesic flow of T_ε(M, g) is Killing if and only if M has constant sectional curvature ε.
Sasaki-Einstein and H-Contact Structures on T_εM
The geometry of H-contact unit tangent sphere bundles, when the base manifold is Riemannian, has been extensively investigated (see, for example, Refs. [88][89][90][91][92]).
In the semi-Riemannian case, we have the following.
Theorem 27. Let (M, g) be a semi-Riemannian manifold of constant sectional curvature c. Then, the standard contact semi-Riemannian structure (η, ξ, ϕ, ḡ) on T_ε(M, g) is H-contact. Moreover, the structure is η-Einstein if and only if either c = ε or c = (n − 2)ε, n = dim M. In such a case, the Ricci tensor can be written down explicitly in terms of c, with c = ε or c = (n − 2)ε.
Remark 14. Recall that η-Einstein, K-contact and Sasakian semi-Riemannian manifolds are H-contact. As a consequence of Theorems 25-27, we obtain the following result.
Therefore, as c varies over the reals, I_{T₁M} assumes all real values strictly greater than −1. Boeckx found examples of (κ, µ)-spaces for every value of the invariant I ≤ −1, namely a two-parameter family of Lie groups with a left-invariant contact metric structure (cf. [62], and [2] pp. 125-126).
More recently, E. Loiudice and A. Lotta [93] showed that the tangent hyperquadric bundles T_{−1}M over Lorentzian space forms (M, g) of constant curvature c ≠ −1, equipped with a strictly pseudoconvex CR structure, also provide non-equivalent examples. For these spaces, the formula for the Boeckx invariant changes accordingly, where c ∈ R, c ≠ −1, so that for c ≤ 0 these examples cover all possible values of the Boeckx invariant in (−∞, −1). This result makes E. Boeckx's classification of (κ, µ)-spaces in [62] more geometric. We note that in this case the Webster metric of T_{−1}M is not the Lorentzian metric, Equation (65), induced from the Sasaki metric of TM.
Levi Harmonicity on Non-Degenerate Almost CR Manifolds
The papers [47,94] are devoted to the study of a class of variational principles whose corresponding Euler-Lagrange equations are degenerate elliptic and generalize ordinary harmonic map theory in the spirit of sub-Riemannian geometry (cf. [95]): given a smooth map f : M → M′ of (semi-)Riemannian manifolds (M, g) and (M′, g′), one replaces the Hilbert-Schmidt norm of df by the trace with respect to g of the restriction of f*g′ to a given codimension-one distribution H on M (rather than applying the same construction to the full f*g′). E. Barletta et al., Ref. [96], introduced pseudoharmonic maps f : M → M′ from a non-degenerate CR manifold M endowed with a contact form θ into a Riemannian manifold M′. When M′ is itself a non-degenerate CR manifold carrying a contact form θ′, a result in [96] describes pseudoharmonicity of CR maps f : M → M′. R. Petit [97] considered the (pseudohermitian analog of the) second fundamental form (69), built from the Tanaka-Webster connection ∇̂ of M and the pullback f^{−1}∇′ of the Levi-Civita connection ∇′ of M′. The approach in [96] is to replace ∇′ by an arbitrary linear connection D′ on M′, consider the restriction Π_H β_f of (69) to the Levi distribution H = ker θ, and take the trace with respect to the Levi form L_θ. Then f is called pseudoharmonic (with respect to the data (θ, D′)) if trace_{L_θ} Π_H β_f = 0. More recently, S. Dragomir and R. Petit [98] studied contact harmonic maps, i.e., C∞ maps f : M → M′ from a compact strictly pseudoconvex CR manifold M into a contact Riemannian manifold M′ which are critical points of a suitable energy functional, where θ is a contact form on M and (df)_{H,H′} = pr_{H′} ∘ f_* : H → H′. J. Konderak and R. Wolak, Ref. [99], introduced transversally harmonic maps as foliated maps f : (M, F, g) → (M′, F′, g′) between foliated Riemannian manifolds satisfying a condition similar to the vanishing of the tension field in Riemannian geometry.
As a natural continuation of the ideas in [96], and following the ideas of B. Fuglede (who started the study of the semi-Riemannian case within harmonic map theory, cf. [100], and [101] pp. 427-455), in the papers [47,94] S. Dragomir and the present author introduced the concept of Levi harmonic map f from an almost contact semi-Riemannian manifold (M, η, ξ, ϕ, g) into a semi-Riemannian manifold (M′, g′), i.e., C∞ solutions of τ_H(f) ≡ trace_g Π_H β_f = 0, where β_f is the second fundamental form of f and Π_H β_f is the restriction of β_f to the Levi distribution H = ker η. Thus, we studied Levi harmonicity for CR maps between two almost contact semi-Riemannian manifolds. This is perhaps the most general geometric setting (the metrics are semi-Riemannian, in general the contact condition is not satisfied, and the underlying almost CR structures are not integrable). In such a study, an important role is played by the notion of ϕ-condition: ∇_{ϕX}ϕX + ∇_X X = ϕ[ϕX, X], equivalently (∇_X ϕ)ϕX = (∇_{ϕX}ϕ)X, (70) for any X ∈ H. Moreover, as emphasized in [47], the class of almost contact semi-Riemannian manifolds obeying Equation (70) is quite large. For instance, contact semi-Riemannian manifolds, orientable real hypersurfaces in an indefinite Kaehler manifold (with the induced almost contact semi-Riemannian structure) and quasi-cosymplectic manifolds (which contain cosymplectic and almost cosymplectic manifolds) satisfy the ϕ-condition. Moreover, the ϕ-condition extends (cf. [94], Section 3) the so-called condition (A) of Rawnsley [102]. Rawnsley introduced condition (A) in order to study the harmonicity of f-holomorphic maps between an almost Hermitian manifold with coclosed Kaehler form and a Riemannian manifold equipped with an f-structure. Moreover, there is a characterization of contact Riemannian manifolds in terms of a condition which, for Y = ϕX, X ∈ ker η, implies the ϕ-condition. In this Section we report some results of [47,94] for almost contact semi-Riemannian manifolds. Let (M, η, ξ, ϕ, g) be a real (2n + 1)-dimensional almost contact semi-Riemannian manifold and (M′, g′) a semi-Riemannian manifold. Let f : M → M′ be a C∞ map and f^{−1}T(M′) → M the pullback of T(M′) by f. Let ∇^f = f^{−1}∇′ be the pullback of the Levi-Civita connection ∇′ of (M′, g′), i.e., the connection in the vector bundle f^{−1}T(M′) → M induced by ∇′. If (U, x^i) and (U′, y^α) are local coordinate systems on M and M′ such that f(U) ⊂ U′, then ∇^f is locally described in terms of the natural lifts of the vector fields Y ∈ X(U′) and the Christoffel symbols Γ^γ_{αβ} of (M′, g′). Let H = ker η and J = ϕ|_H be the almost CR structure underlying (η, ξ, ϕ, g). The second fundamental form β_f of f is defined in terms of ∇^f, the Levi-Civita connection ∇ of (M, g), and the vector field f_*X, given by (f_*X)(x) = (f_{*x})X_x ∈ T_{f(x)}M′ for any x ∈ M and X ∈ X(M). Next, let τ_H(f) ∈ C∞(f^{−1}TM′) be the tension field defined as the trace of Π_H β_f, the restriction of β_f to H ⊗ H.
Definition 6. Let (M, η, ξ, ϕ, g) be an almost contact semi-Riemannian manifold and (M′, g′) a semi-Riemannian manifold. A C∞ map f : M → M′ is said to be Levi harmonic with respect to H = ker η if τ_H(f) = 0.
We consider the operator ∇*, the formal adjoint of ∇ (see, for example, [16], pp. 108-110); thus, if S is a tensor of type (1, 1), ∇*S = −trace ∇S. Then, after some computations, one obtains an explicit expression for τ_H(f); specializing it when f is a CR map, and further when (M, ϕ, ξ, η, g) is a contact semi-Riemannian manifold, where ε = g(ξ, ξ), one finds that f is Levi harmonic if and only if f_*ξ is collinear to ξ′.
Let S^{2n+1} ⊂ C^{n+1} be the unit sphere endowed with the canonical Sasakian structure (η₀, ξ₀, ϕ₀, g₀); hence ξ₀ is the standard Hopf vector field on S^{2n+1}. Then we have the following.
Corollary 11. ξ₀ : (S^{2n+1}, η₀, g₀) → (T₁S^{2n+1}, η_t, ḡ_t) is a Levi harmonic map for any t > 0.
Remark 18. About the harmonicity of Hopf vector fields, Han and Yim [107] proved that these fields, namely the unit Killing vector fields, are the unique unit vector fields on the unit sphere S³ which define harmonic maps from S³ to (T₁S³, ḡ₀), where ḡ₀ is the Sasaki metric. In [108], as a consequence of a more general result, we obtained in particular that Han-Yim's theorem is invariant under a three-parameter deformation of the Sasaki metric on T₁S³.
Finally, we give a short presentation of the variational treatment of Levi harmonicity. Let (M, η, ξ, ϕ, g) be a (2n + 1)-dimensional almost contact Riemannian manifold and (M′, g′) a Riemannian manifold.
If Ω ⊂ M is a relatively compact domain, we set E_Ω(f) to be the corresponding energy, defined by (79), for any f ∈ C∞(M, M′). Then we obtain the following ([47], Theorem 6.1). Let Ω ⊂ M be a relatively compact domain. A C∞ map f : M → M′ is a critical point for the energy functional E_Ω : C∞(M, M′) → R defined by (79) if and only if τ_H(f) = f_*(∇_ξ ξ) + div(ξ) f_*ξ.
If f : M → M′ is an immersion and a critical point of E_Ω, then f is Levi harmonic if and only if the Reeb field ξ is geodesic and divergence-free.
Remark 19. The many ramifications of harmonicity (subelliptic harmonic, contact harmonic, Levi harmonic, and pseudoharmonic maps) seem to indicate that the theory of harmonic maps has reached a stage of mannerism. However, the mentioned ramifications (to which one may add p-harmonic and exponentially harmonic maps, Gromov's tangentially harmonic maps, and harmonic maps from Finslerian manifolds (cf. references in [47])) are but a measure of the enormous success enjoyed by the theory.
Problems
Question 1. (related to Section 2.3) It is an open problem, to our knowledge, to find examples of non-Sasakian contact semi-Riemannian manifolds which satisfy Equation (23), or to give a proof that an arbitrary contact semi-Riemannian manifold satisfying Equation (23) is Sasakian.
Question 2. (related to Section 2.4) In dimension ≥ 5, the existence of non-trivial semi-Riemannian contact Ricci solitons is, to our knowledge, an open problem.
Question 3. (related to Section 3.2) Study the geometry of an almost contact (semi-)Riemannian structure (η, ξ, ϕ, g) when η defines a pseudohermitian structure.
Question 4. (related to Section 3.4) In dimension ≥ 5, it is an open problem whether Olszak's result holds for a general non-degenerate almost CR manifold.
and r > 0, i.e., r > −2n, equivalently r_L > 2n, the Riemannian K-contact structure (η_t, g_t) obtained in correspondence to the appropriate value of t. As a consequence of Proposition 5 and Theorem 11, one gets: • the Reeb vector field ξ is Killing with respect to the Webster metric g_θ if and only if the pseudohermitian torsion τ vanishes, equivalently L_ξ J = 0; • the almost contact structure (ξ, ϕ, η) is normal, equivalently the Webster metric g_θ is Sasakian, if and only if the almost CR structure is integrable and the pseudohermitian torsion τ vanishes; • a non-degenerate CR manifold is Sasakian if and only if L_ξ J = 0.
1)-dimensional, homogeneous non-degenerate CR manifolds M. This classification is based on a description of the maximal connected compact group of automorphisms of M. • CR manifolds which are locally CR equivalent to the unit sphere S^{2n+1}, endowed with the standard CR structure as a real hypersurface of C^{n+1}, are called spherical CR manifolds. In particular, non-degenerate CR manifolds with vanishing Chern pseudoconformal curvature tensor are spherical ([12], p. 61). If M is a spherical CR manifold, Burns and Shnider ([79], Section 1) defined a development map f : M̃ → S^{2n+1}, where M̃ is its universal cover. Moreover, they proved that if the group of CR automorphisms is transitive on M, then f : M̃ → S^{2n+1} is a covering and f(M̃) is a homogeneous domain D in S^{2n+1}. Thus, to classify the simply connected spherical homogeneous CR manifolds it suffices to classify homogeneous domains in S^{2n+1} ([79], Theorem 3.1). In particular, in dimension three, we have a list of five examples ([79], p. 229). Let (M, g) be a semi-Riemannian manifold of index ν, 0 ≤ ν ≤ n = dim M. Then, we have the following: (i) The standard non-degenerate almost CR structure (H, θ, J) on T_ε(M, g) has vanishing pseudohermitian torsion if and only if (M, g) has constant sectional curvature c = ε. | 20,955.2 | 2019-01-09T00:00:00.000 | [
"Mathematics"
] |
EMPOWERING GEO-BASED AI ALGORITHM TO AID COASTAL FLOOD RISK ANALYSIS: A REVIEW AND FRAMEWORK DEVELOPMENT
Climate change and current susceptibilities have exacerbated coastal flood loss and damage, harming livelihoods and property. Urban areas in Low to Lower-Middle Income Countries (LLMIC) are expected to be disproportionately impacted by the disaster, given a higher share of citizens living in the Low Elevation Coastal Zone, limited financial resources, and poorly constructed disaster protection. Documentation of historical coastal floods and of the population and property affected could advance the assessment by considering those parameters in risk analysis. Besides, incorporating geographic features such as mangroves, an ecological solution for alternative coastal flood protection, into the prediction is also essential. Mangroves are considered suitable for LLMIC, which are primarily situated in the tropical zone. Prediction utilizing spatial Machine Learning (ML) could aid climate-related disaster risk analysis and contribute to risk reduction and policy suggestions to improve disaster resilience. The research aims to archive recent studies on the application of geospatial science empowering Artificial Intelligence, notably ML, in coastal flood risk assessment, so-called GIS-based AI. Another aim is to document population, property, and mangrove distribution across the LLMIC. Artificial Neural Networks were the most utilized method for disaster risk assessment in past research. A total of 58 historical coastal flood events from 2006 to 2021 and 908 expected coastal flood hotspots have been documented. Over 1.2 million km² falls within areas vulnerable to coastal flooding in LLMIC across different settlement types, dominated by large cities (urban areas). Mangroves are mainly distributed across tropical regions, particularly along the Southeast Asian coast.
Background
Coastal cities have experienced and been exposed to a range of coastal hazards, notably due to extreme Sea Level Rise (SLR), with its four significant impacts: coastal flooding, coastal erosion, exacerbated land subsidence, and saltwater intrusion (Azevedo de Almeida and Mostafavi, 2016). According to the IPCC Special Report on the impacts of global warming of 1.5°C, coastal flooding has the highest risk of severe impact among hazards associated with climate change, and each degree of increasing temperature raises coastal flood risk (IPCC, 2018). The risk is projected to increase with rising temperature and to be triggered by current susceptibilities, resulting in population exposure, property damage, and disruption of economic activities in the coastal zone.
Previous documentation revealed that, in 2000, nearly 10% of the world's population (618 million) and 2.3% (2,599 thousand km²) of the land area of the world's coastal countries resided and were situated in the Low Elevation Coastal Zone (LECZ), defined as the contiguous area along the coast that is less than 10 meters above sea level (McGranahan et al., 2007; Neumann et al., 2015). Within urban boundaries, 13% of the total urban population (352 million) lived in the LECZ, covering 8% of the world's urban land area (275 thousand km²). Other documentation indicated that in 2015 the urban area falling under the LECZ was estimated to hold 10% of the world's population and 13% of the world's urban population, equal to 815 million (MacManus et al., 2021). By 2060, the total LECZ population is projected to reach 1.4 billion inhabitants (534 people/km²), equal to 12% of the projected world population of 11.3 billion, under the highest-end forecast scenarios (Neumann et al., 2015). On the other hand, the situation is worsening, exacerbated by the projected increase of such disasters and their damage. According to historical global data from EM-DAT, such events tend to increase in upcoming years, with correspondingly increasing loss and damage (Kirezci et al., 2020). Average global flood losses in 2005 were estimated at approximately US$ 6 billion per year, rising to US$ 52 billion by 2050 with projected socio-economic change alone (Hallegatte et al., 2013). In short, coastal flooding is expected to pose the highest risk of severe loss and damage to livelihoods and property (Chan et al., 2018; Hallegatte et al., 2013; Kirezci et al., 2020; Neumann et al., 2015; Nicholls et al., 2008). It is, therefore, essential to carry out a coastal flood risk analysis to better grasp the disaster across the coastal zone.
Among the world's countries, urban areas in Low and Lower Middle-Income Countries (LLMIC) are expected to be the most vulnerable to coastal floods, given a higher share of the population living in the LECZ and limited financial resources for disaster management. The majority (83%) of the global LECZ population lived in less developed countries (Neumann et al., 2015). About 28% of the urban population of LLMIC lives in the LECZ (McGranahan et al., 2007), which makes it vulnerable. Dasgupta et al. (2009) assessed that approximately 0.3% (194,000 km²) of the territory of 84 developing countries would be impacted by a 1-m SLR, exposing about 56 million people (1.28% of the population). LLMIC also tend to have non-engineered or poorly constructed coastal protection owing to limited financial resources (Takagi, 2019).
Despite their budgetary limitations, many developing countries have the advantage of being able to consider ecological solutions because they are often situated in tropical and subtropical regions. The ecological solution, i.e., the mangrove ecosystem for Eco-DRR, is currently acknowledged as one of the alternative strategies for coastal flood protection. Besides, it provides co-benefits for carbon sequestration both in the soil and in the plant. The strategy also allows stewardship that supports the livelihoods of surrounding communities. Therefore, understanding to what extent ecological solutions can be applied and be beneficial for LLMIC will be advantageous for coastal flood management.
An advancement in coastal flood risk simulation could contribute to risk reduction and policy suggestions, minimizing the loss of livelihoods and property damage associated with coastal floods. The use of Artificial Intelligence (AI), notably the Machine Learning (ML) approach, for flood risk simulation has emerged in the past few years (Chang et al., 2019; Mosavi et al., 2018). This paper documents the utilization of AI, especially ML, for aiding coastal flood risk analysis. ML utilization is encouraged for aiding climate-related disaster analysis and advancing disaster risk prediction, given its ability to handle big spatial data with high spatio-temporal resolution (Huntingford et al., 2019). In addition, since LLMIC are considered a vulnerable region, it is essential to document the population, property, and coastal strategies, especially mangroves as Eco-DRR, along their coastal zones. This documentation could enrich coastal flood risk prediction by incorporating those parameters into the simulation. Besides, it is expected that the significance of ecological solutions in averting loss and damage from coastal floods will also be revealed.
Objective
The study aims to archive recent geospatial science studies that empower Artificial Intelligence, notably ML, in aiding disaster risk analysis, so-called GIS-based AI. Another aim is to document the population and property in LLMIC, which are vulnerable to coastal floods, and the distribution of mangroves along their coastal zones.
Research Boundary
Case Study: The study selects cases in the Low to Lower Middle-Income Countries, or LLMIC, given their vulnerability to coastal floods, i.e., a higher share of citizens living in their urban areas, limited financial resources for coastal flood management, and poorly constructed coastal protection. Urban areas along the coastal zone are selected as essential locations for livelihoods and economic activities within LLMIC.
LLMIC: countries whose GNI per capita is lower than $4,095 (World Bank, 2021). Forty-five countries fall under this category worldwide.
Coastal Flood Risk Terminology:
The study defines the risk as the potential occurrence and impact of coastal floods in terms of loss and damage, including population exposure, property damage, and economic loss. A coastal flood is seawater that penetrates onto land, especially within the Low Elevation Coastal Zone (LECZ), areas lower than 10 m above sea level and hydrologically connected to the coast.
Mapping on Coastal Flood Events, Population, Property, and Mangrove Distribution across LLMIC
The study utilized ArcGIS Pro to document historical and projected coastal flood events, population, property, and mangrove distribution across LLMIC. Historical coastal flood data were recorded from the Global Active Archive of Large Flood Events of the Dartmouth Flood Observatory. Spatial data were collected from global spatial datasets for flood studies, whose sources are well documented in previous research (Kirezci et al., 2020; Lindersson et al., 2020).
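As an illustration of this documentation step, the short sketch below filters the Dartmouth Flood Observatory archive to coastal-cause events in the study period; the file name, column names, and cause keywords are assumptions for illustration, since the archive's exact schema is not given in the text.

```python
import pandas as pd

# Hypothetical export of the DFO Global Active Archive of Large Flood Events.
events = pd.read_csv("dfo_flood_archive.csv", parse_dates=["began"])

# Keep 2006-2021 events whose main cause suggests a coastal flood
# (high tide, storm surge, cyclone, typhoon), as described in the text.
coastal_keywords = ["tide", "surge", "cyclone", "typhoon"]
mask_period = events["began"].dt.year.between(2006, 2021)
mask_cause = events["main_cause"].str.lower().str.contains("|".join(coastal_keywords), na=False)

coastal_events = events[mask_period & mask_cause]
print(f"Coastal flood events 2006-2021: {len(coastal_events)}")
```

Restricting further to events whose coordinates fall inside LLMIC boundaries (for example, with a spatial join in GeoPandas or ArcGIS Pro) would then yield the subset of events reported below.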
Documentation of Past Studies on Empowering Geo-based AI in Disaster Risk Analysis
The study concerns the application of Artificial Intelligence (AI), both Machine Learning (ML) and Deep Learning (DL), to disaster risk analysis, especially flood risk. The study focuses on recent research for the period 2016-2022 at any level and coverage. It emphasizes the kind of analysis used (temporal or spatial machine learning), the ML algorithms used, and the feature variables incorporated in the simulation.
State of the Art Geo-AI Approach in Aiding Disaster Risk Analysis
Artificial Intelligence, especially the Machine Learning (ML) approach, has emerged in the past few decades (Mosavi et al., 2018), as shown in Figure 1. This approach serves various purposes, especially resilience and preparedness against flooding (Saravi et al., 2019). According to the documentation, researchers mainly utilized ANNs (Artificial Neural Networks), followed by the SVM (Support Vector Machine), whose use has gradually increased. Aiyelokun et al. (2021) predicted flood and drought risk through a Naïve Bayes (NB) approach for traditional ML using wind, rainfall, temperature, and Relative Humidity (RH) datasets. Park and Lee (2020) assessed coastal flood risk under climate change impacts in South Korea spatially, using multiple machine learning algorithms (KNN-k-Nearest Neighbor; RF-Random Forest; SVM). They included geographic features such as tide, DEM, and urban characteristics in the analysis, although they did not consider population or coastal protection in the simulation. At the same time, other researchers assessed flood risk using traditional machine learning on flood datasets only (area, location, duration, etc.) for flood classification or prediction of inundation (Chang et al., 2019; Saravi et al., 2019; Tayfur et al., 2018).
In short, based on the documentation, the most common ML algorithms for flood prediction were Adaptive Neuro-Fuzzy Inference System (ANFIS), Multilayer Perceptron (MLP), Wavelet Neural Network (WNN), Ensemble Prediction Systems (EPSs), Decision Tree (DT), Random Forest (RF), Classification and Regression Trees (CART), Support Vector Machine (SVM), Naïve Bayes (NB), and Artificial Neural Networks (ANNs) (Aiyelokun et al., 2021; Ganguly et al., 2019; Manandhar et al., 2020; Park and Lee, 2020; Ruckelshaus et al., 2020; Saravi et al., 2019). They were widely used in flood modeling and provide robust and efficient algorithms for flood prediction. Although most researchers have acknowledged the robustness of ML in flood prediction, they were still analyzing through traditional ML and neglected geographic features, which are essential and may influence the assessment. In addition, previous researchers gave limited estimates of risk in terms of loss and damage, such as area flooded, property affected, and community exposed, including projection analyses considering climate change and population scenarios. Previous research also limited its risk assessment to engineered protection instead of ecological solutions, i.e., Eco-DRR mainly through mangrove ecosystems, which are hypothesized to fit LLMIC primarily situated in tropical or subtropical regions. Table 1 shows the comparison and the originality and novelty recommended for future research.
Cattaneo et al. (2021) identified and divided the catchment areas of urban centers of different sizes, called Urban-Rural Catchment Areas (URCAs), which vary by total population and travel time to the city. URCA is a raster dataset of 30 urban-rural catchment area classes showing different sizes of catchment areas around cities and towns. As explained by the authors, each rural pixel is assigned to a specific category. In this study, it is adapted and simplified into ten categories of urban settlement types, as follows (a small helper for this classification is sketched after the list):
1. Large city (> 5 million)
2. Large city (1 - 5 million)
3. Intermediate city (500,000 - 1 million)
4. Intermediate city (250,000 - 500,000)
5. Small city (100,000 - 250,000)
6. Small city (50,000 - 100,000)
7. Town (20,000 - 50,000)
8. Rural (beyond other types)
9. Dispersed towns (>3 hours to any city)
10. Hinterland (>3 hours to any city)
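The following sketch illustrates one way such a classification could be applied to a pixel's catchment-city population and travel-time attributes; the thresholds mirror the list above, while the function name and the tie-breaking rules are assumptions, since the original URCA rules are more detailed.

```python
def settlement_category(city_population: int, hours_to_city: float) -> str:
    """Map a pixel's catchment-city population and travel time to one of the
    ten simplified settlement types listed above (illustrative thresholds)."""
    if hours_to_city > 3:
        # The source list distinguishes dispersed towns from hinterland;
        # here the decision is collapsed to a single illustrative rule.
        return "Dispersed towns / Hinterland (>3 hours to any city)"
    if city_population > 5_000_000:
        return "Large city (>5 million)"
    if city_population > 1_000_000:
        return "Large city (1-5 million)"
    if city_population > 500_000:
        return "Intermediate city (500,000-1 million)"
    if city_population > 250_000:
        return "Intermediate city (250,000-500,000)"
    if city_population > 100_000:
        return "Small city (100,000-250,000)"
    if city_population > 50_000:
        return "Small city (50,000-100,000)"
    if city_population > 20_000:
        return "Town (20,000-50,000)"
    return "Rural (beyond other types)"

print(settlement_category(750_000, 0.5))  # Intermediate city (500,000-1 million)
```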
Distribution of Urban Settlement across LLMIC
The study concerns the urban areas in the Low Elevation Coastal Zone (LECZ) across Low to Lower-Middle Income Countries (LLMIC) located in tropical and subtropical regions. The selection of these areas is mainly motivated by their higher vulnerability to coastal floods compared with other zones; that is, they have a higher risk of coastal flood occurrence, followed by the potential impact on population exposure and property damage (MacManus et al., 2021; McGranahan et al., 2007; Neumann et al., 2015). Besides, they also have limited financial resources for disaster management and poorly constructed coastal protection (Takagi, 2019). Tropical and subtropical boundaries are selected considering that the mangrove ecosystem fits these regions (Giri et al., 2011; Takagi, 2019). Figure 2 indicates the distribution of urban areas along the LECZ (<10 m asl) in 10 different urban settlement types. Bali Island is shown to depict this clearly, with Denpasar appearing as a large city. In total, over 1.2 million km² falls within areas vulnerable to coastal flooding in LLMIC across different settlement types, dominated by large cities (urban areas).
Distribution of Historical Coastal Flood Events and Coastal Floods Hotspot
Historical coastal flood events were compiled from the Dartmouth Flood Observatory (DFO), specifically coastal flood events from 2006 to 2021. The archive provides flood information from 1985 onward but, owing to missing coordinate information, only events from 2006 are used. In total, 109 cases of coastal floods worldwide caused by high tides, storm surges, cyclones, and typhoons were compiled, of which 58 events occurred across LLMIC areas, as shown in Figure 3. Some of these events will be validated against news reports where available. Each point comprises information such as country, detailed location including coordinates, flood duration, people exposed and damage, primary cause, and flooded area. Coastal flood hotspots were mapped from projections of extreme sea levels (Kirezci et al., 2020): there were 908 total cases of extreme sea-level rise projected for the future (2100), mostly under 1.5 m. Furthermore, there is a vast distribution of coastal hotspots, especially in Southeast Asia, i.e., Indonesia. On average, the coastal hotspots range from 0.7-0.8 m asl. Based on the historical information and the distribution of coastal hotspots, there is a matching case of a city currently exposed to storm surge and, in the future, expected to experience 1.5-2.5 m of extreme sea-level rise.
Coastal 'Hotspot' across LLMIC
Distribution of Mangrove across LLMIC
The study concerns ecological solutions as coastal flood countermeasures in the LLMIC region. The distribution of ecosystem types providing coastal protection benefits has been documented in the SNAPP project (Science for Nature and People Partnership), which shows various forms of coastal protection and their benefits, as indicated in Figure 5 (Li et al., 2017). In addition, mangroves, as a major form of coastal protection applied in the tropical zone, were documented. Figure 6 shows the global distribution of mangroves. Indonesia accounts for almost one-fourth of global mangroves, equal to 3,244 thousand ha (Giri et al., 2011; Kusmana, 2014).
Population and Property Documentation across LLMIC
The documentation of population and property across LLMIC is intended to show that these areas are a priority zone for the assessment. High-resolution population data come from the High Resolution Settlement Layer (HRSL), produced by CIESIN in collaboration with the Facebook Connectivity Lab. HRSL is an estimation of human population distribution at a resolution of 1 arc-second (approximately 30 m) for the year 2015. The population estimates are based on recent census data and high-resolution (0.5 m) satellite imagery from DigitalGlobe. The Connectivity Lab at Facebook developed the settlement extent data using computer vision techniques to classify blocks of optical satellite data as settled (containing buildings) or not. CIESIN used proportional allocation to distribute subnational census data to the settlement extent. The World Bank Living Standards Measurement Study (LSMS) program was used to validate the final dataset against anonymized "ground-truth" household surveys. The World Settlement Footprint (WSF) 2015, in turn, is a 10 m (0.32 arcsec) resolution binary mask outlining the 2015 global settlement extent, derived by jointly exploiting multitemporal Sentinel-1 radar and Landsat-8 optical satellite imagery; settlements are associated with the value 255 and all other pixels with the value 0. Figure 7 shows this high-spatial-resolution documentation of population and property, which can potentially be used for the coastal flood risk simulation.
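As a sketch of how these gridded layers can be combined, the snippet below masks the HRSL population raster to cells whose elevation is below 10 m (the LECZ criterion) and sums the exposed population. The file names are hypothetical, and the code assumes both rasters have already been resampled to a common grid, which in practice requires an explicit reprojection step.

```python
import numpy as np
import rasterio

# Hypothetical inputs, assumed to share the same grid, CRS, and extent.
with rasterio.open("hrsl_population_2015.tif") as pop_src, \
     rasterio.open("elevation_dem.tif") as dem_src:
    population = pop_src.read(1).astype("float64")
    elevation = dem_src.read(1).astype("float64")
    pop_nodata = pop_src.nodata

# Treat nodata population cells as zero inhabitants.
if pop_nodata is not None:
    population = np.where(population == pop_nodata, 0.0, population)

# LECZ criterion: land below 10 m above sea level.
lecz_mask = elevation < 10.0

exposed_population = population[lecz_mask].sum()
print(f"Population within the LECZ: {exposed_population:,.0f}")
```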
Framework Development for Coastal Flood Risk Assessment by Employing Geo-based AI
The coastal flood simulation approach was developed as shown in Figure 8. The study presents a novel use of an ensemble Spatial Machine Learning Algorithm (SMLA) for coastal flood simulation, harnessing big spatial data of high temporal and horizontal resolution globally. Once the documentation of the key parameters is finished, all data will be transformed and harmonized into grid-based data and compiled into (A) a data table of key parameters. This data table will be split into a training dataset (70%) and a testing dataset (30%), containing the data features as predictors and the feature target (indicated by an asterisk (*) above). Subsequently, these data will be (B) modeled through GIS-based Machine Learning (ML) using the ArcGIS API for Python. The following ML approaches will be compared to determine the best accuracy: Random Forest (RF), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayes (NB), and Logistic Regression (LR). Result evaluation such as the confusion matrix, ROC, and F1 score (>75%) will be employed. Previous research has compared these approaches for flood simulation, but rarely for coastal floods and rarely considering DRR measures in the simulation (Faizollahzadeh Ardabili et al., 2019; Park and Lee, 2020).
Coastal flooding is driven by stochastic high-water events, such as storm surges and waves caused by tropical cyclones, coastal storms, and high tides. In other words, a coastal flood is seawater penetrating onto land (Lorie et al., 2020). Several coastal flood parameters are used in the analysis, as shown in Figure 9 and listed below. Sea level rise is defined as the height of water over the mean sea surface in a given time and region; in this analysis, the dataset of sea level anomalies is computed with respect to a twenty-year mean reference period (1993-2012) using up-to-date altimeter standards. Tide and surge are defined as the difference between the sea-water anomaly and the mean sea level (high tide), or the rise in water level above the average tidal level, and do not include waves.
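The sketch below illustrates the (A)-(B) steps with scikit-learn rather than the ArcGIS API for Python mentioned above: a 70/30 split of a gridded key-parameter table, a comparison of the candidate classifiers, and an F1-based check against the 75% threshold. The CSV file and column names are assumptions for illustration only.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical grid-based table: one row per grid cell, with key parameters
# as features and a binary "flooded" column as the asterisked feature target.
grid = pd.read_csv("key_parameter_grid.csv")
X = grid[["sea_level_anomaly", "tide_surge", "elevation", "distance_to_coast", "mangrove_cover"]]
y = grid["flooded"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True, random_state=42)),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=42)),
    "NB": GaussianNB(),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    f1 = f1_score(y_test, pred)
    auc = roc_auc_score(y_test, proba)
    status = "meets" if f1 > 0.75 else "below"
    print(f"{name}: F1={f1:.3f} ({status} the 75% threshold), ROC-AUC={auc:.3f}")
    print(confusion_matrix(y_test, pred))
```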
For further simulation, target variables (Y) are required, such as flooded areas, the property affected, and the population exposed.
The following is the impact documentation for the case study (the northern coast of Pekalongan, Central Java, Indonesia).
The study mapped the areas flooded by coastal floods (Figure 10), generated from Landsat 8 imagery. One limitation is that Landsat did not cover some areas during the coastal floods; the study therefore assumed the longest flood duration.
Figure 10. Flooded area in Pekalongan Northern Coast
In addition, the study developed a damage analysis considering the property affected and the population exposed (Figure 11). Property damage is defined as the GDP loss within the flooded-area boundary, while the population exposed is defined as the total number of inhabitants living in the region where floods occur. The damage analysis used OpenStreetMap (OSM) and gridded GDP data, and the plan is to also consider flood depth, duration, and GDP. The population exposed was calculated using the gridded WorldPop dataset. We found that the main issue was spatial resolution.
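A minimal sketch of how the exposure figures could be computed by overlaying a binary flood-extent raster with gridded population and GDP rasters is shown below; the file names are placeholders, and all rasters are assumed to be co-registered on a common grid, which is precisely where the spatial-resolution issue noted above arises.

# Illustrative sketch (not the study's actual code) of estimating exposure by
# overlaying a flood-extent raster with gridded WorldPop population and
# gridded GDP rasters. All three rasters are assumed to share the same grid.
import rasterio

with rasterio.open("flood_extent.tif") as src:
    flooded = src.read(1) > 0           # binary flood mask

with rasterio.open("worldpop.tif") as src:
    population = src.read(1).astype(float)

with rasterio.open("gridded_gdp.tif") as src:
    gdp = src.read(1).astype(float)

population[population < 0] = 0          # treat negative nodata values as zero
gdp[gdp < 0] = 0

population_exposed = population[flooded].sum()
gdp_loss = gdp[flooded].sum()            # GDP within the flooded-area boundary

print(f"Population exposed: {population_exposed:,.0f}")
print(f"Property damage (GDP in flooded area): {gdp_loss:,.0f}")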
The future study will analyze coastal flood risk with the key parameters above. The impacts above will act as the target variable (Y), and the feature variables will be the key parameters developed. These results may be included in the final paper.
CONCLUSION
Studies on utilizing Machine Learning (ML) to aid disaster risk analysis have emerged recently, and the approach has greatly advanced disaster prediction both spatially and temporally. Although studies on spatial disaster risk assessment using ML remain limited, the trend is growing, and the approach is expected to be used extensively in the near future. The application of this approach in the case study shows that ML gives promising results for advancing risk prediction of loss and damage.
Focusing the study on LLMICs, especially urban zones, is essential given that they are expected to suffer severe impacts from coastal floods. The documentation of population and property has shown that it is crucial to include these regions in the assessment. The study recommends that this information, together with geographic feature parameters such as ecological solutions applied to climate change and population scenarios, be used to enrich the risk forecast under various conditions. | 4,774 | 2022-05-17T00:00:00.000 | [
"Environmental Science",
"Geography",
"Engineering",
"Computer Science"
] |
Asymptotic K-Support and Restrictions of Representations
In the late nineties T. Kobayashi wrote a series of papers in which he established a criterion for the discrete decomposability of restrictions of unitary representations of reductive Lie groups to reductive subgroups. A key tool in the proof of sufficiency of his criterion was the use of the theory of hyperfunctions to study the microlocal behavior of characters of restrictions to compact subgroups. In this paper we show how to replace this tool by microlocal analysis in the $C^\infty$ category.
Introduction
In the late nineties T. Kobayashi wrote a series of papers in which he established a criterion for the discrete decomposability of restrictions of unitary representations of reductive Lie groups to reductive subgroups.
A key tool in the proof of sufficiency of his criterion was the use of the theory of hyperfunctions to study the microlocal behavior of characters of restrictions to compact subgroups. See [6]. In this paper we show how to replace this tool by microlocal analysis in the C ∞ category.
In the following K denotes a connected, compact Lie group with Lie algebra k. We fix a maximal torus with Lie algebra t ⊂ k and an associated positive system. By C̄ ⊂ it*, i = √−1, we denote the closure of the (dual) Weyl chamber. We identify equivalence classes of irreducible representations with their highest weights. Thus we write K̂ = Λ ∩ C̄, where Λ denotes the weight lattice in it*. We also assume an Ad-invariant inner product on k, extended to an Ad-invariant hermitian inner product on the complexification k_C. We denote the norm of λ ∈ k_C by |λ|. Using the inner products we identify t* and t*_C with subsets of k* and of k*_C, respectively. The Fourier series u = Σ_{λ∈K̂} u_λ of any square integrable function u converges in L²(K). A (formal) Fourier series Σ_{λ∈K̂} u_λ converges to a distribution u ∈ C^{-∞}(K) iff the L² norms ‖u_λ‖ of the Fourier coefficients are polynomially bounded as functions of λ ∈ K̂. Smooth functions, u ∈ C^∞(K), are characterized by the rapid decrease of their Fourier coefficients, ‖u_λ‖ = O(|λ|^{-∞}) as λ → ∞. We shall define, for every distribution u, a closed cone afsupp(u) ⊂ C̄ \ 0, the asymptotic Fourier support of u. Essentially this is the smallest cone outside which the Fourier coefficients decrease rapidly. The asymptotic Fourier support is empty for C^∞ functions.
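In display form, the two convergence criteria just recalled read as follows (this is only a restatement of the preceding sentences in LaTeX, not a new statement):

\[
  \sum_{\lambda\in\widehat K} u_\lambda \ \text{converges in } C^{-\infty}(K)
  \iff \exists N:\ \|u_\lambda\|_{L^2} = O\!\left(|\lambda|^{N}\right),
  \qquad
  u \in C^{\infty}(K)
  \iff \forall N:\ \|u_\lambda\|_{L^2} = O\!\left(|\lambda|^{-N}\right)
  \ \ (\lambda \to \infty).
\]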
The wavefront set is a fundamental notion in the microlocal analysis of distributions. Given a closed cone Γ ⊂ T*K \ 0 one defines the space C^{-∞}_Γ(K), which consists of all u ∈ C^{-∞}(K) having their wavefront sets contained in Γ, WF u ⊂ Γ. Under appropriate geometric conditions on Γ some operations can be extended by continuity to C^{-∞}_Γ(K). The wavefront set was used by Howe [4] in a related setting.
The group K × K acts on the cotangent bundle T * K via left and right translations.
The Fourier series of u converges in C^{-∞}_{(K×K)·i^{-1} afsupp(u)}(K). Kashiwara and Vergne [5, 4.5] proved the first assertion in the hyperfunction setting and noticed the C^∞ analogue in a remark. The importance of the second assertion is that it implies, for subgroups satisfying geometric assumptions, that restriction commutes with Fourier series.
A representation π of K in a Hilbert space is said to be polynomially bounded if the K-multiplicity m_K(λ : π) = dim Hom_K(λ, π) of λ in π is polynomially bounded as a function of λ ∈ K̂. In particular, the multiplicities are then finite. The asymptotic K-support of π is a closed cone AS_K(π) ⊂ C̄ \ 0 which approximates the support of m_K(· : π) as λ → ∞. (See [6, (2.7.1)].) Theorem 2 ([6]). Let M be a closed subgroup of K. Denote its Lie algebra by m, and by m^⊥ ⊂ k* the space of conormals. Let π be a unitary representation of K which is polynomially bounded and which satisfies (2). Then the restriction π|_M of π to M is a polynomially bounded representation of M. The asymptotic M-support AS_M(π|_M) is contained in the image of Ad*(K) AS_K(π) under the canonical projection ik* → im*.
It is known that the restriction of an irreducible unitary representation π of a real reductive Lie group G to a maximal compact subgroup is polynomially bounded. For closed subgroups G′ ⊂ G which are stable under the Cartan involution, a criterion for G′-admissibility of π|_{G′} is given in [6, Theorem 2.9]. Theorem 2 contains all the microlocal information needed to rewrite the proof of [6, Theorem 2.9] without having to invoke the theory of hyperfunctions. Thus we offer an alternative approach to Kobayashi's theorem for readers without a strong background in hyperfunction theory.
The proof of Theorem 2 is centered around the notion of the K-character Σ_{λ∈K̂} m_K(λ : π) tr λ of π. The assumptions imply that this series converges in C^{-∞}(K), and that the K-character possesses a restriction to M which turns out to be the M-character of π|_M. Theorem 1 is used to prove this. The continuity statement given in Theorem 1 simplifies the proof of Theorem 2 when compared with the original argument.
The paper is organized as follows. In Section 2 we recall the expansion in eigenfunctions of a positive elliptic operator and its application to Fourier series on K. The asymptotic Fourier support is defined in this section. In Section 3 we study, for central distributions, wavefront sets and the convergence of Fourier series. The theorems are proved in Sections 4 and 5.
This research grew out of the dissertation of the third author. The work was supported by the DFG via the international research training group "Geometry and Analysis of Symmetries".
Asymptotic Fourier support
The space C −∞ (K) of distributions on K is, by definition, the dual space of C ∞ (K). Functions are identified with distributions, L 2 (K) ⊂ C −∞ (K), using the normalized Haar measure dk on K. The L 2 scalar product (·|·) extends to an anti-duality between C −∞ (K) and C ∞ (K). We recall how the theory of Fourier series of distributions and of smooth functions follows from results on eigenfunction expansions of elliptic selfadjoint differential operators.
The Sobolev space H^m(K) consists of all distributions which are mapped into L²(K) by differential operators of order ≤ m. We assume differential operators to be linear with C^∞ coefficients. H^m(K) is equipped with a norm making it a Banach space. Let A be a second order, elliptic differential operator. Regard A as an unbounded operator on L²(K) with domain D(A) = H²(K). Its Hilbert space adjoint A* has, by elliptic regularity theory, the domain D(A*) = H²(K). Assume, in addition, that (Au|u) > 0 if 0 ≠ u ∈ D(A). Then A is positive selfadjoint. The eigenfunctions of A are in C^∞(K).
Proposition 3 ([7, §10]). Let A be a positive selfadjoint second order elliptic differential operator on K. Let Au = Σ_j µ_j² (u|e_j) e_j denote its spectral resolution, where (e_j) ⊂ L²(K) is an orthonormal basis of eigenfunctions and 0 < µ_j ↑ ∞ the corresponding sequence of eigenvalues. For k ∈ N, equip the domain D(A^k) with the corresponding norm. The norm is equivalent with the graph norm. Hence D(A^k) is a Banach space. Obviously, H^{2k}(K) ⊂ D(A^k). By elliptic regularity we have equality, D(A^k) = H^{2k}(K). This holds also topologically because of Banach's theorem. By the Sobolev lemma, C^∞(K) = ∩_k H^{2k}(K) as a projective limit. Hence the norms on D(A^k) define the Fréchet space topology of C^∞(K). The asserted convergence criterion for C^∞(K) follows from this. Using duality between weighted ℓ² sequence spaces we obtain the convergence criterion for C^{-∞}(K). Finally, the formula for the coefficients follows from the (separate) continuity of the antiduality bracket.
The ℓ² estimates in Proposition 3 can be replaced by supremum estimates because Σ_j µ_j^{-k} < ∞ for some k ∈ N. The latter property holds by the Weyl asymptotics of the eigenvalues. Denote by d_λ, χ_λ = tr λ, and M_λ ⊂ L²(K) the dimension, the character and the space of matrix coefficients of λ ∈ K̂. The convolution with d_λ χ_λ is the orthoprojector from L²(K) onto M_λ. If u ∈ L²(K), then its Fourier series Σ_{λ∈K̂} u_λ, u_λ = d_λ u * χ_λ, converges to u in L²(K) by the Peter-Weyl theorem. The (formal) Fourier series of a distribution u ∈ C^{-∞}(K) is defined by the same formula using the convolution of a distribution with a C^∞ function, i.e., (u * ψ)(x) = ∫_K u(y) ψ(y^{-1}x) dy for ψ ∈ C^∞(K), with the integral representing the duality bracket. Observe that χ_λ, u * χ_λ ∈ M_λ ⊂ C^∞(K). In general, we call a series Σ_{λ∈K̂} u_λ with u_λ ∈ M_λ a Fourier series with coefficients u_λ. We use left translation, L_x(k) = xk, to trivialize the tangent bundle TK = K × k and the cotangent bundle T*K = K × k*. Under this identification left translation is the identity on the second components. Right translation R_x(y) = yx acts, on the second components, as the adjoint action, dR_{x^{-1}} : X ↦ Ad(x)X, and as the co-adjoint action. Elements X ∈ U(k_C) of the universal enveloping algebra act as left invariant differential operators X̃ on C^{-∞}(K). The principal symbol of the first order differential operator X̃ associated with X ∈ k_C is σ₁(X̃)(x, ξ) = ⟨ξ, X⟩. Denote the Ad-invariant hermitian inner product on k_C by Q. We assume that Q equals the negative Killing form on [k, k] and that the center of the Lie algebra is orthogonal to [k, k]. Choose, consistent with this orthogonal decomposition, an orthonormal basis. Hence A is elliptic. It follows from the left invariance of X̃ and the invariance of Haar measure that ∫_K X̃v(y) dy = 0 for all v ∈ C^∞(K), X ∈ k_C. Therefore, A is positive selfadjoint with domain H²(K). Furthermore, A is bi-invariant. Therefore, each M_λ, λ ∈ K̂, is contained in an eigenspace of A with eigenvalue µ = µ(λ). There exists a constant C > 0 such that µ(λ) and 1 + |λ|² are comparable up to the factor C; here ρ is the half sum of positive roots. The left inequality holds because A − 1 is the sum of a non-negative operator B and the Casimir operator. It is well-known that the Casimir operator contains M_λ in its eigenspace with eigenvalue |λ+ρ|² − |ρ|². Since B is a sum of −X̃², X ∈ t, the right inequality follows from (−X̃²u|u) = ‖X̃u‖² = ⟨λ, X⟩² ‖u‖², which holds for any highest weight vector u ∈ M_λ.
Summarizing we have the following.
for all, resp. for some, N ∈ Z. If u ∈ C^{-∞}(K), then its Fourier series converges to u. Smoothness properties of a distribution correspond to decay properties of its Fourier coefficients. We define an approximating cone to the directions of those λ ∈ K̂ ⊂ C̄ such that the Fourier coefficients u_λ do not decay rapidly as λ → ∞. A subset of a (finite dimensional) real vector space V (or of a vector bundle) is called conic, or a cone, iff it is invariant under multiplication with positive reals.
Let u ∈ C^{-∞}(K) and Σ_{λ∈K̂} u_λ its Fourier series. The asymptotic Fourier support of u is the closed cone afsupp(u) ⊂ C̄ \ 0 which is defined as follows. A point µ ∈ C̄ \ 0 is in the complement of afsupp(u) iff there is a conic neighbourhood S ⊂ C̄ \ 0 of µ such that Σ_{λ∈S∩K̂} |λ|^{2N} ‖u_λ‖² < ∞ for all N ∈ N.
Remark 5. Instead of working with ℓ²-estimates we can work with supremum estimates such as sup_{λ∈S∩K̂} |λ|^N ‖u_λ‖ < ∞. This follows from the observation made after the proof of Proposition 3.
With a subset S ⊂ V one associates the closed cone S_∞ ⊂ V \ 0 as follows. A point is in the complement of S_∞ if it has a conic neighbourhood which intersects S in a relatively compact set. Equivalently, v ∈ S_∞ iff there exist sequences (v_j) ⊂ S and ε_j ↓ 0 such that lim_j ε_j v_j = v. The cone S_∞ approximates S at infinity.
The K-support supp_K(π) of a representation π of K in a Hilbert space is the set of all λ ∈ K̂ ⊂ C̄ such that λ occurs in π, i.e., m_K(λ : π) > 0. The set AS_K(π) = supp_K(π)_∞ ⊂ C̄ \ 0 is the asymptotic K-support of π.
Wavefront convergence of central Fourier series
The definition of the wavefront set of a distribution is based on the calculus of pseudodifferential operators. We collect, in our context, some definitions and results, referring to [2, Section 2.5], [1], and [3, Section 18.1] for details.
With every pseudodifferential operator A ∈ Ψ^m(K) one associates its set Char A ⊂ T*K \ 0 of characteristic points. A point is non-characteristic if there is a symbol b ∈ S^{-m} such that ab − 1 ∈ S^{-1} in a conic neighbourhood of that point. Here a ∈ S^m(T*K) is, modulo S^{m-1}(T*K), a principal symbol of A. The operator is said to be elliptic at a non-characteristic point. An operator A : C^∞(K) → C^{-∞}(K) is a pseudodifferential operator iff its Schwartz kernel K_A ∈ C^{-∞}(K × K) is a conormal distribution with respect to the diagonal. More explicitly, A ∈ Ψ^m(K) iff the singular support of K_A is contained in the diagonal and K can be covered with open sets U ⊂ K such that the kernel is given by an oscillatory integral (3) K_A(y′, y) = ∫ e^{iϕ(y′,η)−iϕ(y,η)} a(y′, y, η) dη, y′, y ∈ U. The phase function ϕ ∈ C^∞(U × k*) is real-valued, linear in the second variable, and nondegenerate, i.e., det ϕ″_{yη} ≠ 0. The amplitude a belongs to the symbol space S^m(U × U × k*). A is elliptic at ξ = ϕ′_x(x, ζ) ∈ T*_x K \ 0, x ∈ U, iff there is a neighbourhood U₀ ⊂ U of x, a conic neighbourhood V of ζ, and C > 0 such that |a(y, y, η)| ≥ |η|^m / C for y ∈ U₀, η ∈ V, |η| > C.
Let u ∈ C^{-∞}(K). The wavefront set WF u ⊂ T*K \ 0 equals ∩ Char A, where the intersection is taken over all pseudodifferential operators A which satisfy Au ∈ C^∞(K). Let Γ ⊂ T*K \ 0 be a closed cone. The space C^{-∞}_Γ(K) of distributions on K which have their wavefront sets contained in Γ is equipped with a locally convex topology. It contains C^∞(K) as a sequentially dense subspace. Convergence of a sequence, u_j → u in C^{-∞}_Γ(K), is equivalent to u_j → u (weakly) in C^{-∞}(K) and the existence, for every (x, ξ) ∈ (T*K \ 0) \ Γ, of a pseudodifferential operator A ∈ Ψ^m(K) such that (x, ξ) ∉ Char A, WF(A) ∩ Γ = ∅, and Au_j → Au in C^∞(K). Here WF(A) is the smallest conic subset of T*K \ 0 such that A is of order −∞ in the complement. (See the remark following Theorem 18.1.28 of [3].) Let K act on C^∞(K) via the right regular representation, R_x f(y) = f(yx). The corresponding action of the Lie algebra k_C is by left invariant vector fields, dR_e(X)f = X̃f.
The following lemma should be compared with [5, 3.1].
Lemma 6. Let Σ_{λ∈K̂} u_λ be a Fourier series which converges in C^{-∞}(K). Assume that each u_λ is a highest weight vector for the right regular representation acting irreducibly on a subspace of M_λ. Let S ⊂ C̄ \ 0 be a closed cone. Then Σ_{λ∈S∩K̂} u_λ converges in C^{-∞}_{K×i^{-1}S}(K).
Proof. The differential equations X̃u_λ = 0 and X̃u_λ = ⟨λ, X⟩ u_λ hold for X ∈ n and X ∈ t, respectively. Here n ⊂ k_C denotes the sum of the positive root spaces.
Let (x, ξ) ∈ K × k* \ 0, ξ ∉ i^{-1}S. It suffices to find a pseudodifferential operator A, elliptic at (x, ξ), such that the series Σ_{λ∈S∩K̂} Au_λ converges in C^∞(K). If ξ ∉ t*, then there exists X ∈ n with ⟨ξ, X⟩ ≠ 0; the first order differential operator A = X̃ has the desired properties. Now assume ξ ∈ t*. Then the cone S − R₊ iξ is a closed subset of it* \ 0. It follows by a simple compactness argument that |λ| + |ξ| ≤ C|λ − iξ| with a constant C > 0 independent of λ ∈ S. Assume that S is convex. Choose X ∈ t which strictly separates the disjoint convex cones −R₊ ξ and iS. We infer that there exists c > 0 such that (4) holds, where Γ = R₊ ξ. By continuity (4) also holds in a conic neighbourhood Γ ⊂ k* \ 0 of ξ. Let U ⊂ K be an open neighbourhood of x and H ⊂ U a hypersurface containing x such that the following holds. The real vector field X̃ is transversal to H and every maximally extended integral curve of X̃ in U hits H in a unique point. Furthermore, y ↦ exp^{-1}(x^{-1}y) maps U diffeomorphically onto an open neighbourhood of the origin in k. Using the method of characteristics we solve, for every η ∈ k*, the initial value problem. The solution ϕ ∈ C^∞(U × k*) is linear in the second variable and ϕ′_x(x, η) = η holds in T*_x K = k* for all η. In particular, ϕ″_{yη} is nondegenerate at y = x. We have X̃(e^{-iϕ(·,η)} u_λ) = ⟨λ − iη, X⟩ e^{-iϕ(·,η)} u_λ. Since X̃ is left invariant, ∫_K X̃v(y) dy = 0 holds for all v ∈ C^∞(K). Therefore we can perform partial integration. Iterating N times and estimating the integral on the right using the Cauchy-Schwarz inequality, we obtain an estimate with a constant C_N > 0 independent of λ ∈ S ∩ K̂ and η ∈ Γ. In view of (4) we get rapid decay for all λ ∈ S ∩ K̂ and N ∈ N. Since the L² norms of the Fourier coefficients are polynomially bounded we obtain, for every χ ∈ C^∞_c(U), the estimate (5). We can assume that, making U and Γ smaller if necessary, det ϕ″_{yη} ≠ 0 in U × k*, and ϕ′_y(U × Γ) ∩ i^{-1}S = ∅. Fix χ ∈ C^∞_c(U) with χ(x) = 1. Choose a symbol b ∈ S⁰(k*) with supp b ⊂ Γ and b = 1 in a conic neighbourhood of ξ minus a compact set. Define the pseudodifferential operator A ∈ Ψ⁰(K) with kernel K_A supported in U × U and given by (3) with amplitude a(y′, y, η) = χ(y′) b(η) χ(y). It follows from (5) that Σ_{λ∈S∩K̂} Au_λ converges in C^∞(K). Furthermore, A is elliptic at (x, ξ). Hence we have proved the assertion under the additional assumption that S is convex. To remove this assumption, observe that S can be covered by finitely many closed convex cones each not containing iξ. Decompose the Fourier series correspondingly.
Pullback and pushforward of distributions are well-defined and continuous under assumptions on the wavefront sets. With any C^∞ map f : X → Y between smooth manifolds one associates its canonical relation C_f. For a closed cone Γ ⊂ T*Y \ 0 define its pullback cone f*Γ = C_f^{-1} ∘ Γ ⊂ T*X. If f*Γ does not intersect the zero section, then the pullback f*u = u∘f extends from smooth functions to a (sequentially) continuous pullback operator f*. If f is a proper map, then the pushforward operator f_* is defined. If, in addition, f is a submersion and Γ ⊂ T*X \ 0 is a closed cone, then f_*Γ := C_f ∘ Γ ⊂ T*Y \ 0 and the pushforward restricts to a (sequentially) continuous map f_*. An important example of a pullback operator is the restriction to a submanifold M ⊂ K. It is defined on distributions having wavefront sets disjoint from the conormal bundle of M. The pushforward by a projection (x, y) ↦ x is integration along fibers.
Taking the average Av f(x) = ∫_K f(yxy^{-1}) dy of a function f extends uniquely from C^∞(K) to an operator Av. Proof. Define g : K × K → K, g(x, y) = yxy^{-1}, and p : K × K → K, p(x, y) = x. Then Av = p_* g* on C^∞(K). By assumption Γ = K × S, where S ⊂ k* \ 0 is an Ad*-invariant closed cone. A computation shows that ((yxy^{-1}, ζ), (x, y, ξ, η)) ∈ C_g iff ξ = Ad*(y^{-1})ζ and η = Ad*(xy^{-1})ζ − Ad*(y^{-1})ζ. Clearly, g*Γ does not intersect the zero section. Hence the pullback operator g* is defined. Composing C_g^{-1} with the relation C_p leads to η = 0 and p_* g* Γ ⊂ K × Ad*(K)T. The assertion follows from this. Proof. For each λ ∈ K̂ we choose a highest weight vector w_λ ∈ M_λ of an irreducible subrepresentation of M_λ under the right regular representation. We may view w_λ as a matrix coefficient of the form w_λ(x) = (R_x v|v) with v ∈ M_λ, ‖v‖ = 1. Then w_λ(e) = 1, and ‖w_λ‖ ≤ sup_K |w_λ| ≤ 1 by the Cauchy-Schwarz inequality. The central function Av w_λ is a multiple of χ_λ. Comparing values at e we get Av w_λ = d_λ^{-1} χ_λ.
The dimension d_λ and, in view of Corollary 4, the Fourier coefficients (u|χ_λ) of u grow at most polynomially in λ. Hence w = Σ_{λ∈S∩K̂} d_λ (u|χ_λ) w_λ converges in C^{-∞}(K). By Lemma 6 the series converges in C^{-∞}_{K×i^{-1}S}(K). The assertion follows from Lemma 7 since u = Av w.
Proof of Theorem 1
Every v ∈ C^{-∞}(K) defines a convolution operator C^∞(K) → C^∞(K), w ↦ v * w. This is a continuous linear map which commutes with right translations. Conversely, every such map is given by convolution with a unique element v ∈ C^{-∞}(K). Composition of maps defines the convolution u * v. The formula is evident for smooth functions and extends to distributions by separate sequential continuity. The composition C_p ∘ C_f^{-1} of the canonical relations is computed explicitly. The wavefront set of a tensor product is contained in the union of products of the wavefront sets with the zero sections over the supports; moreover, as a bilinear map the tensor product satisfies corresponding separate continuity properties. It follows that, for any two cones S₁ and S₂ in k* \ 0, the convolution (u₁, u₂) ↦ u₁ * u₂ defines a separately sequentially continuous bilinear map (6). Convolution with the Dirac distribution δ = Σ_{λ∈K̂} d_λ χ_λ ∈ C^{-∞}(K) is the identity, δ * u = u. In the proof of the theorem we need δ_S = Σ_{λ∈S∩K̂} d_λ χ_λ ∈ C^{-∞}(K), where S ⊂ C̄ \ 0. If S is a closed cone, then it follows from Proposition 8 that the series also converges to δ_S in C^{-∞}_{K×i^{-1}Ad*(K)S}(K). Now, turning to the proof of the theorem, let u ∈ C^{-∞}(K). Assume that S ⊂ C̄ \ 0 is a closed cone which contains afsupp(u) in its interior. Then the series of δ_{C̄\S} * u converges in C^∞(K). Using (6) with S₁ = i^{-1} Ad*(K)S we deduce from the above that the Fourier series of δ_S * u converges in C^{-∞}_{K×i^{-1}Ad*(K)S}(K). It follows that the Fourier series of u = δ_S * u + δ_{C̄\S} * u converges in this space, too. In particular, we have (K × K) · WF(u) ⊂ K × i^{-1} Ad*(K)S. This implies that the left-hand side in (1) is contained in the right-hand side.
To prove the opposite inclusion, let S be a closed cone ⊂ C̄ such that WF(u) ∩ (K × i^{-1} Ad*(K)S) = ∅. We apply (6) to δ_S * u and deduce that the Fourier series Σ_{λ∈S∩K̂} u_λ converges in C^∞(K). This implies that S is disjoint from the asymptotic Fourier support of u. Since the closure of a Weyl chamber is a fundamental domain for the coadjoint action on k*, this implies K × i^{-1} afsupp(u) ⊂ (K × K) · WF u.
Proof of Theorem 2
The polynomial boundedness of π implies the finiteness of the multiplicities m_K(λ : π) and the convergence of its K-character (7). The support of Θ^K_π equals supp_K(π). We have AS_K(π) = supp_K(π)_∞ = afsupp(Θ^K_π). The second equality holds because the L²-norm of each non-zero summand in (7) is ≥ 1. From Theorem 1 it follows that (7) converges in C^{-∞}_Γ(K), where Γ = K × i^{-1} Ad*(K) AS_K(π). Assumption (2) implies that the conormal bundle of M, which is a subset of K × m^⊥, is disjoint from Γ. Hence the restriction of Θ^K_π to M is defined. The assertions of the theorem will follow from this. Moreover, it says that Θ^M_{π|_M} = Θ^K_π|_M. Let µ ∈ M̂. Fix a representation space H_µ. Let ρ = ind_M^K(µ) denote the unitary representation of K induced by µ. We view the representation space H_ρ of ρ as the subspace of L²(K, H_µ) defined by f(xm) = µ(m^{-1})f(x), m ∈ M, for almost every x ∈ K. Then f ∈ L²(K, H_µ) belongs to H_ρ only if it satisfies, in the sense of distributions, the first order system of differential equations Ỹf + µ_*(Y)f = 0, Y ∈ m. Here µ_* is the Lie algebra representation induced by µ. The characteristic variety of Ỹ + µ_*(Y) is contained in K × Y^⊥. Hence WF(f) ⊂ K × Ad*(K)m^⊥ if f ∈ H_ρ. Theorem 1, generalized to vector valued distributions, implies that afsupp_K(f) ⊂ i Ad*(K)m^⊥ for every f ∈ H_ρ. This implies AS_K(ρ) ⊂ i Ad*(K)m^⊥. Indeed, if this were not true, we could find a closed cone S ⊂ C̄ \ 0, S ∩ i Ad*(K)m^⊥ = ∅, and f = Σ_{λ∈K̂∩S} f_λ ∈ H_ρ, f_λ ∈ W_λ, such that Σ_{λ∈K̂∩S} |λ|^{2N} ‖f_λ‖² = ∞ for some N ∈ N. Here W_λ denotes the λ-isotypical subspace of H_ρ. Using assumption (2) we deduce AS_K(ρ) ∩ AS_K(π) = ∅. Therefore supp_K(ρ) ∩ supp_K(π) is relatively compact, hence finite. By Frobenius reciprocity we get, with sums having only finitely many nonzero summands, | 6,941.2 | 2009-05-07T00:00:00.000 | [
"Mathematics"
] |
The Argument Web: an Online Ecosystem of Tools, Systems and Services for Argumentation
The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.
Introduction
The growth in computational models of argument over the past two decades has been phenomenal and represents a success story in humanities-sciences interdisciplinary research. Fragmentation in the area, however, has consistently threatened to undermine its maturity with reduced incremental development and a litany of wheel re-inventions. This is the problem that has been tackled by the Argument Web: to provide an environment or 'ecosystem' in which data sharing and re-use, incremental development of software, and theory and application interoperability are the quotidian modus operandi of research in the field. This paper aims to do two things: first, to provide an overview of the Argument Web and its various foundations, applications, tools and datasets for an audience that is more familiar with philosophical ground; and second, to show how theories of argumentation developed in philosophy, communication, linguistics and social psychology have influenced and been taken up in both the core engineering that has brought the Argument Web into being, and also in its various applications and software systems that are starting to drive usage.
The approach in the paper is necessarily broad and shallow. We begin in Section 2 with central concepts of argument representation, first for common argument concepts, then extensions particularly for dialogue (this separation reflects discussions in the philosophical literature stretching back to the early 1980s on the relationship between argument products and argument processes). The extended Section 2 thus represents the underpinning upon which the remainder of the paper depends. The first port of call for the application of these argument representations is in the philosophically very familiar task of argument analysis, in Section 3. The introduction of analytical techniques, however, is divided by the rather engineering-oriented distinction between individual and collaborative analysis. For philosophy, and particularly critical thinking and informal logic, the most prominent use of argument analysis techniques is in pedagogy and pedagogical application of the Argument Web; these are reviewed in Section 4. Driven in part by the social epistemological movement, many areas of philosophy are increasingly being influenced by a empiricist perspective and argumentation is no exception. Section 5 looks at the ways in which the Argument Web can support such research through tools for curation and management of linguistic resources. With so much data available, navigation and evaluation of argument becomes increasingly challenging, and these areas are briefly reviewed in Sections 6 and 7, respectively. An extraordinarily hot topic in computational models of argument is currently the challenge of argument mining-automatically extracting argument structures from natural text; some of the approaches and successes are reviewed briefly in Section 8 and then finally in Section 9 an example from the philosophy of science (and specifically of mathematics) is summarised to show how many of these pieces can be brought together to achieve significant theoretical advances. Across this breadth, the paper aims to equip the reader with both theoretical and practical understanding of the Argument Web.
Argument Representation
The interdisciplinary overlap between philosophical and computational approaches to argumentation has been a major growth area within Artificial Intelligence over the past three decades. AI has long been an idiosyncratic hybrid of pure theory and pragmatic engineering, and nowhere is this more true than in computational models of argument. The mathematical theories of argument which originate in works such as that by Dung (1995) have been enormously influential in theoretical models of reasoning in AI, because they provide the machinery for handling issues such as defeasibility and inconsistency in ways that traditional classical logics are not able to support. These same mathematical theories are, however, barely recognisable as theories of argumentation as the philosophical and communication scholarly communities would know them-they serve rather as 'calculi of opposition'. 1 At the same time, AI is also home to applications of theories of informal logic (Gordon et al. 2007), of pedagogic critical thinking (Reed and Rowe 2004), of rhetoric (Crosswhite et al. 2003) and of legal argumentation (Walton 2005): these applications are all rooted squarely in the tradition of argumentation theory as a discipline, and diverge from it in ways that are typically incremental and driven by pragmatic necessity. structured format. This, then, is the focus. If we want to set out to try to support harmonisation between systems, and to do so in a way that is as closely tied as possible to current models from the theory of argumentation, then we start with a simple task that is common across most AI systems of argument: representation.
The Argument Interchange Format
The Argument Interchange Format (AIF) (Chesñevar et al. 2006) was developed to provide a flexible-yet semantically rich-way of representing argumentation structures. The AIF was put together to try to harmonise the strong formal tradition initiated to a large degree by Dung (1995), the natural language research described at CMNA workshops since 2001, 3 and the multi-agent argumentation work that has emerged from the philosophy of Walton and Krabbe (1995), amongst others.
The AIF can be seen as a representation scheme constructed in three layers. At the most abstract layer, the AIF provides a hierarchy of concepts which can be used to describe argument structure. This hierarchy describes an argument by conceiving of it as a network of connected nodes that are of two types: information nodes that capture data (such as datum and claim nodes in a Toulmin analysis, or premises and conclusions in a box-and-arrow analysis in the style of Freeman (1991), for example), and scheme nodes that describe passage between information nodes (similar to the application of warrants or rules of inference). Scheme nodes in turn come in several different guises, including scheme nodes that correspond to support or inference (or 'rule application nodes'), scheme nodes that correspond to conflict or refutation (or 'conflict application nodes') and scheme nodes that correspond to value judgements or preference orderings (or 'preference application nodes'). At this topmost layer, there are various constraints on how components interact: information nodes, for example, can only be connected to other information nodes via scheme nodes of one sort or another. Scheme nodes, on the other hand, can be connected to other scheme nodes directly (in cases, for example, of arguments that have inferential components as conclusions, e.g. in patterns such as Kienpointner's (1992) 'warrant-establishing arguments'). Inference captured by multiple incoming scheme nodes thus naturally corresponds to convergent argumentation; that covered by multiple premises supporting a single incoming scheme node corresponds to linked argumentation (Walton 2006). The AIF also provides, in the extensions developed for the Argument Web (Rahwan et al. 2007), the concept of a 'Form' (as distinct from the 'Content' of information and scheme nodes). Forms allow the AIF to represent uninstantiated definitions of schemes (this has practical advantages in allowing different schemes to be represented explicitly-such as the very rich taxonomies of Walton et al. (2009), Perelman andOlbrechts-Tyteca (1969), Grennan (1997), etc.-and is also important in law, where arguing about inference patterns can become important).
A second, intermediate layer provides a set of specific argumentation schemes (and value hierarchies, and conflict patterns). Thus, the uppermost layer in the AIF ontology lays out that presumptive argumentation schemes are types of rule application nodes, but it is the intermediate layer that cashes those presumptive argumentation schemes out into Argument from Consequences, Argument from Cause to Effect and so on. At this layer, the form of specific argumentation schemes is defined: each will have a conclusion description (such as 'A may plausibly be taken to be true') and one or more premise descriptions (such as 'E is an expert in domain D'). Walton's schemes (Walton 1996;Walton et al. 2009) have been developed in full for the AIF (Rahwan et al. 2007).
It is also at this layer that, as Rahwan et al. (2007) have shown, the AIF supports a sophisticated representation of schemes and their critical questions. In addition to descriptions of premises and conclusions, each presumptive inference scheme also specifies descriptions of its presumptions and exceptions. Presumptions are represented explicitly as information nodes, but, as some schemes have premise descriptions that entail certain presumptions, the scheme definitions also support entailment relations between premises and presumptions. The AIF has here largely followed the lead of a collaboration between Walton and two AI researchers, Gordon and Prakken (Gordon et al. 2007).
Finally, the third and most concrete level supports the integration of actual fragments of argument, with individual argument components (such as strings of text) instantiating elements of the layer above. At this third layer, an instance of a given scheme is represented as a rule application node (with the terminology of rule application-RA-and conflict scheme application-CA-and so on now easier to interpret). This rule application node is said to fulfill one of the presumptive argumentation scheme descriptors at the level above. As a result of this fulfillment relation, premises of the rule application node fulfill the premise descriptors, the conclusion fulfils the conclusion descriptor, presumptions can fulfill presumption descriptors, and conflicts can be instantiated via instances of conflict schemes that fulfill the conflict scheme descriptors at the level above. Again, all the constraints at the intermediate layer are inherited, and new constraints are introduced by virtue of the structure of the argument at hand.
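To make the three layers concrete, the toy sketch below (an informal illustration, not one of the official AIF reifications discussed next) encodes a single rule application whose premises and conclusion fulfil descriptors like those quoted above ("E is an expert in domain D", "A may plausibly be taken to be true"), together with the upper-layer constraint that I-nodes may only be linked through scheme nodes.

# Toy illustration (not an official AIF reification) of I-nodes and a rule
# application (RA) node, with the upper-layer constraint that information
# nodes may only be connected via scheme nodes such as RA, CA or PA nodes.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str          # "I", "RA", "CA" or "PA"
    text: str = ""

@dataclass
class ArgumentGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (from_id, to_id) pairs

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, src, dst):
        a, b = self.nodes[src], self.nodes[dst]
        # AIF constraint: an I-node may not point directly at another I-node
        if a.node_type == "I" and b.node_type == "I":
            raise ValueError("I-nodes must be connected through scheme nodes")
        self.edges.append((src, dst))

graph = ArgumentGraph()
graph.add_node(Node("i1", "I", "E is an expert in domain D"))
graph.add_node(Node("i2", "I", "E asserts that A is true"))
graph.add_node(Node("ra1", "RA", "presumptive inference scheme instance"))
graph.add_node(Node("i3", "I", "A may plausibly be taken to be true"))
for premise in ("i1", "i2"):
    graph.add_edge(premise, "ra1")    # linked premises share one RA node
graph.add_edge("ra1", "i3")           # the RA node supports the conclusion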
The AIF standard is available in a number of different 'reifications'-that is, in a number of different computer languages, from detailed and extensive data expressed in a language compatible with the Semantic Web (viz., RDF and OWL), through versions in languages familiar to programmers in commercial and scholarly domains such as JSON and Prolog, as well as compact languages and formats aimed, for example, at visualisation (such as DOT and SVG).
In addition, various threads of research have demonstrated equivalences (often with some limiting or simplifying assumptions) with major other computational approaches to argumentation such as the high profile Carneades tool which has a jurisprudential foundation and offers rich evaluative mechanisms (Gordon et al. 2007); the popular analysis tool Rationale (van Gelder 2007); one of the most mature approaches to formal representation of 'structured' (in contrast to 'abstract' argumentation) in ASPIC+ (Bex et al. 2013); and a specific form of abstract argumentation that grounds arguments in observations, Evidence-based Argumentation Frameworks or EAFs (Oren et al. 2010).
Extensions to Handle Dialogue
The next step is to allow the representation of dialogue. Several preliminary steps in this direction have been taken (Reed 2006;Modgil and McGinnis 2008;Reed et al. 2008), building on work in computational systems on protocol specification (see, e.g. the work of the FIPA group 4 ) and in philosophy on dialogical games (such as Mackenzie 1990). The motivation for this work can be summarised through O'Keefe's distinction between argument 1 and argument 2 : argument 1 is an arrangement of claims into a monological structure, whilst argument 2 is a dialogue between participants-as O' Keefe (1977)[p122] puts it, 'The distinction here is evidenced in everyday talk by ... the difference between the sentences 'I was arguing 1 that P' and 'we were arguing 2 about Q.' ' There are self-evident links between argument 1 and argument 2 in that the steps and moves in the latter are constrained by the dynamic, distributed and inter-connected availability of the former, and further in that valid or acceptable instances of the former can come about through sets of the latter. An understanding of these intricate links which result from protocols and argument-based knowledge demands a representation that handles both argument 1 and argument 2 coherently. It is this that the dialogic extensions to the AIF set out to provide.
The fundamental building blocks of dialogues are the individual locutions. In the context of the AIF, Modgil and McGinnis (2008) have proposed modelling locutions as information nodes. We follow this approach primarily because statements about locution events are propositions that could be used in arguments. So for example, the proposition, Bob says, 'ISSA will be in Amsterdam' could be referring to something that happened in a dialogue (and later we shall see how we might therefore wish to reason about the proposition, ISSA will be in Amsterdam) -but it might also play a role in another, monologic argument (say, an argument from expert opinion, or just an argument about Bob's communicative abilities).
Associating locutions exactly with information nodes, however, is insufficient. There are several features that are unique to locutions, and that do not make sense for propositional information in general. Foremost amongst these features is that locutions often have propositional content. The relationship between a locution and the proposition it employs is, as Searle (1969) argues, constant -i.e. "propositional content" is a property of (some) locutions. Whilst there are other, non-locution propositions that may also relate to further propositions, (e.g. the proposition, It might be the case that it will rain) the relationship of propositional content is certainly not ubiquitous (It is Tuesday does not have propositional content-it simply is a proposition). On these grounds, we should allow representation of locutions to have propositional content, but not demand it for information nodes in generaland therefore the representation of locutions should form a subclass of information nodes in general. We call this subclass "locution nodes". There are further reasons for distinguishing locution nodes as a special case of information nodes, such as the identification of which dialogue(s) a locution is part of. (There are also some features which one might expect to be unique to locutions, but on reflection are features of information nodes in general. Consider, for example, a time index-we may wish to note that Bob said, ISSA will be in Amsterdam at 10am exactly on the 1st March 2010. Such specification, however, is common to all propositions. Strictly speaking, It might be the case that it will rain is only a proposition if we spell out where and when it holds. In other words, a time index could be a property of information nodes in general, though it might be rarely used for information nodes and often used in locution nodes).
Given that locutions are (a subclass of) information nodes, they can, like other information nodes, only be connected through scheme nodes. There is a direct analogy between the way in which two information nodes are inferentially related when linked by a rule application, and the way in which two locution nodes are related when one responds to another by the rules of a dialogue. Imagine, for example, a dialogue in which Bob says, ISSA will be in Amsterdam and Simon responds by asking, Why is that so?. In trying to understand what has happened, one could ask, 'Why did Simon ask his question?' Now although there may be many motivational or intentional aspects to an answer to this question, there is at least one answer we could give purely as a result of the dialogue protocol, namely, 'Because Bob had made a statement'. That is to say, there is plausibly an inferential relationship between the proposition, Bob says ISSA will be in Amsterdam and the proposition, Simon asks why it is that ISSA will be in Amsterdam. That inferential relationship is similar to a conventional inferential relationship, as captured by a rule application. Clearly, though, the grounds of such inference lie not in a scheme definition, but in the protocol definition. Specifically, the inference between two locutions is governed by a transition, so a given inference is a specific application of a transition. Hence, we call such nodes "transition application nodes" and define them as a subclass of rule application nodes. (Transition applications bear strong resemblance to applications of schemes of reasoning based on causal relations: this resemblance is yet to be further explored, but further emphasises the connection between inference and transition).
So, in just the same way that a rule application fulfils a rule of inference scheme form, and the premises of that rule application fulfil the premise descriptions of the scheme form, so too, a transition application fulfils a transitional inference scheme form, and the locutions connected by that transition application fulfil the locution descriptions of the scheme form. The result is that all of the machinery for connecting the normative, prescriptive definitions in schemes with the actual, descriptive material of a monologic argument is re-used to connect the normative, prescriptive definitions of protocols with the actual, descriptive material of a dialogic argument.
With these introductions, the upper level of this extended AIF is almost complete. For both information (I-) nodes and rule application (RA-) nodes, we need to distinguish between the old AIF class and the new subclass which contains all the old I-nodes and RA-nodes excluding locution (L-) nodes and transition application (TA-) nodes (respectively). As the various strands and implementations of AIF continue, we will want to continue talking about I-nodes and RA-nodes and in almost all cases, it is the superclass that will be intended. We therefore keep the original names for the superclasses (I-node and RA-node), and introduce the new subclasses I and RA for the sets I-nodes\L-nodes and RA-nodes\TA-nodes respectively.
One final interesting question is how, exactly, L-nodes are connected to I-nodes. So for example, what is the relationship between a proposition p and the proposition 'X asserted p'? According to the original specification of the AIF, direct I-node to I-node links are prohibited (and with good reason: to do so would introduce the necessity for edge typing-obviating this requirement is one of the advantages of the AIF approach). The answer to this question is already available in the work of Searle (1969) and later with Vanderveken (Searle and Vanderveken 1985). The link between a locution (or, more precisely, a proposition that reports a locution) and the proposition (or propositions) to which the locution refers is one of illocution. The illocutionary force of an utterance can be of a number of types (Searle and Vanderveken (1985) explore this typology and its logical basis in some detail) and can involve various presumptions and exceptions of its own. In this way, it bears more than a passing resemblance to scheme structure. These schemes are not capturing the passage of a specific inferential force, but rather the passage of a specific illocutionary force. As a result, we refer to these schemes as illocutionary schemes or Y schemes. Specific applications of these schemes are then, following the now familiar pattern, illocutionary applications or YA nodes. Illocutionary schemes are referred to with gerunds (asserting, promising, etc.), whilst transitional inference schemes are referred to with nouns (response, statement, etc.), which both ensures clarity in nomenclature, and is also true to the original spirit and many of the examples in both the Speech Act and Dialogue Theory literatures.
Further Extensions
As a common interlingua for representing argumentation, the AIF thus captures a simple core of argument-related notions. Whether working in philosophy, linguistics or computer science, it is inevitable that specific research projects, teams or schools will need to go beyond this lowest common denominator. For that reason, the AIF infrastructure has been designed to support not only a core vocabulary (or "ontology"), but also a principled mechanism by which it can be extended with additional, supplementary representation systems (or "adjunct ontologies", AOs). AOs are designed according to a general guiding principle that they should encapsulate the AIF core. One of the most mature AOs available for the Argument Web is the social layer, a set of components for maintaining information about arguers and users of argumentation software (Snaith et al. 2013). Applications that make use of the social layer (such as Argublogging ) access the social layer only; the social layer encapsulates all of the information available in the AIF. In this way, AIF resources can continue to be shared between all Argument Web systems, whilst the specific needs of individual sets of applications with specific requirements can be catered for appropriately. (Of course, in many cases, including that of the social layer AO, there are also multiple applications which can share these extended data sets -OVA and Arvina, discussed later, both make use of the social layer, for example).
Examples of the AIF in Action
To show how the AIF handles familiar types of argumentative structures and exchanges, we include here three examples: a Walton argumentation scheme; a Pollock-style undercutter and a dialogical exchange. In Fig. 1, the claim women need free access to abortion forms the conclusion of four convergent arguments. Three correspond to instances of the argumentation scheme for Argument from Positive Consequences, plus a further argument not assigned any specific scheme in this analysis. For each instance, it is possible to reconstruct the implicit components associated with the scheme, such as, e.g. achieving full political, social and economic equality with men is desirable. In this way, the implicit premises of an argument can be identified by matching up explicitly mentioned structure with characteristic structure in the general scheme definition. The template provided by the scheme points the way to the implicit premises. Such reconstruction has been demonstrated to be a particularly tricky task (Hitchcock 2002), so this template-driven enthymeme resolution is theoretically valuable. Schemes also embody critical questions which, as explored by Verheij and Arguments (2005) and then extended in the AIF by Rahwan et al. (2007), play several roles, indicating not just implicit components, but also stereotypical patterns of attack. Such patterns of attack are of several kinds and one of the most important distinctions is that between rebutting and undercutting attack. Whilst conflict between claims can straightforwardly capture rebutting attack, the structure of undercutting attack is a little more complex. The notion of undercutting was introduced by Pollock (1995) so we use his example to explain the AIF approach to undercutting.
In Fig. 2, the conclusion that the object is red is not itself attacked, but rather the inference (from the premise that it appears red to the claim that it is red) is attacked. Thus, the conflict (the Conflict Application, or CA) targets the inference (the Rule of inference Application, or RA). As both Verheij (2005) and Rahwan et al. (2007) explore, some critical questions associated with schemes are associated with rebutting structures, some with undercutting structures.
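The structural difference can be seen in a small self-contained sketch (again an informal illustration rather than an official AIF serialisation): the undercutter is recognisable purely from the type of node that the conflict application points at.

# Minimal sketch of Pollock's example: the conflict application (CA) targets
# the rule application (RA) rather than its conclusion, which is what
# distinguishes an undercutter from a rebuttal.
nodes = {
    "i1": {"type": "I", "text": "The object appears red"},
    "ra1": {"type": "RA", "text": "inference from appearance to colour"},
    "i2": {"type": "I", "text": "The object is red"},
    "i3": {"type": "I", "text": "The object is illuminated by red light"},
    "ca1": {"type": "CA", "text": "conflict application"},
}
edges = [
    ("i1", "ra1"), ("ra1", "i2"),   # the supported inference
    ("i3", "ca1"), ("ca1", "ra1"),  # the CA node attacks the RA node itself
]

def attack_kind(edges, nodes, ca_id):
    """Classify a conflict application as rebutting or undercutting."""
    target = next(dst for src, dst in edges if src == ca_id)
    return "undercut" if nodes[target]["type"] == "RA" else "rebuttal"

print(attack_kind(edges, nodes, "ca1"))   # -> "undercut"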
Finally, the AIF has also been extended to handle argument that is situated in dialogical situations (Reed et al. 2008). A key insight in that work takes issue with, for example, the pragma-dialectical claim that illocutionary acts of arguing are located at the point of premise-giving (van Eemeren and Grootendorst 1992), which relies upon interpreting premise-giving as a complex speech act. By reifying the rules that govern dialogical progression, and then permitting those rules themselves to create or 'anchor' illocutionary force, arguing can be interpreted as an illocutionary act that comes about as the result of a relation between uttering a premise and uttering a conclusion, thus mirroring the logical structure of inference in its illocutionary structure. This theoretical approach that underpins the dialogical extensions to AIF is known as Inference Anchoring Theory (Budzynska and Reed 2011). Figure 3 shows how this works: dialogical activity is shown on the right hand side, connected by applications of rules of dialogue (Transitions, or TAs) such as that a challenge can be responded to by a substantiating assertion. Simple locutions can anchor illocutionary forces-Why?, for example, anchors challenging (the propositional content of which is the same as the preceding assertion). But TAs can also anchor illocutionary force: the connection between the challenge and its response is the locus of arguing (in this case, the illocutionary force of arguing has as its content an instance of the argumentation scheme for Argument from Analogy).
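A hedged sketch of the structure in Figure 3 is given below; the node contents paraphrase the running example, and the representation (plain dictionaries with an "anchor" field) is an informal illustration of Inference Anchoring Theory rather than the actual AIF serialisation.

# Locution (L-) nodes on the dialogue side, transition applications (TA)
# between them, and illocutionary applications (YA) anchored either on a
# locution (asserting, challenging) or on a transition itself (arguing).
nodes = {
    "l1": {"type": "L", "text": "Bob says 'ISSA will be in Amsterdam'"},
    "l2": {"type": "L", "text": "Simon asks 'Why is that so?'"},
    "l3": {"type": "L", "text": "Bob gives a substantiating assertion"},
    "ta1": {"type": "TA", "text": "response"},
    "ta2": {"type": "TA", "text": "substantiation"},
    "ya1": {"type": "YA", "text": "asserting", "anchor": "l1"},
    "ya2": {"type": "YA", "text": "challenging", "anchor": "l2"},
    "ya3": {"type": "YA", "text": "arguing", "anchor": "ta2"},
    "i1": {"type": "I", "text": "ISSA will be in Amsterdam"},
}
# Dialogue-side edges: locutions linked through transition applications
dialogue_edges = [("l1", "ta1"), ("ta1", "l2"), ("l2", "ta2"), ("ta2", "l3")]

# A YA node anchored on a TA node is what locates the illocutionary act of
# arguing, mirroring the logical structure of inference.
arguing = [n for n in nodes.values()
           if n["type"] == "YA" and nodes[n["anchor"]]["type"] == "TA"]
print(arguing[0]["text"])   # -> "arguing"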
Individual Analysis
With techniques for representing argument, perhaps the first place to start is how to exploit that representation language and create argument data-this is the task of argument analysis. There is a long history of argument analysis software which has in turn spawned a number of review and comparison articles including the work of Harrell (2005), Scheuer et al. (2010) and Kirschner et al. (2003) which review different application areas (including deliberation, eDemocracy and eRulemaking, the law, and so on) and different philosophical starting points (Toulmin, Freeman, Walton and more). The emergence of the AIF was itself a result of this tradition, with the markup language used by Araucaria (Reed and Rowe 2004) used as a base, enhanced with features from Carneades (Gordon and Walton 2006), DeLP (García and Simari 2014) and others, reflecting not only practical improvements but also deeper theoretical insights (including the ability to handle undercutting arguments, full graph structures, etc.).
Between 2001 and 2010, Araucaria was a widely used analysis tool, with over 10,000 users in 80 countries. Though implementing a rather primitive conception of argument, it was the first tool to cater for multiple analytical styles or theories of argument-not only a common 'box and arrow' analysis characterised in the most detail by Freeman (1991), but also a view based on Toulmin's account of argument (Toulmin 1958) and in addition a further view embodying a rich diagrammatic vocabulary developed by the legal theorist Wigmore (1931). Crucially, despite the differences in the visual conventions, the underlying data structures and the XML-based language for storing them was unified with theory-specific extensions where theoretical elements were not mappable to concepts in other approaches. This established the concept of 'theory neutrality' (Reed and Rowe 2005) which remains a cornerstone of the Argument Web. That is, there are sufficient features of argumentation common to most or all theories and conceptions to form a common interlingua.
A number of systems built upon the success of Araucaria, including language specific developments such as an open-source branch of the code base in Mandarin, and a programme of published research in Polish (Budzynska 2011).
More significantly, however, most of the functionality of Araucaria was reimplemented for online, in-browser use in the OVA (Online Visualisation of Argument) tool. 5 Though diagramming in Toulmin and Wigmore styles is not available (for these, Araucaria is the only option), OVA supports a range of analytical goals.
Collaborative Analysis
Most software tools designed to support argument analysis focus on a single user. Software such as Google Docs, however, has demonstrated the strong appetite for tools that support collaborative working; with OVA running online, it was extended in version 2.0 to provide a Google-style link to allow multiple analysts to work together on a single analysis (Janier et al. 2014). A number of other online tools (such as DebateGraph, debategraph.org and RationaleOnline, rationaleonline.com) support multiple concurrent users, but few support collaborative argument analysis.
Some types of analysis demand large-scale collaborative analysis. For working with live broadcast debate, the AnalysisWall (Fig. 4) provides a large, shared workspace-a very high resolution 7.7 sqm touchscreen running bespoke argument analysis software supporting effectively unlimited concurrent touch points (i.e. with no restrictions other than physical space on the number of analysts that can work together). 6 The analysis that is supported is similar to that offered by OVA: connecting nodes through inferences and conflicts, optionally specifying argumentation schemes, distinguishing undercutting from rebutting and marking reported speech. In addition, the very size of the AnalysisWall (at over 3m long) means that it is important to support grouping of nodes so that an analyst standing to the left-hand side can share some work by flicking an analysed group across to an analyst working on the right. The AnalysisWall supports collaborative analysis streamlined to work in real-time and has been used to conduct analysis of the BBC Radio 4 programme The Moral Maze. Coupled with a stenographer who could provide a live text stream, plus two analysts working on a separate terminal to segment the text into argumentative components which were then delivered automatically to the Wall, the hardware enabled teams of between six and ten analysts to work together, just managing to keep pace with the incoming flow of argumentation from the radio programme. The team would split and internally restructure briefly and fluidly, forming ad hoc groups of two or three to work on a part of the reasoning before returning to the overall analysis. This analysis resulted in large AIF datasets, all of which are then manipulable by the other tools described in this paper. Results from this analysis can be found at, for example, aifdb.org/argview/789.
Argument Pedagogy
One of the largest domains and strongest drivers for work on software for argument analysis has been pedagogy (though pedagogical applications often go beyond just analysis). In many areas of the world, the teaching of critical thinking forms a key part of a broad introductory foundation to university level education, and in almost all parts of the world, definitions of university 'graduateness' refer to skills normally considered a part of critical thinking or informal logic.
The various tools for analysis described in the previous Section have all been deployed in educational settings. In particular, OVA is currently in wide use, with around 5,000 unique users per year from over 50 countries.
One of the attractions of deploying Argument Web tools in the classroom is that there are also software applications for providing automatic assessment and feedback. Previously, the only way of making use of automated mechanisms for grading and assessment was to use multiple-choice examinations (Fisher and Scriven 1997). Such tests are not ideal and cannot be used across the board, however, because they assess only a narrow range of skills, offering only very limited scope for assessing students' depth of understanding. Many software systems have focused instead on allowing students to conduct detailed box-and-arrow style analyses (for a review and detailed comparison see, e.g., Scheuer et al. 2010)-the problem is that such analyses have had to be marked by hand.
Argument Web infrastructure, however, allows such student analyses to be easily manipulated by other software components. By running graph matching algorithms over student analyses prepared in OVA, it is possible to compare submissions against one or more model answers, converting results not only into a grade (via a tutor-configured scoring algorithm) but also into template-driven textual feedback. This is the functionality offered by the Argugrader system 7 which has been trialled both at the University of Dundee and at City University of New York. An example of the feedback offered by Argugrader is shown in Fig. 5, showing component-by-component marking of a student-generated analysis, plus template-driven feedback and overall scoring. To the best of our knowledge, this is the only extant system that automatically performs grading on the basis of argument structure.
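To make the idea concrete, the following is a minimal sketch of how graph-based comparison against a model answer might be scored and turned into template-driven feedback. The graph encoding, the function names and the scoring weights are illustrative assumptions for exposition only; they are not Argugrader's actual data model or algorithm.

```python
# Illustrative sketch of grading an argument analysis against a model answer.
# The graph representation and scoring weights are assumptions for exposition,
# not the actual Argugrader data model or algorithm.

def score_analysis(student, model, node_weight=1.0, edge_weight=1.0):
    """Compare two analyses given as (nodes, edges) pairs.

    nodes: set of proposition identifiers (e.g. canonicalised text spans)
    edges: set of (source, target, relation) triples, relation in {"support", "conflict"}
    Returns a score in [0, 1] plus simple template-driven feedback strings.
    """
    s_nodes, s_edges = student
    m_nodes, m_edges = model

    matched_nodes = s_nodes & m_nodes
    matched_edges = s_edges & m_edges

    max_score = node_weight * len(m_nodes) + edge_weight * len(m_edges)
    score = node_weight * len(matched_nodes) + edge_weight * len(matched_edges)

    feedback = []
    for node in m_nodes - s_nodes:
        feedback.append(f"You did not identify the component: {node!r}")
    for src, tgt, rel in m_edges - s_edges:
        feedback.append(f"Missing {rel} relation from {src!r} to {tgt!r}")

    return score / max_score if max_score else 1.0, feedback


# Example: one missing premise and one missing support relation.
model = ({"C", "P1", "P2"}, {("P1", "C", "support"), ("P2", "C", "support")})
student = ({"C", "P1"}, {("P1", "C", "support")})
print(score_analysis(student, model))
```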
Argument Curation
Alongside an increasing engagement with empiricism in philosophy, the use of data has started to play a more significant role in argumentation theory (see, for example, Goodwin's panel on empirical and corpus research in argumentation at the International Pragmatics Conference in 2007). As a result, the Argument Web has a variety of features to facilitate the collection, curation, sharing, tracking, comparison and summary of datasets.
AraucariaDB (Reed 2005) was the first publicly available corpus of analysed argumentation in the world, but was rather limited in size (around 700 arguments) and analysis rubric (with no measurement of inter-annotator agreement). Though AraucariaDB was imported into the Argument Web infrastructure, it now forms just one corpus amongst many.
At the beginning of 2017, the servers hosted by the Centre for Argument Technology to provide aifdb.org 8 stored 11,000 arguments involving 60,000 claims expressed in around 1.2m words. The infrastructure supports many different scripts (and has been tested with Mandarin, Korean, Hindi, Arabic, Hebrew and Cyrillic), and the databases store analyses of arguments contributed in 14 different languages. 9
As the size of the dataset increases, organisation is required. Sharing of argument analyses is crucial to support the growth of the community because such analysis is so labour intensive. However, research teams need to have confidence in the permanent availability of resources and in their immutability, and to know that they can be cited, aggregated and reused. This is the challenge of data curation for the Argument Web, and it is to tackle this challenge that AIFdb was extended to support the definition of corpora. These extensions are available online at corpora.aifdb.org. Much of the data available in AIFdb is now organised into specific corpora, where individual corpora have been developed for particular research projects or in particular research groups. Again as of the start of 2017, there were over 70 corpora developed by nine different labs in Europe and North America, covering fields of argumentation as diverse as mediation, pedagogy, politics, broadcast debate, eDemocracy and financial discussion. In addition, two other corpora that became available in 2014, the Potsdam Microtext Corpus (Peldszus and Stede 2015) and the Internet Argument Corpus (Walker et al. 2012), have been migrated to the Argument Web infrastructure. In order to meet the needs of the community, corpora can be freely defined to package up sets of analyses (or indeed sets of subcorpora) from AIFdb. Corpora may be locked by their creators so that they are immutable, though other users can of course create similar corpora with newer contents. At the time of writing, the tools for managing Argument Web corpora are undergoing a process which will allow corpora to be allocated Digital Object Identifiers (DOIs), allowing them to be formally citeable in their own right. As a common research task, the statistical analysis of corpora is supported through a suite of analytics (at analytics.arg.tech, Fig. 6) that offer insight into the size and structure of corpora, and allow comparison between them (for measurements such as the κ estimate for inter-annotator agreement (Fleiss 1971)).
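As an illustration of the kind of measurement such analytics report, the snippet below computes Fleiss' kappa on toy annotation counts. The data, variable names and reporting format are assumptions for exposition, not the implementation behind analytics.arg.tech.

```python
# Illustrative computation of Fleiss' kappa for inter-annotator agreement,
# of the kind a corpus analytics suite might report. Toy data only.
import numpy as np

# counts[i, j] = number of annotators assigning item i to category j
counts = np.array([
    [3, 0, 0],
    [2, 1, 0],
    [0, 3, 0],
    [1, 1, 1],
])
n_raters = counts.sum(axis=1)[0]   # assumes the same number of raters per item

p_j = counts.sum(axis=0) / counts.sum()                                 # category proportions
P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
P_bar, P_e = P_i.mean(), np.square(p_j).sum()
kappa = (P_bar - P_e) / (1 - P_e)
print(round(float(kappa), 3))
```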
Argument Navigation
With many thousands of argument resources available on the Argument Web infrastructure, and the rate of addition continuing to increase, there is an increasingly acute need for tools to help navigate and interrogate the data. There are three broad types of navigation: static, dynamic and dialogical.
Static navigation of argument resources comprises straightforward visualisation techniques for graph structures and 'point-and-click' navigation, traversing from one claim in an argument network to another. Though good quality interface design makes this a convenient mechanism for dealing with small argument networks (of up to around 100 component parts: premises, conclusions, etc.), it rapidly becomes unmanageable and confusing as the datasets grow larger. An average analysis of a 45-minute episode of the BBC Radio 4 debate programme, The Moral Maze, for example, comprises around 500 components; viewing a programme in its entirety renders text illegible and interconnections little more than spaghetti (see, for example, aifdb.org/argview/789).
For those arguments naturally associated with a temporal flow, an obvious means of navigation is using the time axis to scroll back and forward through the discourse. Inspired by the graphically extraordinary work of Pluss and De Liddo (2015), tools for temporal playback are now being released for dialogical AIF resources.
Finally, an observation offered by Freeman (1991) has laid a foundation for a further type of navigation. Freeman suggests that the relationships between components in a (perhaps entirely monological) argument can be probed by considering what questions might have been asked to yield the relationships. Thus, a convergent argument structure is associated with a fictitious respondent asking, 'Can you give me another reason?' whilst a linked structure is associated with the question, 'Why is that relevant?' Though Freeman's goal was to shed light upon the tricky structural distinction, his method can be operationalised in software as a navigation tool. That is, we can treat dialogue as a means of navigating the information space defined by argument structures. Unfortunately, we cannot assume that there is a single set of dialogue moves or dialogue rules by which we might navigate any argument. Different contexts, different domains, different fields may demand different types of dialogue. So, for example, if we are navigating legal arguments, perhaps navigation is best achieved in part through questions concerning the burden of proof; for arguments being navigated in pedagogical settings, perhaps questions that elicit partial answers or that require more specificity from the student are most appropriate. Such dialogical navigation can thus be translated by software into varying sets of instantiated moves from which users can select at any point, and sets of less constrained moves that allow new information to be elicited from users.
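As a rough illustration of how Freeman's questions can be operationalised, the sketch below generates candidate navigation moves from a toy argument structure. The graph encoding and move names are assumptions made for this example; they are not the DGDL or DGEP interfaces described next.

```python
# Illustrative sketch: generating navigation questions from argument structure,
# operationalising Freeman's idea. The encoding is an assumption for exposition.

# A conclusion maps to a list of "reasons"; each reason is a set of premises.
# A reason with several premises is a linked structure; several reasons for the
# same conclusion form a convergent structure.
argument = {
    "We should ban advertising to children": [
        {"Children cannot evaluate claims critically"},
        {"Advertising increases pester power", "Pester power harms family budgets"},
    ],
}

def available_moves(conclusion, reasons):
    moves = []
    if len(reasons) > 1:
        moves.append(("Can you give me another reason?",
                      "navigate to the next convergent reason"))
    for premises in reasons:
        if len(premises) > 1:
            for p in premises:
                moves.append((f"Why is '{p}' relevant?",
                              "expose the linked premises it depends on"))
    moves.append(("Why do you believe that?",
                  f"descend from '{conclusion}' to its premises"))
    return moves

for conclusion, reasons in argument.items():
    for question, effect in available_moves(conclusion, reasons):
        print(f"{question}  ->  {effect}")
```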
As a result, the Argument Web provides a language for describing the rules of these dialogue games or dialogue protocols (essentially, this is a specialised computer programming language)-the Dialogue Game Description Language, DGDL (Wells and Reed 2012). Many well known philosophical investigations from Ramus to the present day have made use of formal dialectical games; Wells and Reed (2012) show how many of these games (Hamblin's H, Mackenzie's DC, Walton's CB, Walton & Krabbe's PPD and others) can be captured as DGDL specifications.
With many different dialectical games available in a library of DGDL specifications, a general purpose dialogue execution engine is required to run them. This is provided by the Dialogue Game Execution Platform, DGEP, which provides a simple environment that software engineers can use to build applications that support human-human or human-machine dialogue. There are several such applications available, the two most stable of which are Arvina and Argublogging.
Arvina allows multiple humans to engage in a structured dialogue simultaneously with multiple software agents. (Arvina and DGEP make no distinction between human players and artificial players; they simply keep track of who can say what and when; this 'level playing field' between humans and software in dialogue is known as mixed initiative argumentation). The software agents are responsible for arguments already available on the Argument Web and associated with a specific individual who originally put them forward. In this way, software agents can advocate on behalf of the original proponents. Conversely, human participants can use the dialogue game to explore the arguments of previous, human participants (or arguments that have been analysed using tools such as OVA). Figure 7 shows Arvina in use. Another dialogue application on the Argument Web is designed specifically for bloggers, enabling arguments to be co-constructed between bloggers and between blogs. The system is described in more detail elsewhere, but what is salient here is the way in which dialogue games do not just prescribe how information is navigable, but also, at the same time, how it is to be extended. To extend Freeman's questions, above, if a dialogue game allows a user to express their opinion and for some other agent to ask for a reason and why that reason is relevant, then as an inevitable side effect, the Argument Web is updated not just with the user's opinion, but with the opinion structured as a linked argument. Argublogging embodies an extremely simple dialogue game, but nevertheless provides an interface that is intuitive for bloggers whilst at the same time intercepting their contributions so that the structured arguments on their blogs are available through AIF on the Argument Web. Such user-generated content holds great potential as a way of gathering large amounts of structured data for further scholarly research.
Argument Evaluation
Both computational and philosophical exploration of argument place great emphasis on the ability to evaluate arguments. In philosophy, such evaluation is founded upon standards of proof, or levels of audience acceptability, or persuasiveness or rationality, or by reference to a set of normative standards. In computational work, evaluation has been largely focused on analytical definitions of acceptability. This notion is akin to consistency, although, via the work of Dung (1995) on abstract argumentation acceptability, acceptability semantics is significantly more robust in the face of sub-deductive reasoning (such as inductive, presumptive and defeasible patterns) common in natural argumentation.
The Argument Web offers a variety of evaluation engines. Because of the strong focus in AI on evaluation over argument structures, this is an area of the Argument Web that is leading the way in terms of functioning as an ecosystem in which different research labs contribute different parts of solvers that combine to become available in general purpose form. Dung-O-Matic (Snaith et al. 2010) was an early implementation of a variety of acceptability semantics (including grounded, preferred, ideal, eager, and semi-stable) that works only on abstract frameworks. TOAST works with structured data (i.e. raw AIF structures), converting them to abstract frameworks according to Bex et al. (2013) and then calling Dung-O-Matic. Tweety (Thimm 2014) is an implementation of DeLP (García and Simari 2014) that uses a simple mapping from AIF into a logic program. Finally ArgSemSAT (Cerutti et al. 2014) uses a computational technique known as SAT solving to compute acceptability over abstract frameworks using the same translation as is performed in TOAST.
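For readers unfamiliar with acceptability computation, the sketch below computes the grounded extension of an abstract framework by iterating the characteristic function; this is the flavour of computation such engines perform, though the actual solvers implement many more semantics and far more efficient algorithms.

```python
# Minimal sketch: grounded extension of an abstract argumentation framework,
# computed as the least fixed point of the characteristic function.

def grounded_extension(arguments, attacks):
    """arguments: iterable of argument ids; attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, current):
        # Every attacker of `candidate` must itself be attacked by `current`.
        return all(any((d, atk) in attacks for d in current) for atk in attackers_of[candidate])

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Example: a attacks b, b attacks c. The grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```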
Argument Mining
If the Argument Web is to expand its reach and increase its data resources by several orders of magnitude, techniques for manual analysis and for intercepting user generated content will not be enough. It will be necessary to turn to automated techniques for identifying both the presence and the structure of argument in unrestricted natural text. This is the challenge of argument mining.
Just a few years ago, the prospect of automatically extracting the structure of reasoning from natural language text was firmly beyond the state of the art: only very preliminary work was being carried out at a small number of research groups such as Leuven, Dundee, and Toronto (Feng and Hirst 2011; Moens et al. 2007). Now, there are at least twenty research groups across the US and the EU in which work on the problem has begun in earnest, with three workshops, including one at the major computational linguistics conference, ACL, during the summer of 2014. The rapid expansion is due in equal measure to the increasing maturity of computational techniques (such as those for argument annotation using supervised learning and topic-based argument structure recovery using unsupervised learning) and to clear commercial demand in areas such as financial market prediction and marketing research.
Though Palau and Moens (2011) offer a good introduction, the field is moving so quickly that the best reviews of the area are currently offered by the tables of contents of the 2014, 2015 and 2016 ACL workshops on Argument Mining (see argmining2016.arg.tech). Pease et al. (2017) show how AI techniques for understanding abstract argumentation and its connections to structured argumentation, linguistic expression of reasoning and dialogical practice can work together in a complete cycle, from real-world argument to philosophical account, formal theory, abstract argumentation and argumentation semantics, and back to application to real-world argument. Extending earlier work (Pease et al. 2014), we describe each stage in the cycle both formally and in terms of our implementation of it, employing a range of the argument tools described above.
Case Study
Firstly, we take an existing philosophical account of the ways in which mathematical theories evolve via interactions between mathematicians who might have differing motivations, background theories, concept definitions and so on; specifically, Lakatos's account (Lakatos 1976), which is based on real-world historical case studies (1 and 2 in Fig. 8). We then express this account as a formal dialogue system with sets of locution, structural, commitment and termination rules (3, 4, 8), and express this in a domain-specific language for dialogue game specifications (5). This can be executed using DGEP, and we define operational semantics in terms of updates to argumentation structures expressed in a linguistically oriented ontology for argumentation, AIF (6). Using TOAST, DungOMatic, and AIFdb, abstract argumentation frameworks are then automatically induced from AIF via the structured argumentation system ASPIC+ (Modgil and Prakken 2014), showing the consequences of AIF updates at the abstract layer and demonstrating how those abstract semantics yield a grounded extension that provably always corresponds to the theory that has been collaboratively created by the dialogue participants (7); that is to say, the rules of the dialogical game defined by Lakatos correspond precisely to the rules of argumentation-based reasoning defined by Dung and others. Bringing the stages full-circle (Fig. 8 depicts the cycle from real-world argument, to Lakatos's account of mathematical argument, to our model and implementation, and back to application to further real-world argument), we show how the model accounts for real mathematical dialogues online (8). Closing the circle in this way, starting and ending with real-world mathematical discourse, allows us not only to demonstrate the depth of Lakatos's original insight, but also to show that the formal characterisation here remains both honest to the original and of practical utility to mathematicians. By making this connection back to the community of argumentation practice, the door is opened to mixed-initiative, collaborative mathematics. This is the first time that formally specified and fully implemented argumentation tools, right through the abstraction hierarchy from linguistic expressions through structured argumentation to instantiated abstract argumentation, have been brought together and applied to a specific, demanding domain of human reasoning. The foundation that has been laid here allows new explorations into mixed-initiative, collaborative reasoning between human and software participants, in interactions which are both naturalistic and formally constrained and well-defined, with the potential to impact both the pedagogy and the professional practice of mathematics.
Concluding Remarks
The key challenge facing the Argument Web now is one of evaluation. Evaluation of these component parts of the Argument Web cannot simply be expressed as a series of psychological experiments or usability studies. Each component has various facets each of which can be explored separately. The evaluation of argument representation, for example, can be conducted with respect to theoretical adequacy (does it handle specific philosophical concepts in a way that remains consistent with theoretical results and predictions?); inter-annotator agreement (do analysts agree on how to analyse specific examples when trained to a particular skill level?); computational expressivity (does it allow for representation of sufficiently common complex examples?). Similarly, computation of values given argument structures might proceed with respect to intuition (that is to say, do automatic computational processes deliver the same results as humans, and are artificially computed results consistent with psychological experimentation?); mathematical models (do computational processes deliver the same results as those modelled mathematically in systems such as Dung's abstract argumentation?); epistemological definitions (do computational processes deliver results that are plausible on epistemological grounds, or that fit epistemological theory?). And so on. This evaluative programme represents a major undertaking for the community and one that will spur on the development of both the Argument Web in particular and the field of computational argumentation in general.
As things stand, the Argument Web comprises a broad set of tools, datasets, infrastructure, working policies, interfaces, standards and research programmes that is not only facilitating the coherence and maturity of research in computational models of argument, and the availability of data for training AI algorithms in argument mining, but also serving as a proving ground and perhaps also an inspirational playground for the development and testing of philosophical theories of argumentation. The Argument Web is thus not only a product of the fruitful interaction between AI and the philosophy of argument to date, but also stands to facilitate the growth of that interdisciplinary connection in the future.
"Computer Science",
"Philosophy"
] |
PM 2 F 2 N: Patient Multi-view Multi-modal Feature Fusion Networks for Clinical Outcome Prediction
Introduction
With the development of information technology in the medical area, an increasing number of devices are used for monitoring patients, and a large amount of data is stored as electronic health records (EHR), which contain numerical physical-examination results as time series together with clinical notes in free text describing patients' relevant information. These multi-type data can be utilized to predict the condition of patients, which can help in managing hospital resources. Most previous works focused on modeling the problem using the time series data recorded by medical instruments (Ghassemi et al., 2015; Xu et al., 2018). However, the time series data gathered from medical devices reflects the physical status of patients only partially. Medical professionals need to utilize their expertise to analyze patients' data and make diagnoses, and these important analyses of patients' data are recorded in the EHR as clinical notes.
More recent work applied natural language processing (NLP) methods to take full advantage of the medical information in clinical notes for prediction tasks (Boag et al., 2018; Lee et al., 2020; van Aken et al., 2021). They utilized pre-trained language models to extract text features of clinical notes and fed them into recurrent or convolutional neural networks for clinical outcome prediction. Furthermore, to combine time series data with clinical notes for improved clinical outcome prediction, some recent work proposed multi-modal learning methods that jointly model the two kinds of data (Khadanga et al., 2019; Bardak and Tan, 2021a; Deznabi et al., 2021). They used sequence models to extract features of time series and clinical notes respectively and concatenated them for predicting clinical outcomes. However, the existing methods do not consider that the features of time series data and clinical notes should fuse different parts of each other with various weights. Besides, the multi-modal features of a single patient are not sufficient for clinical outcome prediction, and the medical correlation between patients has not been exploited for this task.
To overcome the above disadvantages of the existing methods, we propose the patient multi-view multi-modal feature fusion networks (PM 2 F 2 N) for clinical outcome prediction. The model enhances the multi-modal feature fusion ability from two views. Firstly, from the patient inner view, we use the co-attention module (Lu et al., 2016) to enhance the fine-grained feature interaction between time series data and clinical notes. The co-attention module allows our model to attend to important parts of the time series data as well as to correlated medical information in clinical notes. Secondly, from the patient outer view, other patients' information is useful for predicting the status of the observed one. We construct the patient correlation graph based on the structural medical information extracted from clinical notes and fuse patients' multi-modal features with graph neural networks (GNN). With multi-modal feature fusion from different views, our model gains better generalization ability for predicting clinical outcomes. The contributions of this manuscript can be summarized as follows: 1. We analyze the disadvantages of the existing methods for clinical outcome prediction. To improve the ability to fuse multi-modal features from different views, we propose the patient multi-view multi-modal feature fusion networks.
2. From the patient inner view, we extend the co-attention module to enhance the fine-grained feature fusion between time series data and clinical notes. Besides, from the outer view, we exploit the patient correlation graph to aggregate multi-modal features across patients.
3. We evaluate the effectiveness of the proposed model on the MIMIC-III benchmark. The experimental results demonstrate that our model outperforms the baseline approaches, and further analysis of the multi-modal features also shows the superiority of our model.
Time Series for Clinical Outcome Prediction
The earlier works on mortality prediction designed hand-crafted features and used traditional machine learning methods such as logistic regression with various severity scores (Vincent et al., 1996). With the progress of deep learning, sequence models such as long short-term memory networks (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) have been utilized to handle time series data for clinical outcome prediction. Besides, some researchers exploited the irregular sampling of the data over time in their prediction models (Zhang et al., 2021b). Furthermore, the self-attention mechanism has also been used to capture the dependencies within the time series data for clinical outcome prediction (Song et al., 2018; Ma et al., 2020).
Clinical Notes for Clinical Outcome Prediction
Considering that time series data carry limited explicit medical information, some works focused on using clinical notes for outcome prediction. They utilized pre-trained word embeddings (Zhang et al., 2019) as text features of clinical notes and fed them into recurrent neural networks (RNN) or convolutional neural networks (CNN) to extract hidden features for predicting results (Ghorbani et al., 2020). Besides, external medical knowledge is useful for predicting the physical status of patients; a clinical outcome pre-training method was proposed to integrate knowledge from multiple patient outcomes (van Aken et al., 2021).
Multi-modal Learning for Clinical Outcome Prediction
With the development of multi-modal learning, it has become clear that the above methods are limited in fusing the various sources of available data when predicting medical outcomes, whereas in multi-modal learning the data of each modality can be enhanced by the others. Multi-modal learning over time series data and clinical notes has shown its effectiveness on clinical outcome prediction (Khadanga et al., 2019; Deznabi et al., 2021). These methods utilized RNNs to extract hidden representations of time series data and CNNs to acquire those of clinical notes; the two hidden features were then concatenated and fed into feed-forward neural networks (FFNN) for predicting results. Besides, to model robust representations of patients' multi-type data in the EHR, a supervised deep patient representation learning framework was proposed for clinical outcome prediction (Zhang et al., 2021a). To make use of sparse medical information in clinical notes, a named entity recognition (NER) model was utilized to extract entities in the texts, and their representations were introduced into a multi-modal learning model for making predictions (Bardak and Tan, 2021a). The existing methods do not consider evaluating the status of patients from different aspects. Therefore, we propose the PM 2 F 2 N model to fuse multi-modal features from various views for clinical outcome prediction with better generalization ability.
Model
We introduce the notation for clinical outcome prediction before getting into the details of the proposed model. The training set with N_s samples is denoted as {(X^(i), C^(i), y^(i))}_{i=1}^{N_s}, where X^(i) and C^(i) are the i-th patient's time series data and clinical note respectively, and y^(i) is the task-defined label. Given a time series with N_t time steps and N_v observed variables, the patient's vital signals can be formulated as X = {x_1, x_2, ..., x_{N_t}} ∈ R^{N_t × N_v}. We denote the clinical note with N_w words as C = {c_1, c_2, ..., c_{N_w}}. After pre-processing the multi-modal data, we feed them into the proposed model.
The PM 2 F 2 N model is shown in Figure 2. For time series data, we utilize a bidirectional GRU to extract hidden representations. To acquire multi-grained features of clinical notes, we apply an NER model to extract medical entities as local features and use the term frequency-inverse document frequency (TF-IDF) method to extract global features of clinical notes. To combine the entity representations of clinical notes with the hidden representations of time series data in a fine-grained way, we exploit the co-attention module to acquire multi-modal fusion features with various attention weights. Based on the medical information of different patients, we build the patient correlation graph and exploit it to aggregate the multi-modal features of various neighbors via GNN. The concatenation of the global features of clinical notes, the last hidden features of time series data and the aggregated multi-modal features is fed into an FFNN for outcome prediction.
Multi-modal Feature Extraction
Given the multi-modal data as input, we pre-process them and map them into dense representations for the deep neural networks, as shown in Figure 2. We denote the time series data, which has N_t time steps and N_v observed variables, as X = {x_1, x_2, ..., x_{N_t}} ∈ R^{N_t × N_v}. Given the impressive performance of RNNs, GRU and LSTM are commonly utilized to extract hidden representations of sequence data. To capture context information in both the forward and backward directions, we utilize a bidirectional GRU (BiGRU) to acquire the hidden features of the time series data X (Bardak and Tan, 2021a). The extraction process is simplified as H = BiGRU(X; θ_1), where N_h is the dimension of the hidden feature vectors and θ_1 denotes the trainable parameters of the BiGRU. The clinical notes contain detailed information about patients and the medical knowledge implicit in doctors' inferences. Considering that clinical notes may contain redundant information, we need to extract representative features that highlight critical patient information. Therefore, we propose to extract multi-grained features of clinical notes. To make full use of the unstructured clinical note C, we utilize TF-IDF to extract a global feature vector; with the advantage of TF-IDF, the important tokens in clinical notes can be represented by this vector. However, the dimension of the global feature vector is too high to represent the patient compactly and to fit into memory. We therefore apply principal component analysis (PCA) to reduce its dimension, and the dimension-reduced global feature of the clinical note is defined as C_g ∈ R^{N_g}, where N_g is the dimension of the global feature vector.
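As a rough illustration of the global-feature step, the snippet below applies TF-IDF followed by PCA to toy notes using scikit-learn. The vectoriser settings, the toy notes and the reduced dimension are assumptions for exposition, not the paper's exact configuration.

```python
# Illustrative sketch of extracting global clinical-note features with TF-IDF + PCA.
# Vectoriser settings and the reduced dimension are assumptions, not the paper's values.
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

notes = [
    "patient admitted with sepsis, started on vancomycin 1g iv",
    "chest pain resolved, aspirin 81mg continued, discharge planned",
]

tfidf = TfidfVectorizer(lowercase=True, stop_words="english")
X_global = tfidf.fit_transform(notes).toarray()   # (num_patients, vocab_size)

n_g = min(2, X_global.shape[0])                    # reduced dimension N_g (toy value)
pca = PCA(n_components=n_g)
C_g = pca.fit_transform(X_global)                  # (num_patients, N_g)
print(C_g.shape)
```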
Besides, there are various medical information defined as entities including: diseases, drugs, dosage and so on, in clinical notes (Kormilitzin et al., 2021).The structural medical knowledge, known as entities, is the most important information to represent the status of patients.The raw clinical notes contain lots of redundant free-text
Multi-modal Feature Fusion with Multi-view
There are different views from which to evaluate the physical status of patients. From the inner view of the observed patient, the doctor analyzes the multi-modal data to make a diagnosis. Based on accumulated clinical experience, the doctor can also draw on the correlation between patients to provide a diagnostic result for the observed patient. From the outer view, therefore, the multi-modal data of other patients, which are correlated with the observed one in terms of medical knowledge, are also beneficial to the diagnostic results. With the goal of enhancing the representation ability of our model, we propose to improve the multi-modal feature fusion strategy from two different views.
Feature Fusion with Inner View
Given the extracted features H of the time series data X and C_b of the clinical note C, both are first projected to a common dimension as H' = H W_a and C_b' = C_b W_b, where W_a ∈ R^{N_h × N_d} and W_b ∈ R^{N_b × N_d} are trainable weights. The shared feature space S ∈ R^{N_t × N_e} is then defined as S = tanh(H' W_s C_b'^T), where W_s ∈ R^{N_d × N_d} is a trainable weight in the module. The shared feature S is used to calculate the correlation between the time series and medical entity features. Based on S, attention weights over time steps and over medical entities are derived, and the multi-modal fusion features of the time series data and the clinical note are calculated as the corresponding attention-weighted summaries, denoted as Ĥ and Ĉ_b.
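The following PyTorch sketch illustrates a co-attention fusion of this general shape, assuming projected time-series and entity features of equal width N_d. The affinity-matrix formulation and the max-pooling used to derive attention weights are standard choices adopted here for illustration; the authors' exact pooling may differ.

```python
# Illustrative co-attention sketch (PyTorch), assuming projected features of equal width N_d.
# It follows the standard affinity-matrix formulation; details may differ from the paper's code.
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, n_d: int):
        super().__init__()
        self.w_s = nn.Parameter(torch.randn(n_d, n_d) * 0.02)  # shared affinity weight W_s

    def forward(self, h: torch.Tensor, c: torch.Tensor):
        # h: (batch, N_t, N_d) time-series features; c: (batch, N_e, N_d) entity features
        s = torch.tanh(h @ self.w_s @ c.transpose(1, 2))        # affinity S: (batch, N_t, N_e)
        attn_ts = torch.softmax(s.max(dim=2).values, dim=1)     # weights over time steps
        attn_ent = torch.softmax(s.max(dim=1).values, dim=1)    # weights over entities
        h_fused = (attn_ts.unsqueeze(-1) * h).sum(dim=1)        # fused time-series vector
        c_fused = (attn_ent.unsqueeze(-1) * c).sum(dim=1)       # fused entity vector
        return h_fused, c_fused

h = torch.randn(4, 48, 128)   # 48 time steps
c = torch.randn(4, 20, 128)   # 20 medical entities
print([t.shape for t in CoAttention(128)(h, c)])
```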
Feature Fusion with Outer View
To analyze the physical status of the observed patient, the information of relevant patients is worth referring to. Patients with similar physiological conditions are represented by similar multi-modal features. Therefore, we construct a correlation graph between patients and aggregate their multi-modal features from neighbors that are relevant in terms of medical knowledge. Given the clinical notes {C^(i)}_{i=1}^{N_s} in the training set, we have acquired their medical entity sets {C_e^(i)}_{i=1}^{N_s} with the Med7 model. The patient correlation graph (PCG) A ∈ R^{N_s × N_s} is initialized as an identity matrix, and the elements {A_ij | i, j ∈ {1, 2, ..., N_s}} of the PCG are the correlation degrees between the i-th and j-th patients. Considering that the medical entities are the most important information for representing a patient, we exploit them to evaluate the correlation degree between each pair of patients, as shown in Figure 3. The Jaccard similarity is the metric used to evaluate the correlation of two sets, so the correlation degrees are calculated as A_ij = |C_e^(i) ∩ C_e^(j)| / |C_e^(i) ∪ C_e^(j)|. We concatenate the originally extracted multi-modal features and the fused ones as the patients' features P = {p^(i)}_{i=1}^{N_s}, where the i-th patient's multi-modal feature p^(i) is calculated as a linear projection of this concatenation, with W_p and b_p as trainable weights in the model.
To update the observed patient's multi-modal features via the relevant patients, we utilize graph convolutional networks (GCN) (Kipf and Welling, 2017) to aggregate those of its neighbors. The calculation of the aggregated multi-modal features is simplified as P̃ = σ(A P W_g + b_g), where W_g and b_g are trainable weights in the GCN module and σ is a nonlinear function. The patient multi-modal fusion features from the outer view are denoted as P̃ = {p̃^(i)}_{i=1}^{N_s}, which incorporate the features of the correlated patients.
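A minimal NumPy sketch of this outer-view step is given below: it builds the Jaccard-based patient correlation graph from toy entity sets and applies one GCN-style aggregation. The row normalisation, the ReLU nonlinearity and the toy dimensions are assumptions for exposition rather than the paper's exact configuration.

```python
# Illustrative sketch: Jaccard-based patient correlation graph and one GCN-style
# aggregation step over patient features. Normalisation choices are assumptions.
import numpy as np

entity_sets = [
    {"sepsis", "vancomycin", "fever"},
    {"sepsis", "fever", "hypotension"},
    {"fracture", "morphine"},
]

n = len(entity_sets)
A = np.eye(n)                                  # self-loops, matching the identity initialization
for i in range(n):
    for j in range(i + 1, n):
        inter = len(entity_sets[i] & entity_sets[j])
        union = len(entity_sets[i] | entity_sets[j])
        A[i, j] = A[j, i] = inter / union if union else 0.0

P = np.random.randn(n, 8)                      # patient multi-modal features (toy, dim 8)
W_g = np.random.randn(8, 8) * 0.1
b_g = np.zeros(8)

deg = A.sum(axis=1, keepdims=True)
A_norm = A / deg                               # simple row normalisation (an assumption)
P_agg = np.maximum(A_norm @ P @ W_g + b_g, 0)  # sigma chosen as ReLU here
print(P_agg.shape)                             # (3, 8)
```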
Training Procedure
After acquiring the multi-modal fusion features from multiple views, we utilize them to predict the target probabilities. The concatenation of the multi-view multi-modal fusion features and the originally extracted features is fed into the FFNN. The prediction probability is calculated as ŷ^(i) = FFNN([h^(i)_{N_t}; C_g^(i); p̃^(i)]; θ_2), where θ_2 denotes the trainable weights of the FFNN module. To solve the binary classification task, we utilize the cross-entropy loss L = -(1/N_s) Σ_{i=1}^{N_s} [y^(i) log ŷ^(i) + (1 - y^(i)) log(1 - ŷ^(i))]. We feed the multi-modal data into the model and compute the loss according to Equation 2. To train the parameter weights of the model, we use stochastic gradient descent (SGD) to update them according to the calculated loss.
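For concreteness, the snippet below sketches the prediction head and a single SGD training step with binary cross-entropy in PyTorch. The layer sizes, batch size and feature dimensions are placeholders rather than the paper's configuration.

```python
# Bare-bones sketch of the prediction head and one SGD training step.
# Layer sizes and optimiser settings are placeholders, not the paper's exact values.
import torch
import torch.nn as nn

fused_dim = 128 + 64 + 8          # [last hidden state; global note feature; aggregated graph feature]
ffnn = nn.Sequential(nn.Linear(fused_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.SGD(ffnn.parameters(), lr=0.001)
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy over the task label

features = torch.randn(16, fused_dim)          # concatenated multi-view features
labels = torch.randint(0, 2, (16, 1)).float()  # e.g. in-hospital mortality

optimizer.zero_grad()
loss = criterion(ffnn(features), labels)
loss.backward()
optimizer.step()
print(float(loss))
```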
Dataset and Experiment Settings
We compare the proposed model with the existing methods on the medical benchmark dataset MIMIC-III (Johnson et al., 2016). The dataset contains multi-type data collected from real scenarios, including vital signals, clinical notes, ICD-9 codes and so on. We follow the previous work (Bardak and Tan, 2021a) to extract the time series data and clinical notes from the raw dataset with the publicly available tool MIMIC-Extract (Wang et al., 2020). The detailed statistical information of the dataset is shown in Table 1. The dataset is commonly used for two targets, mortality and length of stay (LOS), and there are four binary classification tasks analyzed in the above works, as follows: 1. In-hospital Mortality: this task aims to predict whether a patient dies before being discharged.
2. In-ICU Mortality: This task is defined to detect patients who are physically declining and to predict their mortality within 24 hours.
3. LOS >3: This task aims to predict whether a patient stays in the ICU longer than 3 days.
4. LOS >7: This task is defined to detect patients who stay in the ICU longer than 7 days.
After extracting the dataset, we utilize the Python package fancyimpute to impute the missing values in the time series data. We feed the clinical notes into the Med7 model to extract the medical entities and utilize the BioBERT-Large (Lee et al., 2020) version of the BERT language model to extract text features of the entities. The dimensions N_d and N_k of the hidden features in the co-attention module are set to 128, and the other hidden dimensions are set to 256 in our model. We set the dropout rate and the learning rate to 0.5 and 0.001 respectively. During the training process, we first train the model on the training set for at most 300 epochs and test it on the development set. Following the early stopping strategy, we stop training the model when the loss on the development set does not decrease within 20 epochs. We use two different metrics, AUROC and AUPRC, to evaluate the models on these imbalanced tasks. All experiments are accelerated on a single NVIDIA GTX A6000 device.
Compared Methods
We compare the proposed model with existing machine learning methods. The models proposed by Khadanga et al. (2019) and Deznabi et al. (2021) combine time series data with clinical notes using a simple feature fusion strategy for outcome prediction. Besides, a calibrated random forest (CaliForest) utilizing out-of-bag samples was proposed for risk prediction (Park and Ho, 2020). Taking structural medical information into account, the models proposed by Bardak and Tan (2021b,a) combine time series data with important medical mentions for clinical outcome prediction. Robust representations of patients' multi-modal data in the EHR are critical to downstream tasks, and a supervised deep patient representation learning framework was proposed for outcome prediction (Zhang et al., 2021a). A label-aware attention mechanism was introduced into a multi-modal learning method (Yang et al., 2021) for the prediction task.
Experimental Results
We compare PM 2 F 2 N with the baseline methods on the four classification tasks. The detailed experimental results on MIMIC-III are shown in Table 2. Our model achieves the best results on all four tasks compared with the baseline methods, and the AUROC and AUPRC scores of the proposed model increase by 1.1% ∼ 3.7% and 0.4% ∼ 10.5% over the baselines on the four tasks respectively. Compared with the traditional CaliForest method (Park and Ho, 2020), the deep learning models achieve better results on most classification tasks. Our model outperforms the multi-modal learning methods with simple feature fusion strategies (Khadanga et al., 2019; Deznabi et al., 2021) because it takes full advantage of patient multi-view multi-modal feature fusion. Although the models of Bardak and Tan (2021b,a) utilized the medical entities in clinical notes, they did not model fine-grained interactions between the multi-modal data, and the model of Yang et al. (2021) incorporated label information to enhance the text features of clinical notes but ignored fine-grained feature fusion. Our model gains better results over them through the use of the co-attention module for effectively modeling multi-modal fusion features. Besides, the representation learning method (Zhang et al., 2021a) is beneficial to the downstream risk prediction task; however, it did not take the patient correlation in medical knowledge into account or model the correlated multi-modal features. Our model exploits the structural medical information to construct the patient correlation graph and fuses the multi-modal features with a GCN based on this graph. Therefore, the proposed model gains better generalization ability for clinical outcome prediction.
Further Discussion
To examine the model in more depth, we conduct detailed analyses from different aspects. An ablation study is performed to demonstrate the effectiveness of the different feature fusion strategies proposed in our model. Besides, to verify the effectiveness of the patient correlation graph, we compare the performance on the tasks when the adjacency matrix is filled with different values. Finally, we visualize the multi-modal features extracted from the proposed model to present the usefulness of the patient correlation information for feature fusion.
Ablation Study
As shown in Table 3, we conduct an ablation study to present the effectiveness of the proposed multi-modal feature fusion strategies. We train clinical outcome prediction models with single-modal data (TS) and multi-modal data (TS + CN) respectively as base comparison methods. The model trained with multi-modal data achieves better results than that trained with single-modal data, which demonstrates the effectiveness of multi-modal learning. When the patient correlation graph (PCG) is introduced into the base multi-modal learning method, the results on the four tasks are improved to varying degrees: the multi-modal feature fusion from the outer view aggregates features across patients and improves the generalization ability of the proposed model for clinical outcome prediction. The model further incorporating co-attention (CA) is the proposed PM 2 F 2 N and obtains additional improvements on the four tasks; with the advantage of CA, our model can fuse the multi-modal features in a fine-grained way.
Effect of Patient Correlation Graph
As shown in Figure 4, we conduct comparison experiments to demonstrate the effectiveness of the proposed patient correlation graph (PCG). The proposed model is fed with two distinct adjacency matrices, filled with all 0s and all 1s respectively, in place of the PCG. The "Adj=0" model utilizes the adjacency matrix filled with all 0s to disentangle the GCN from our model as a baseline, and the "Adj=1" model exploits the adjacency matrix filled with all 1s to verify the effect of the patient correlation degrees. Compared with the baseline "Adj=0" model, the "Adj=1" model shows various drops in AUROC and AUPRC, while our model gains improvements of 0.5% ∼ 1.3%.
Visualization Analysis
To verify the effectiveness of the patient correlation graph (PCG) for the multi-modal fusion features intuitively, we visualize the learned features extracted from the models with and without the patient correlation graph, as shown in Figure 5. We focus on the LOS >3 task because of its balanced class distribution. After training the models, we use them to acquire the multi-modal features of the samples in the test set. We visualize the patient multi-modal fusion features in Figure 5, where the dimension is reduced to two by t-SNE. Furthermore, we select the same group of patients, highlight their feature points and circle them. Considering all patients, the multi-modal features of samples with the same label are more clustered when learned with the PCG. The selected patients' features learned with the PCG are clustered into two groups with a clear boundary, whereas those learned without the PCG are scattered and intertwined.
The comparison between the two results demonstrates the effectiveness of the patient correlation graph, which connects the multi-modal features of relevant patients.
Conclusion
In this paper, we analyze the disadvantages of existing multi-modal learning methods for clinical outcome prediction. To enhance the multi-modal features from different views, we propose the patient multi-view multi-modal feature fusion networks (PM 2 F 2 N) for the task. From the inner view, we extend the co-attention module to fuse the features of the time series data and the structural medical knowledge in a fine-grained way. From the outer view, we exploit the correlation between patients to aggregate the multi-modal features of similar patients. With the multi-view multi-modal feature fusion strategy, the proposed model can learn general patient representations for clinical outcome prediction. Compared with the existing methods, our model gains the best results on the benchmark dataset MIMIC-III, and further discussion, including the ablation study, the effect of the PCG and the visualization analysis, verifies the effectiveness of the proposed strategy. Considering the heterogeneity of patients, we will try to adopt heterogeneous graphs for modeling the correlation between patients in the future.
Limitations
The proposed model is limited by the need to feed the whole set of patient multi-modal data into it, and it requires large memory to calculate the aggregated multi-modal features with the GCN layer. Besides, the scalability of the patient correlation graph (PCG) is poor because the PCG has to be reconstructed whenever new patients are added to the original patient set.
Figure 1 :
Figure 1: There are two views from which to analyze the observed patient. The inner view focuses on the medical data of the observed patient, and the outer view exploits the medical correlation between patients for the observed one.
Figure 2 :
Figure 2: The patient multi-view multi-modal feature fusion networks (PM 2 F 2 N) for clinical outcome prediction.
The time series data X contains various physiological signals changing over time, and the medical entity set C_e includes different medical knowledge representing the condition of the patient. Some of the medical information in the clinical note is relevant to the physical signals at certain times; for example, if the observed patient is treated with certain drugs, the physical signals will change in some respects. To capture the fine-grained correlation between the multi-modal data, we propose to exploit a co-attention module for fusing the multi-modal features. Although co-attention has achieved significant success in the visual question answering area (Lu et al., 2016), this is the first time it has been extended to medical multi-modal data mining. Given the extracted features H of the time series data X and C_b of the clinical note C, we unify the dimensions of both as H' = H W_a and C_b' = C_b W_b, where W_a ∈ R^{N_h × N_d} and W_b ∈ R^{N_b × N_d} are trainable weights. To calculate the correlation degree between the time series and the medical entities, the shared feature space S ∈ R^{N_t × N_e} is defined as S = tanh(H' W_s C_b'^T).
Figure 3 :
Figure 3: The detailed construction of the patient correlation graph. The degree of correlation between patients is defined as the Jaccard similarity between the medical entity sets in each patient's clinical note.
Figure 5 :
Figure 5: The t-SNE visualization results of the multi-modal features extracted from the models with and without the patient correlation graph respectively. We evaluate the models on the LOS >3 task, and the highlighted points represent the same group of patients.
Table 1 :
The statistical information of the MIMIC-III dataset extracted by MIMIC-Extract. "T.S." is short for "time series".
Table 2 :
The experimental results on the four clinical outcome prediction tasks in macro-averaged % AUROC and % AUPRC. We run the experiments 5 times with different random seeds and report the average results. Our model outperforms the baseline methods on all four tasks.
Table 3 :
The results of the ablation study. "TS" and "CN" are short for time series data and clinical notes respectively. "PCG" represents the patient correlation graph which is introduced into our model. "CA" is the co-attention module used to fuse the features of time series and clinical notes.
"Medicine",
"Computer Science"
] |
PROBLEMS OF THE LABOR MARKET: A CASE STUDY OF BELGRADE
Abstract: High unemployment accompanied by a decline in production is one of the major economic and social problems in Serbia. The existing economic problems, defined by structural mismatch, non-competitive production and traditional means of production, are accompanied by a growing problem of poverty, due to the rising share of elderly and dependent persons and the decline in the number of employees. Markets are a way to organize economic activity and can only be successful if activities are organized in a way that enhances the overall economic benefit. If this does not happen, there is a market failure, the situation in which the market alone fails to achieve full efficiency. Taking into account the territorial disparities in economic development potential, the aim of the paper is to point out the problems facing the labor market in Belgrade. The paper presents the basic characteristics of Belgrade as a city of economic, transport and market significance in Southeastern Europe. The effects, instruments and benefits of labor market policy, alternatives for overcoming existing constraints, and opportunities and solutions are presented. Activities to improve productivity and the quality of work must be planned according to the expected changes in the economic structure, the greater involvement of Belgrade in European and regional economic integration, and the optimal use of domestic resources in line with the comparative advantages of the country.
Introduction
The basic components of the labor market are the demand for labor and the supply of labor, together with the specific factors that shape them. The labor market is the part of the economy that matches unequally skilled workers to heterogeneous jobs in various economic activities. With such heterogeneity on both the supply side and the demand side, it is difficult to expect the market optimum to be achieved (Aleksić, Vuksanović, 2017). Owing to its specific features and characteristics, the labor market differs from the traditional market for products, which can be competitive or non-competitive. The labor market is governed by the forces of supply and demand, and workers find employment through it. This market encompasses the preparation of workers, their recruitment, promotion and dismissal, waiting for employment, competition, and searching for jobs and on the job. The basic specificity of the labor market stems from the characteristics of goods and services on one side and of work as a human activity on the other. The specifics of the labor market are influenced by numerous factors, such as the inferior bargaining position of workers arising from the fact that the market almost always exhibits excess supply. This situation automatically puts employers in a more favorable position: while a worker faces a shortage of jobs and the need for employment, employers are in a position to choose among the many unemployed.
Serbia has been facing the problem of high unemployment for many years. In the first decade of the 21st century, Serbia was characterized by the presence of certain constraints, which are the result not only of transition effects but of many unfavorable tendencies inherited from the previous period. One of these limitations is the structural mismatch on both the supply and the demand side of the labor market in Serbia, which has a long-term character (Leković, Marjanović, 2011). A serious problem is the emigration of highly educated young people abroad, which is a great loss for the country. The most vulnerable groups of the unemployed are young people, unskilled workers, those who have been searching for a job for a long time for various reasons, and migrant settlers (Radovanović, Maksimović, 2010). Given that employers, in addition to educational attainment, often take into account additional forms of professional development, language skills, the ability to work in a team, and a responsible attitude towards the environment, it is essential that young people acquire additional competencies through various courses (Jovanović et al., 2011).
The labor market has in recent years been marked by changes resulting from the rapid technological development of modern society. It is necessary to consider these changes in the labor market, but also to point to the still present regional disparities. The aim of the paper is to point out the movements of the supply and demand for jobs and labor, and their tendencies and regularities. Bearing in mind the theoretical and methodological concept, as well as research to date, initial hypotheses were set. These indicate that labor market trends are defined by certain rules and tendencies over a longer period of time. One of these trends relates to the mismatch in Serbia between the needs of the labor market and the education system; another concerns informal employment, which accounts for a significant share of total employment in Serbia. The structure of the paper consists of an introductory part stating the importance and topicality of the subject, after which the objectives and hypotheses are defined. The review of previous research provides an overview of the labor market in the Member States of the European Union and the countries of the region. The methodology section shows which institutions hold data on the labor market in the territory of Belgrade and in Serbia as a whole. Analyzing the data, the author points to long-term unemployment, informal employment and the mismatch between education and the labor market. In the conclusion, proposals are formulated to improve the situation on the labor market of the City of Belgrade.
Previous research
The European Union has for many years pursued an active policy in the field of the labor market. One of the priorities of the Union as a whole, and of each of its Member States, is a decrease in the unemployment rate. The average employment rate in the EU-27, measured in relation to the population aged 15-64, was 64.6% in 2009. The employment rate grew continuously over the period 2000-2008. However, the global economic crisis, which caused a decline in the overall economic activity of the EU Member States in 2009, led to a decline in employment of 1.3% compared to 2008 (ec.europa.eu/eurostat).
The highest employment rates in the European Union are found in the Netherlands (77.0%) and Denmark (75.7%). In contrast, the lowest rates were observed in Malta (54.9%) and Hungary (55.4%). By Member State, the highest unemployment rates in the period 2000 to 2008 were recorded in Spain, Slovakia, Hungary and Greece. In the early part of the period Bulgaria also belonged to this group; however, thanks to dynamic employment growth, the country managed to reduce unemployment to 5.6% by 2008 (ec.europa.eu/eurostat).
The economic crisis that began in mid-2008 affected the 27 EU Member States to varying degrees, so that further differences in their unemployment rates emerged. According to Eurostat data, the most affected country was Spain, which in 2009 reached an unemployment rate of 18%, followed by Latvia (17.1%), Lithuania (13.7%), Slovakia (12.0%), Ireland (11.9%), Hungary (10.0%) and Greece (9.5%).
Common measures of EU regional policy help with the education and training of workers so that they can find jobs efficiently. Training the workforce creates greater opportunities for people to find jobs in the labor market, and companies gain the opportunity to achieve greater competitiveness and growth, particularly in new sectors. It also assists young people and those who have been unemployed or have lower qualifications. Support also consists in equalizing the opportunities of women and men to find a job in the labor market (Union Europeenne, Politique regionale 2007).
Although the situation on the labor market is still problematic, growth in the region has brought new jobs. Employment growth accelerated in many countries, increasing from 2% in June 2015 to 4.5% in June 2016 (РЗС, 2016). By September 2016, Albania had recorded the highest employment growth of 8.5%, with Serbia not far behind at 7.2%, giving an average employment growth of almost 6% across the region in 2016. Employment growth was achieved because Albania reduced its informal sector, while in Serbia there was a recovery of production and a good agricultural season, which generated an expansion of informal employment. To a lesser extent, growth was also recorded in FYR Macedonia and Montenegro. In Bosnia and Herzegovina, recent labor market reforms have yet to make a significant impact on employment, which decreased by 2.6% compared to June 2016 (www.worldbank.org).
Reducing unemployment depends on active labor market policies. Unemployment decreased in almost all countries in the region, despite increased labor force participation in Serbia and Albania. In FYR Macedonia, lower unemployment was the result of fiscally supported active labor market policies and private sector employment, particularly in construction. Montenegro is the only country where the unemployment rate increased, because of the introduction of life-long benefits for mothers with three or more children, which encouraged them to stop working (www.worldbank.org).
Despite recent positive developments in the region, unemployment remains high, averaging 20% in 2016. More than 70% of the unemployed are long-term unemployed (РЗС, 2016), and the youth unemployment rate is twice as high as that of the working-age population.
Methodological approach
Several institutions in Serbia keep records in the fields of labor, employment, unemployment, and education. Data on businesses, employment, and education can be obtained from the Agency for Business Registers, the Republic Institute for Statistics, and the City Institute for Informatics and Statistics. The National Employment Service is responsible for recording data on employment and unemployment. The Ministry of Education, Science and Technological Development, the Institute for the Improvement of Education, the Republic Agency for the Development of Small and Medium Enterprises, and the Institute for Pension and Disability Insurance also record such information. However, it should be noted that comparison and analysis of the data is difficult because there is no methodological harmonization among the records kept by these institutions (Dimov, 2007).
Data on employment registered with the National Employment Service (NSZ) cover two categories: total employment and the employment of persons registered as unemployed. In both cases, statistics are recorded by activity and level of education, but not by profession. The Classification of Activities (Law on the Classification of Activities and the Register of Classification Units, passed in 1996 and applied from 01.01.1998), according to which employment statistics are recorded in the National Employment Service, distinguishes 17 fields of work, 24 areas of work, and 364 narrower, specific areas of activity. In comparison, the NSZ records statistics on the demand for professions and on filled job vacancies according to the JNZ classification model, with 19 fields of work and 75 groups of professions, through which the demand for particular professions can be monitored. The classification of fields of work used for sorting educational profiles by the Republic Institute for the Advancement of Education (RZUOV) distinguishes 15 fields of work.
Analysis of the employment data reveals a substantial mismatch between the classification systems used by the NSZ and the RZUOV. The following data can be reported: the status of registered unemployed persons according to the JNZ classification of professions; the demand for and filling of jobs by profession, also according to the JNZ classification of activities; and employment by level of vocational education (without professions) according to the Classification of Activities and the Register of Classification Units. The Ministry of Labor, Employment, Veteran and Social Affairs adopted the Law on Records, which harmonizes the evidentiary basis for the ministry's statistics department with European standards. The National Employment Service maintains a consistent, unified information system containing all the records kept by this service. Conceptual and methodological approaches to record keeping were unified through direct comparison with the situation in the labor markets of the European Union.
Results and discussion
The main characteristic of the Serbian economy in recent decades has been low employment and high unemployment, which is one of the country's major economic and social problems. The employment rate lags far behind the Lisbon target of 70%, and the official unemployment rate shows that every fifth working-age resident is unemployed (РЗС, 2010). The reasons for these adverse employment trends are, for the most part, consequences of the transition process. Restructuring and privatization have resulted in the closure of a large number of jobs in privatized enterprises, while new jobs are created very slowly and in numbers insufficient to absorb the demand expressed on the labor market.
According to data for 2015, the number of active persons in Serbia amounted to 3,126,100 (a decrease of 1.3% compared to 2014). The largest reduction in the contingent of active persons was in the region of Vojvodina (3.4%). Compared to 2014, the number of active women decreased by 1.4% and the number of active men by 1.2%. In 2015, the Republic of Serbia had 2,574,200 employed persons (the employment rate increased by 0.6% compared to 2014). The employment rate was 50.2% for men and 35.3% for women. During the same period, the number of unemployed amounted to 551,900 (the unemployment rate decreased by 1.5% compared to 2014). The unemployment rate was 16.8% for men and 18.8% for women. The number of inactive persons in 2015 amounted to 2,933,900 (an increase of 0.1% compared to 2014). The inactivity rate for women was significantly higher (56.5%) than for men (39.7%) (РЗС, 2016). According to the National Employment Service, the structure of required qualifications in the Republic of Serbia in 2015 was 0.46% for primary school, 52.52% for secondary school, 35.79% for high school, 11.18% for master, 0.02% for master's degree, and 0.03% for doctorate (Figure 1).
Unemployment is a complex problem that can have direct and lasting consequences for individuals, manifested in falling living standards and social vulnerability. In addition, Serbia faces significant demographic and social problems such as a high share of elderly persons (over 65 years) and the emigration of young people (Vujadinović et al., 2013). Because of its complexity, solving these problems requires a comprehensive and multidisciplinary approach. It is necessary to consider all the constraints and apply the most effective instruments available to the labor market, as well as adequate instruments of economic policy and the economic system. In addition to active policy measures, the implementation of complex programs of economic and regulatory reform is needed to improve economic conditions and thereby achieve better labor market outcomes (Leković, Marjanović, 2011). Increased employment is achieved by a combination of economic and social policies. To bring unemployment down to a lower level, the necessary reforms are to:
• stimulate the demand for labor by creating a business environment conducive to investment and growth in economic activity;
• improve the supply of labor through reform of the education system;
• make the labor market more flexible by removing deficiencies in labor legislation and expanding communication between employees and employers (Mijatović, 2012).
The experience of developed market economies, especially those with a social-market orientation, is based on the principles of competition and includes a set of constitutive and regulatory principles as the basis for labor market regulation. In this way, appropriate regulation of the labor market serves, in the long term, the realization of economic interests expressed in the sustainability of the growth and development of the economy and society, and at the same time represents a stable instrument for improving living standards (Leković, Marjanović, 2011). In order to create conditions for the functioning of the labor market, it is necessary to improve its transparency, as well as to facilitate the mobility of economic actors and the flexibility of benefits. This includes establishing a comprehensive institutional regulation of the labor market within which workers and employers operate when making decisions. The institutional regulation of the labor market comprises laws, procedures, and functioning institutions. Regulating these areas provides greater mobilization of the labor market's potential to reduce unemployment and expand the scope of employment (Leković, Marjanović, 2011).
The successful development of the City of Belgrade depends largely on its economic competitiveness and accessibility. Belgrade's competitiveness in the global economy is to a certain degree determined by the sustainable use of the city's territorial capital and potential. The principle of accessibility is conditioned by sustainable development and by the reconstruction and modernization of network infrastructure, especially the regulation of traffic. Through rational use of its geographical position, adaptation of its economic structure to modern development processes (globalization, the technological revolution, competition in global and integrated markets), and increasing its competitive advantage in regional integration, the City of Belgrade can secure a significant place and role in the international division of labor. The transition in Serbia confronts regional and local economies with increasing challenges. The process of globalization affects the city's economy, bringing both new opportunities and increased constraints.
Analysis of the data on the economically active population of the City of Belgrade (44%) reveals significant differences, reflected in the share of persons performing professions (36%) compared to the unemployed (8%). The economically inactive population accounts for 56% of the City of Belgrade. Of that number, children under 15 years of age account for 14%, pensioners 24%, pupils and students aged 15 and over 8%, and persons who perform only household chores in their own household and other persons 10% (Figure 2). Taking the total population of the City of Belgrade into account, the largest share is made up of those who perform professions, followed by pensioners and children under 15. The reasons why the unemployed wait a long time for a new job are numerous. The most common reason is the lack of vacancies. Another is that long-term unemployed workers lost their jobs due to restructuring. Jobs require a certain level of education and skills. It is therefore particularly important to focus attention on the quality and type of training programs, the competence of trainers, the development of workers' skills through education, and their role in meeting the needs of the labor market. The recovery of the city's economy has been evident since 2000, with five years of dynamic and steady growth of the social product at an average annual rate of 5.6% (2001-2005), which enabled a multiple increase of the social product per capita (Градски завод за информатику и статистику, 2007). The economic structure is gradually changing (60% of the social product is created in the tertiary sector, with trade and other activities generating one third of the social product), taking primacy over industry. The economic structure of Belgrade, dominated by the tertiary-quaternary sector, defines its role as an organizational, administrative, service, educational, scientific-research, and cultural center. At the same time, industry is being modernized, defining Belgrade as an industrial center with an important place in the wider regional context. Employment increased from 430,000 in 2000 to 630,000 in 2008. In 2011, 67% of employees worked in the service sector, and an increase in employment in independent activities was particularly evident. Belgrade accounts for 40.2% of all employees in Serbia (РЗС, 2011).
Analysis of the data on the economically active population of the City of Belgrade by gender shows a higher percentage of males (23.8%), while the shares of those who perform a profession are more even (18.7% males and 18.1% females). When it comes to unemployment, there are likewise no large differences in the percentages by gender. Those who have previously worked make up 6.1%, and first-time job seekers 3.1%; of the total population, 9.2% are unemployed (Table 1). The development of commercial activities in Belgrade is characterized by pronounced polycentricity and the decentralization of office space in the City, which is the basic development orientation. The main aim of the city's economy is dynamic, coherent, and competitive development in line with development trends in Europe and the world: transregional integration of activity and participation in the international division of labor based on the principles of sustainability and profitability, knowledge, and market-confirmed quality of goods and services. The development of infrastructure, investment in knowledge, encouragement of small and medium enterprises, and improvement of the system of public financing can bring about change in the existing economic structure of the city.
The concentration of production activities in peri-urban areas is increasing along the main traffic corridors and in the neighboring municipalities in the contact zone, while production is being moved out of the narrower area of the City. The long-term aim of agricultural development is the establishment of a competitive and market-oriented sector based on intensive production and high environmental standards. New technologies contribute to the conservation of natural resources and to meeting the growing demand of the Belgrade market. They promote the peri-urban parts of the city through the gradual improvement of living standards, which should be supported by an appropriate agricultural policy and a financial and legal framework. This includes increasing productivity and competitiveness and improving the techniques, technology, and quality of agricultural production. The modernization of the food industry through the introduction of new technologies and standards, better connections with primary agricultural production, and the development of new processing capacities based on available raw materials will significantly increase production and improve its structure.
The concept of economic development is based on developing the functions of Belgrade as a center of tertiary services: trade, transportation, finance, tourism, and construction. The most important territorial capital for the development of a modern, profitable, and competitive city economy consists of its geographical location, land, water and energy potential, and human resources. To create a business environment attractive for investment, the City strives to provide adequate institutional, organizational, technical, and financial conditions.
Conclusion
Belgrade, as a regional center, should make use of its resources to create a competitive economy, with economic activity in line with its potential; it should respect and promote its specific characteristics, support the development of small urban centers, and provide greater employment as one of the most important indicators of economic and social development. It is necessary to improve production characteristics by forming a market-profiled, modern production structure, with intensive development of small and medium-sized enterprises and with measures and instruments to attract competitive and profitable industrial activity. To overcome these and the numerous other constraints that characterize the labor market in Serbia, comprehensive economic policy measures are needed, alongside active labor market policies.
The main aim of the new employment policy is the establishment of an efficient, stable, and sustainable trend of employment growth until 2020. In line with the accession process, employment policy, as well as the activities of labor market institutions, must be fully harmonized with the EU while reducing the differences in labor market indicators.
The adoption and application of technical and technological achievements is necessary given that global markets are becoming increasingly demanding. Employers require adequately trained workers with specific skills and an appropriate organization of work. It follows that flexible labor markets tend toward greater economic growth, partly because companies become more competitive and profitable; in transition economies, increased competitiveness and long-term economic growth lead to higher employment.
Figure 2 - Economic activity in the City of Belgrade. Source: РЗС, 2011
Table 1 - Economic activity in the City of Belgrade by gender | 5,195.4 | 2017-01-01T00:00:00.000 | [
"Economics",
"Sociology"
] |
A Novel Minidumbbell DNA-Based Sensor for Silver Ion Detection
Silver ion (Ag+) is one of the most common heavy metal ions that cause environmental pollution and affect human health, and therefore, its detection is of great importance in the field of analytical chemistry. Here, we report an 8-nucleotide (nt) minidumbbell DNA-based sensor (M-DNA) for Ag+ detection. The minidumbbell contained a unique reverse wobble C·C mispair in the minor groove, which served as the binding site for Ag+. The M-DNA sensor could achieve a detection limit of 2.1 nM and sense Ag+ in real environmental samples with high accuracy. More importantly, the M-DNA sensor exhibited advantages of fast kinetics and easy operation owing to the usage of an ultrashort oligonucleotide. The minidumbbell represents a new and minimal non-B DNA structural motif for Ag+ sensing, allowing for the further development of on-site environmental Ag+ detection devices.
Introduction
Silver ion (Ag+) has been widely used as an antiseptic in cosmetics, building materials, and medical products owing to its antibacterial properties [1-4]. However, overuse of Ag+ inevitably leads to environmental pollution. Human exposure to Ag+ pollution mainly comes from the release of airborne silver nanoparticles and natural water contaminated by industrial sources [5,6]. The tolerable concentration of Ag+ in drinking water is ~927 nM as recommended by the World Health Organization [7]. Excessive Ag+ ingestion can cause certain serious health consequences, such as respiratory system injury, organ failure, and even cancer [6,8-11]. Various methods have been developed for detecting low concentrations of Ag+ in environmental samples and drinking water sources. At present, Ag+ detection is mainly carried out by conventional analytical methods such as inductively coupled plasma mass spectrometry [12], optical emission spectrometry [13], atomic absorption spectrometry [14,15], and laser ablation microwave plasma torch optical emission spectrometry [16]. These conventional methods are sensitive and selective, but they rely on expensive instruments and intensive labor.
In recent years, nucleic acid molecules have gained prominence in the fields of sensing and material science because of their programmability and predictability through the formation of complementary base pairs [17]. DNA molecules have been used to design sensors for detecting metal ions such as Ag+, UO2 2+, Cu2+, Ca2+, Mg2+, Hg2+, and Pb2+ [18-26]. In general, there are mainly two DNA-based strategies for Ag+ detection. The first strategy utilizes an Ag+-dependent DNAzyme that can irreversibly cleave an RNA or DNA substrate in the presence of Ag+ [22]. The second strategy is based on the well-established knowledge that Ag+ binds to cytosine (C) at the N3 site to coordinate and stabilize a C·C mismatch [27,28]. Ag+ can induce the formation of DNA i-motif or hairpin structures that contain C·C mismatch(es), thus giving reporting signals upon DNA conformational change [26,29-33]. Moreover, the duplex or hairpin-forming strands can also be assembled onto nanomaterials for signal amplification [34-38]. The second strategy can achieve a low detection limit, but the reported sensors generally used relatively long oligonucleotides, which might make the Ag+-induced DNA conformational change slow. For instance, a DNA sensor based on a 20-nucleotide (nt) hairpin required an incubation time of at least 10 min for Ag+ detection. Therefore, a DNA sensor using a short oligonucleotide is expected to have advantages of fast response, easy operation, and probably anti-interference capability in a complex environment, which allow for the further development of on-site environmental detection devices [33,39].
Minidumbbell (MDB) is a type of non-B DNA structure formed by 8-10-nt sequences [40,41]. The MDB structure was initially found to form in CCTG tetranucleotide repeats, which are associated with the neurodegenerative disease myotonic dystrophy type 2 [40,41]. The CCTG MDB is simply composed of two repeats, i.e., 5'-CCTG CCTG-3', and each repeat folds into a type II tetraloop. The C1-G4 and C5-G8 adopt Watson-Crick loop-closing base pairs; C2 and C6 fold into the minor groove, whereas T3 and T7 stack on the C1-G4 and C5-G8, respectively (Figure 1) [40]. One of the most interesting features of this MDB is that the two minor groove residues form a unique reverse wobble C2·C6 mispair containing one/two hydrogen bond(s) or Na+-mediated electrostatic interactions at neutral pH [42], or a C2+·C6 mispair containing three hydrogen bonds with C2 being protonated at acidic pH (Figure 1) [43]. Upon lowering the pH from 7 to 5, the melting temperature (Tm) of the CCTG MDB increased from ~20 °C to 46 °C [43]. Apart from pH, we wondered if Ag+ could coordinate the C2·C6 mispair to stabilize the MDB and then induce a DNA conformational change for Ag+ sensing. Here we report a novel and minimal DNA sensor, based on the CCTG MDB, for Ag+ detection with high sensitivity and fast kinetics.
Figure 1. The averaged solution nuclear magnetic resonance (NMR) structure of the CCTG MDB at pH 7 (PDB ID: 5GWL) and pH 5 (PDB ID: 7D0Z). C2 and C6 formed predominantly a one-hydrogen-bond mispair at pH 7, whereas they formed a stable three-hydrogen-bond mispair at pH 5 with C2 being protonated.
DNA Sequence Design and Sample Preparation
Our designed M-DNA sensor was a duplex formed by the CCTG MDB strand (5′-CCTG CCTG-3′) and its complementary strand (5′-CAGG CAGG-3′), which were named CCTG2 and CAGG2, respectively. As a control, a self-complementary 8-bp duplex formed by 5′-GCAGCTGC-3′ was used. The high-performance liquid chromatography (HPLC)-purified DNA samples were purchased from Sangon Biotech (Shanghai, China), and they were further purified in our laboratory using diethylaminoethyl sephacel anion exchange column chromatography and Amicon Ultra-4 centrifugal filter devices. The ultra-violet (UV) absorbance at 260 nm was measured for DNA quantitation.
Preparation of SYBR Green I (SGI) and Metal Ion Stock Solutions
SGI (10,000×) was purchased from Beijing Solarbio Science and Technology Co., Ltd. (Beijing, China) and diluted using DMSO to a final concentration of 100× or 10× as the stock solution. It is noted that SGI 1× was equivalent to a concentration of 1.96 µM. The analytical-grade AgNO 3 , KCl, LiCl, CaCl 2 , MgCl 2 , MnCl 2 , CoCl 2 , CuSO 4 , BaCl 2 , and NiSO 4 were purchased from Sinopharm Chemical Reagent Co., Ltd. (Beijing, China) and dissolved using DI water to a final concentration of 50 µM as the stock solutions.
NMR Experiments
To monitor the binding of Ag+ to the CCTG MDB, NMR experiments were performed using a Bruker AVANCE NEO 400 MHz spectrometer. One-dimensional (1D) 1H NMR experiments were conducted at 25 °C using the excitation sculpting pulse sequence to suppress the water signal.
Circular Dichroism (CD) Experiments
CD experiments were performed using a Chirascan V100 CD spectrometer with a bandwidth of 1 nm at room temperature, unless otherwise specified. The CD samples (~100 µL) were placed in a cuvette of 0.5 mm path length, and the CD spectra were collected from 200 to 350 nm with a step size of 1 nm. For each sample, three sets of scans were acquired, and an average value was taken. CD spectra were background-corrected using the corresponding buffer solution.
Fluorescence Experiments
Fluorescence experiments, except for the kinetic study of Ag + sensing, were performed using a Shimadzu RF-6000 spectrometer at room temperature. The fluorescence samples (~2 mL) were placed in a 10 mm four-sided glazed quartz cuvette, and the fluorescence spectra were collected from 512 to 650 nm with a step size of 1 nm. Fluorescence intensity was recorded at 520 nm with an excitation wavelength of 492 nm. The excitation and emission band widths were 5 nm. For a kinetic study of Ag + sensing, fluorescence experiments were performed using an Edinburgh FLS1000 photoluminescence spectrometer at room temperature. The sample containing a DNA sensor in the absence of Ag + (~2.5 mL) was first placed in a 10 mm four-sided glazed quartz cuvette, and the fluorescence intensity at 520 nm was recorded from 0 to 180 s with a step time of 2 s. Ag + was then added to this sample, and the fluorescence intensity was immediately recorded from 0 to 180 s with a step time of 2 s. The excitation and emission band widths were 2 nm.
The detailed sample conditions for NMR, CD, and fluorescence experiments are stated in the figure legends.
Ag + Induces a Conformational Change from Duplex to MDB
One-dimensional (1D) 1H NMR experiments were first performed to investigate if Ag+ could bind to the C2·C6 mispair of the CCTG MDB. The spectra showed that upon adding Ag+ to the CCTG MDB, the H6 proton signals of C2 and C6 became broadened while those of T3 and T7 remained sharp and almost unchanged, suggesting that Ag+ bound to the C2·C6 mispair (Figure 2). In addition, the C1 H6, G4 H8, C5 H6, and G8 H8 peaks were also broadened, as it has been reported that Ag+ can also bind to C-G base pairs [44].
We then tested if Ag+ could promote MDB formation to induce a DNA conformational change, which is the prerequisite of most DNA sensors. For this aim, we prepared a DNA duplex formed by the CCTG MDB strand (5′-CCTGCCTG-3′), namely CCTG2, and its complementary strand (5′-CAGGCAGG-3′), namely CAGG2, at pH 8/7/6 and collected CD spectra to monitor the DNA conformational change upon Ag+ titration at 25 °C. These two strands formed a duplex in the absence of Ag+, as indicated by a positive CD band at 265 nm (Figure 3A-C, black lines) [45]. Upon adding Ag+ to the duplex at a DNA:Ag+ ratio of 1:2, a new major band at 290 nm was observed at pH 6 but was not obvious at pH 7 and 8 (Figure 3A-D, red lines). The CD band at 290 nm is characteristic of the CCTG MDB [46], suggesting that Ag+ efficiently induced a conformational change from the duplex to the MDB at pH 6. Notably, the DNA:Ag+ ratio of 1:2 showed the maximum population of Ag+-induced MDB (Figure 3C). This may be because Ag+ also binds non-selectively to C-G base pairs in the MDB (Figure 2), and thus more Ag+ is required to promote MDB formation.
We did not lower the pH further, as previous work has demonstrated that at pH 5 the CCTG MDB completely dissociates from the duplex owing to its much higher thermodynamic stability [43]; therefore, there would be no further conformational change upon adding Ag+. We also performed the Ag+ titration at 35 °C to examine if this system could function at an elevated temperature. However, the CD signal of the MDB was observed even without adding Ag+ (Figure 3E), which could be attributed to the relatively higher thermodynamic stability of the MDB than the duplex at 35 °C and pH 6. Zhang et al. have also reported that a higher temperature leads to partial melting of the initial DNA duplex and thus a lower sensitivity [47].
Design and Optimization of the CCTG MDB-Based DNA (M-DNA) Sensor
Based on the Ag+-induced formation of the CCTG MDB at pH 6 (Figure 3C,D), we designed the M-DNA sensor, which was simply composed of the 8-bp duplex formed by CCTG2 and CAGG2. SYBR Green I (SGI) was used as a fluorescence reporter: it was expected to emit strong fluorescence when bound to the duplex in the absence of Ag+ and weak fluorescence when the duplex was converted to the MDB in the presence of Ag+ (Figure 4A). To ensure that SGI would not affect the DNA conformational change, CD spectra were collected without and with SGI, and the results showed that the Ag+-induced conformational change still occurred effectively (Figure S1). The fluorescence responses of the sensor were then compared at different M-DNA concentrations and SGI:M-DNA ratios (Figure S2). Therefore, the M-DNA used for Ag+ sensing in the following experiments contained 50 nM CCTG2, 50 nM CAGG2, and 50 nM SGI in 10 mM NaPi at pH 6, unless otherwise specified.
To further verify whether the CCTG MDB played an important role in the M-DNA sensor for Ag + detection, we also performed Ag + titration on a controlled DNA (named C-DNA), which was an 8-bp self-complementary duplex. When the mixture of 50 nM C-DNA and 50 nM SGI in 10 mM NaPi at pH 6 was titrated with Ag + , there was only a little change in fluorescence intensity ( Figure S3), suggesting that the CCTG MDB played an irreplaceable role in Ag + sensing.
Kinetics, Sensitivity, and Selectivity of the M-DNA Sensor
One of the most interesting features of this M-DNA sensor is the use of an ultrashort 8-nt oligonucleotide, which is expected to undergo a much faster conformational change than longer i-motif and hairpin sequences [26,29,30,32]. Therefore, we also evaluated the kinetics of this M-DNA for Ag+ sensing. The fluorescence intensity (520 nm) of the M-DNA sensor without Ag+ was recorded from 0 to 180 s with a step time of 2 s. Ag+ was then added to the same sample, and the fluorescence intensity was immediately recorded from 0 to 180 s with the same step time. Figure 4B shows that immediately after adding Ag+, the fluorescence intensity drastically decreased and remained almost unchanged throughout the entire 180 s monitoring period. Therefore, it is safe to conclude that the reaction was completed within the acquisition time of the first data point, i.e., 2 s. It has been reported that the Ag+-triggered conformational change from a single-stranded DNA to a 21-nt i-motif was completed in ~15 s [26,29,30,32]; it is therefore reasonable that the conformational change to an 8-nt MDB was much faster.
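As a rough illustration of how the 2 s step time bounds the measurable response, the sketch below compares kinetic traces recorded before and after Ag+ addition. This is not the authors' analysis code: the file names, the trace values, and the 5% settling criterion are assumptions made purely for illustration.

```python
import numpy as np

# Assumed exports of the 520 nm kinetic traces (time in s, intensity in a.u.);
# file names and contents are placeholders, not the measured data.
t_before, f_before = np.loadtxt("kinetics_no_ag.txt", unpack=True)    # 0-180 s, 2 s step
t_after, f_after = np.loadtxt("kinetics_with_ag.txt", unpack=True)    # recorded right after adding Ag+

baseline = f_before.mean()
drop = baseline - f_after                  # fluorescence quench relative to the Ag+-free baseline
settled = drop >= 0.95 * drop[-1]          # within 5% of the final quenched level (assumed criterion)

# With a 2 s acquisition step, the response time can only be bounded by the
# first recorded sample that has already settled.
first_settled = int(np.argmax(settled))
print(f"signal settled by t = {t_after[first_settled]:.0f} s "
      f"(resolution limited by the {t_after[1] - t_after[0]:.0f} s step)")
```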
The M-DNA was then used to sense Ag+ at various concentrations ranging from 0 to 200 nM (Figure 4C). There was a good linear correlation between the fluorescence intensity and log[Ag+]/log[M-DNA]. Following the rule of three times the standard deviation over the blank response [48], the Ag+ detection limit was determined to be ~2.1 nM. As the tolerable level of Ag+ in drinking water is ~927 nM [7], the detection limit of the M-DNA sensor should be sufficient for detecting Ag+ in real samples.
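The 3σ detection-limit estimate can be reproduced with a short script. This is a minimal sketch, not the authors' analysis: the intensity values, blank replicates, and the simple least-squares fit of intensity against log10[Ag+] are illustrative assumptions.

```python
import numpy as np

# Hypothetical calibration data: fluorescence intensity (a.u.) at 520 nm
# versus Ag+ concentration (nM); values are illustrative, not measured.
ag_conc_nM = np.array([5, 10, 25, 50, 100, 200])
intensity = np.array([0.92, 0.85, 0.74, 0.63, 0.52, 0.40])

# Blank replicates (sensor without Ag+), also illustrative.
blanks = np.array([0.980, 0.975, 0.983, 0.978, 0.981])

# Linear fit of intensity against log10 of concentration, mirroring the
# reported linear correlation with log[Ag+].
slope, intercept = np.polyfit(np.log10(ag_conc_nM), intensity, 1)

# Rule of three standard deviations over the blank response: the smallest
# distinguishable signal is blank mean - 3*sigma (the signal decreases
# with increasing Ag+ for this sensor).
signal_at_lod = blanks.mean() - 3 * blanks.std(ddof=1)
lod_nM = 10 ** ((signal_at_lod - intercept) / slope)
print(f"Estimated detection limit: {lod_nM:.1f} nM")
```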
The anti-interference capability of the M-DNA sensor for Ag+ detection in a complex environment was also evaluated. As drinking water sources may contain other metal ions, we evaluated the fluorescence response of the M-DNA to K+, Li+, Ca2+, Mg2+, Mn2+, Co2+, Cu2+, Ba2+, and Ni2+, and the results showed only tiny fluorescence changes upon adding these ions (Figure 5A). Furthermore, an additional experiment was performed to examine if the M-DNA could detect Ag+ in the presence of these interfering metal ions. Upon adding 50 nM Ag+ to the solutions containing the respective interfering metal ions, the fluorescence change became significant and reached a level similar to that obtained with 50 nM Ag+ alone (Figure 5B). Na+ was not included as an interfering ion in this study because the buffering system contained 10 mM NaPi. Approximately 10 to 200 mM Na+ is also commonly used in buffering systems for many DNA-based sensors to neutralize the negatively charged phosphodiester backbones [26,29-31,34]. The concentrations of non-Ag+ ions vary in different water samples, e.g., a few mM Na+ in most Chinese river and lake basins [49] and hundreds of mM Na+ in sea water [50]. The M-DNA sensor should therefore be applicable for detecting Ag+ in common river and lake basins, although its performance may need to be further improved for sensing Ag+ in water samples containing high concentrations of interfering ions (e.g., sea water).
Ag + Detection in Tap Water and Lake Water Samples Using the M-DNA Sensor
To examine the performance of the M-DNA sensor for Ag + detection in other water sources, we detected Ag + in tap water samples and two different lake water samples. The local tap water and lake water samples were collected and boiled for 5 min to remove chlorine, and lake water samples were further filtered with a 0.22 µm membrane following the reported procedures in the literature [26]. The M-DNA sensor was prepared using the treated tap and lake water samples instead of laboratory DI water, and no Ag + was detectable in these samples. We then added Ag + with known concentrations to the M-DNA sensor and recorded the fluorescence intensity. The Ag + concentration was calculated using the calibration curve shown in Figure 4C. The recovery ranged from 93.3% to 98.5% in tap water samples and 96.7% to 107.8% in lake water samples (Table 1), revealing a good accuracy of the M-DNA sensor for Ag + detection in environmental water sources.
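The recovery values in Table 1 follow from inverting the Figure 4C calibration and comparing the recovered concentration with the spiked one. The sketch below is a minimal illustration under assumed calibration parameters and placeholder measurements; none of the numbers are the reported data.

```python
import numpy as np

# Hypothetical calibration parameters from the Figure 4C fit
# (intensity vs. log10 of Ag+ concentration in nM); illustrative only.
slope, intercept = -0.325, 1.17

def recovered_concentration(measured_intensity):
    """Invert the calibration curve to estimate [Ag+] in nM."""
    return 10 ** ((measured_intensity - intercept) / slope)

# Spiked concentrations (nM) and measured intensities for a water sample
# (placeholder numbers, not the values reported in Table 1).
spiked_nM = np.array([20.0, 50.0, 100.0])
measured = np.array([0.755, 0.623, 0.528])

for added, signal in zip(spiked_nM, measured):
    found = recovered_concentration(signal)
    recovery = 100.0 * found / added
    print(f"added {added:6.1f} nM  found {found:6.1f} nM  recovery {recovery:5.1f}%")
```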
Discussions on DNA-Based Ag + Sensors
As surveyed from the literature, DNA-based Ag+ sensors can generally be classified into three types: (i) mismatch-containing DNA functionalized with nanomaterials [34-37], (ii) mismatch-containing DNA only [26,29,30,32], and (iii) DNAzymes [22] (Table 2). The combination of mismatch-containing DNA and nanomaterials is an effective strategy to improve the detection limit by taking advantage of an amplified local DNA concentration and interaction surface. Recently, Pal et al. reported an electrochemical Ag+ sensor based on DNA hairpin-functionalized nanoflakes with a detection limit of 0.8 pM [38]. Compared with the detection limits of other sensors using only mismatch-containing DNA (i-motifs and hairpins), the detection limit of the M-DNA sensor was the lowest. In addition, the M-DNA sensor exhibited a response time of less than 2 s, which is kinetically much faster than sensors using i-motifs and hairpins (Table 2). However, the M-DNA sensor requires a controlled acidic pH to work, and this limitation may be addressed by chemical modification, such as cytosine methylation, to enhance the thermodynamic stability of the CCTG MDB. Overall, the M-DNA sensor uses an ultrashort oligonucleotide to achieve high sensitivity and a fast response for Ag+ detection.
Conclusions
In sum, we have designed a smart DNA sensor for Ag + detection using a new form of non-B DNA, i.e., a minidumbbell, apart from the previously used hairpins and i-motifs.
Owing to its small size, it shows fast response, high sensitivity, high selectivity, and good anti-interference capability for Ag + sensing. The performance of this M-DNA sensor may be further improved by chemical modification to further enhance the thermodynamic stability of the CCTG MDB. A successful demonstration of this M-DNA sensor provides new insights into Ag + detection, and paves the way for designing DNA-based tools to sense other metal ions and molecules.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bios13030358/s1, Figure S1: CD changes of M-DNA with SGI in 10 mM NaPi at pH 6 before and after adding Ag+; Figure S2: Fluorescence changes of various-concentration M-DNA in 10 mM NaPi at pH 6, with different SGI:M-DNA ratios before and after adding Ag+; Figure S3: Normalized fluorescence intensity at 520 nm of the M-DNA and C-DNA upon titrating various concentrations of Ag+.
Author Contributions: J.Z.: methodology, investigation, formal analysis, data curation, and writing-original draft; Y.L.: methodology, investigation, formal analysis, and writing-original draft; Z.Y.: investigation, formal analysis, and writing-original draft; Y.W.: conceptualization, methodology, formal analysis, and writing-review and editing; P.G.: conceptualization, methodology, formal analysis, writing-review and editing, supervision, and project administration. All authors have read and agreed to the published version of the manuscript. | 6,468 | 2023-03-01T00:00:00.000 | [
"Chemistry"
] |
Does rotation increase the acoustic field of view? Comparative models based on CT data of a live dolphin versus a dead dolphin
Rotational behaviour has been observed when dolphins track or detect targets; however, its role in echolocation is unknown. We used computed tomography data of one live and one recently deceased bottlenose dolphin, together with measurements of the acoustic properties of head tissues, to perform acoustic property reconstruction. The anatomical configuration and acoustic properties of the main forehead structures of the live and deceased dolphins were compared. Finite element analysis (FEA) was applied to simulate the generation and propagation of echolocation clicks, to compute their waveforms and spectra in both near- and far-fields, and to derive echolocation beam patterns. Modelling results from both the live and deceased dolphins were in good agreement with click recordings from other, live, echolocating individuals. FEA was also used to estimate the acoustic scene experienced by a dolphin rotating 180° about its longitudinal axis to detect fish in the far-field at elevation angles of −20° to 20°. The results suggest that the rotational behaviour provides a wider insonification area and a wider receiving area. Thus, it may provide compensation for the dolphin's relatively narrow biosonar beam, asymmetries in sound reception, and constraints on the pointing direction that are limited by head movement. The results also have implications for examining the accuracy of FEA in acoustic simulations using recently deceased specimens.
Introduction
Dolphins, like all odontocetes, utilize a biosonar (echolocation) system for navigation and foraging (Au 1993). Series of echolocation clicks are produced by a set of phonic lips and propagate through the forehead to shape a biosonar beam in the forward direction (Au and Simmons 2007). Dolphins have some control over the acoustic parameters (e.g. source level and peak frequency) of their echolocation clicks (Moore and Pawloski 1990), depending on the echolocation task and environment (Houser et al 1999). They can further adjust the properties of the outgoing beam (e.g. beamwidth and direction), possibly through muscular reshaping of the forehead and inflation of the air sacs (Moore et al 2008, Wisniewska et al 2012). Our understanding of echolocation signal characteristics and biosonar performance is based in part on signal measurements in the wild and in the laboratory (under controlled experimental tasks), as well as on anatomical and acoustical modelling using finite element analysis (FEA).
FEA is a numerical, computer-based technique that allows us to model how an object with complex geometry and material properties (such as a dolphin's head with complex anatomical structures) responds to physical forces (e.g. as a result of acoustic pressure). The object is simulated as a finite set of connected elements. Physical properties of the objects determine the strength of the connections (e.g. the bulk modulus of bones and other tissues). The connections between elements are analogous to 'springs' of various stiffnesses interconnecting the objects, each element of which may have a different mass. Mathematical equations then determine how the object responds to sound, vibration, or other physical situations. For example, FEA has been used to develop hypotheses about the physical mechanisms underlying dolphin target detection and discrimination (Feng et al 2018, Wei et al 2021). The numerical models are constructed based on computed tomography (CT) images of dolphin heads (which reveal the anatomical structures) and physical property measurements (e.g. sound speed and density) of head tissues. The specimens are typically dead, either fresh (e.g. recently deceased after stranding) or older, frozen, then thawed (Aroyan et al 1992, Cranford et al 2014). There might be differences in morphology (e.g. shape of air sacs) and properties (sound speed and density of tissues) between live, fresh, and frozen-then-thawed samples (Mckenna et al 2007, Cranford et al 2014), and hence comparisons are needed. Ultimately, any model needs to be able to replicate the echolocation click features (waveforms, spectra, beam patterns, etc.) recorded from live, echolocating dolphins.
Bottlenose dolphin (Tursiops truncatus) echolocation signals are characterized by a short duration, relatively high peak frequency, wide bandwidth, and high source level. Mean durations of individual clicks of 18-23 µs have been reported (Wahlberg et al 2011). Mean peak frequencies may be as high as 84-124 kHz (Au 1993, Akamatsu et al 1998, Wahlberg et al 2011, de Freitas et al 2015, Ladegaard et al 2019). Mean bandwidths (3-dB) greater than 85 kHz have been measured (Au 1993, Houser et al 1999, de Freitas et al 2015). Mean peak-to-peak source levels can be in excess of 220 dB re 1 µPa (Au 1993). The echolocation beam is relatively narrow, with a mean 3-dB beamwidth of approximately 10° for both the vertical and horizontal planes (Au 1993). While such highly directional biosonar features have advantages in long-range echolocation, there can be situations when a wider field of view is beneficial, wider than what biosonar muscular adjustments allow in the vertical plane (the horizontal beamwidth can exceed 26° according to Moore et al 2008). This raises the question of whether dolphins can enhance performance through additional fine- or gross-motor behaviours.
Behavioural adaptations that might enhance sensory functions include the lateralized behaviour accompanying foraging in several dolphin species. Dusky dolphins (Lagenorhynchus obscurus) keep the right side of their body and the right eye towards the targeted prey while circling their prey clockwise (Vaughn-Hirshorn et al 2013). Atlantic bottlenose dolphins create and swim through a plume of mud while keeping the right side of their bodies towards the water-borne prey (Lewis et al 2003). This right-side bias might be a result of right-eye supremacy in visual discrimination and visuospatial processing in dolphins (Kaplan et al 2019). Alternatively or additionally, these lateralized behaviours with a right-side bias might always be associated with echolocation behaviour and be a direct result of the echolocation clicks being produced by the dolphin's right set of phonic lips (Madsen et al 2010, 2013). Another gross-motor behaviour that might enhance echolocation performance is rotation about the body's longitudinal axis. Wild and captive bottlenose dolphins have demonstrated rotational behaviour: they rotated their bodies about the longitudinal axis by a certain angle (e.g. 90°, 180°) when tracking or searching for targets (supplementary material). Whether this rotation serves a communication function or, in fact, enhances echolocation performance has not been studied to date. In this article, our hypothesis is that rotational behaviour during echolocation increases the acoustic field of view (i.e. both the ensonified field and the field from which echoes are received). To test our hypothesis, we constructed CT-image-based 2D finite element (FE) models to numerically estimate the acoustic scene experienced by an echolocating dolphin rotating 180° about its longitudinal axis. Using CT data of one live and one recently deceased bottlenose dolphin, we were able to compare model outputs. These outputs, specifically, were the waveforms and spectra in both near- and far-fields, as well as the echolocation beam patterns, all of which we ultimately compared with acoustic measurements from live, echolocating individuals.
CT data acquisition and analysis
The specimens were all adult bottlenose dolphins. The CT scan data of the live bottlenose dolphin were provided by the U.S. Navy Marine Mammal Program (MMP) located at the Navy Information Warfare Center (NIWC) Pacific in San Diego, CA. CT data were collected at the Balboa Navy Medical Center in San Diego, CA on a GE Optima CT580 as part of a veterinary procedure. The male dolphin was 21 years old at the time of scanning and was scanned in a prone position using a 2.5 mm spiral acquisition at 120 kV and 125 mA. Scans collected by the MMP were performed in accordance with approved protocols of the NIWC Pacific Institutional Animal Care and Use Committee (IACUC) and followed all applicable U.S. Department of Defense guidelines for the care and use of laboratory animals. The CT scan data of the fresh, recently deceased bottlenose dolphin (male, 10 years old) were provided by the Woods Hole Oceanographic Institution (WHOI) Biology Department. The head of the specimen was removed and then CT-scanned in a prone position using a Siemens Volume Zoom helical CT scanner, using a 1 mm spiral acquisition at 120 kV × 125 mA. Images were formatted in the transaxial plane at 0.1-1 mm slice increments. Raw acquisition data and all DICOM images were archived. Approval for the research from the IACUC for handling and examining the cadaveric specimens was granted by the Animal Use Committee of the WHOI after reviewing the research protocol. Specimens scanned were obtained postmortem from freshly stranded animals under NMFS and NOAA permits to D. R. Ketten.
Two sets of DICOM images were imported into Horos™ (Horos Project, Geneva, Switzerland) for analysis and 3D geometrical model reconstruction, as shown in figure 1. The anatomical configuration of the main structures of the live and deceased dolphins, such as the melon, connective tissues, air sacs, brain, etc. were carefully compared slice-by-slice in the sagittal plane, transverse plane, and frontal plane. To better compare with the live dolphin beam formation experiment results, we rotated the head of the dolphin until the maxilla was roughly parallel to the ground. After carefully checking the CT data, we found that the fiducial markers on the melon did not affect the CT image of the underlying tissues.
Acoustic property reconstruction
The Hounsfield unit (HU) is a calibrated measure of radiodensity used in the interpretation of CT images. The HU values of tissues can be obtained automatically from CT data. We exported the distribution of HU values over the animal's entire head to a text file, which contained the coordinates and the corresponding HU values.
Since the acoustic properties of tissues cannot be measured on a live animal, we used the relationships of HU-to-sound speed and HU-to-density from previous tissue measurements (Wei et al 2015) to convert the HU distributions to the distributions of sound speed and density. More details can be found in our earlier studies, in which the same procedures were used to reconstruct acoustic impedance models for the heads of a harbour porpoise (Phocoena phocoena) and Atlantic bottlenose dolphin based on CT data and measurements of the physical properties of tissues. Moreover, 3D acoustic property reconstruction was used to compare the anatomical structures of the live and deceased specimens.
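A minimal sketch of this conversion step is given below. The export file name and the linear coefficients are placeholders; the actual HU-to-sound-speed and HU-to-density relationships come from the tissue measurements reported in Wei et al (2015) and are not reproduced here.

```python
import numpy as np

# Per-voxel HU values exported from the CT data; the file name and the
# column layout (x, y, z, HU per row) are assumptions for illustration.
data = np.loadtxt("head_hu_export.txt")
coords, hu = data[:, :3], data[:, 3]

# Placeholder linear mappings; the real coefficients are derived from
# measured tissue properties (Wei et al 2015), not from these numbers.
def hu_to_sound_speed(hu):
    return 1400.0 + 0.5 * hu      # m/s, illustrative only

def hu_to_density(hu):
    return 1000.0 + 1.0 * hu      # kg/m^3, illustrative only

sound_speed = hu_to_sound_speed(hu)
density = hu_to_density(hu)
impedance = sound_speed * density  # acoustic impedance map of the head

np.savetxt("head_acoustic_properties.txt",
           np.column_stack([coords, sound_speed, density, impedance]))
```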
FE model construction
We selected a sagittal slice closest to the midline of the head that cut through the right set of phonic lips to create a 2D impedance model, which was imported into COMSOL Multiphysics modelling software (Stockholm, Sweden) for FEA. The FE models simulated click generation and propagation from the head into the water. Three main parts were included in the models: the head of an echolocating dolphin, surrounding seawater, and the target fish located in the acoustic far-field.
The reflecting fish was simulated by an oval swim bladder (axis ratio 1:1.93), which is responsible for the main echo when dolphins echolocate on fish (Au et al 2010a). Two orientations of the fish were modelled: (1) in line with the radius vector from the tip of the rostrum (i.e. horizontal at 0° elevation) and (2) perpendicular to the radius vector from the tip of the rostrum (i.e. vertical at 0° elevation). We further created a model by flipping the head of the dolphin upside down to simulate a dolphin rotating 180° about its longitudinal axis. The model setups are shown in figure 2, in which the grey points between −20° and 20° elevation display the locations of the fish in each test. The fish was moved from −20° to 20° at a constant radius of 0.6 m (i.e. range). The centre of the circle was set as a receiving point located right in front of the tip of the rostrum. The acoustic fields of returning echoes, with the fish located at each elevation, were computed. The COMSOL results were imported into OriginPro software (OriginLab, Northampton, MA, USA) for data analysis and plotting. This procedure was applied to the models of both the live and deceased dolphins.
Figure 2. Setup of the FE model to estimate the acoustic scene experienced by a dolphin rotating 180° about its longitudinal axis. The fishes (grey points) were located along a circle with a 0.6 m radius from −20° to 20° elevation. Two ovals represent the simulated fish orientated longitudinally and perpendicularly in the far-field.
Based on the CT data, the heads of both the live and deceased dolphins in the models contained internal structures such as the right set of phonic lips, melon, connective tissue, vestibular sac, nasal passage, premaxillary sac, maxilla, mandible, blubber, musculature, mandibular fat, brain, etc. The sound speed and density of the structures in the head of the live dolphin were input according to the acoustic property reconstruction results, whereas those of the structures of the deceased dolphin were taken from the previous study. The sound speed and density of seawater outside of the animals' heads were set as 1483 m s−1 and 998 kg m−3, respectively. The material properties of air were used to model the gas-filled swim bladders of fish.
We used COMSOL's free mesher to map the entire model into second-order triangular elements. The element size was set as a grid spacing of 1/10th of a wavelength λ at the centre frequency fc of the excitation signal at the source (λ = c_water/fc, where c_water is the sound speed in water). With fc = 60 kHz, the grid spacing was ∼0.25 cm. To simulate click propagation in free space with minimal reflections from the map's boundaries, a low-reflecting boundary condition (Bérenger 1994) was applied in the models.
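The grid-spacing rule reduces to a one-line calculation, reproduced below with the sound speed and centre frequency stated in the text.

```python
# Element size rule: one tenth of a wavelength at the excitation centre frequency.
c_water = 1483.0          # sound speed of seawater used in the models, m/s
f_c = 60e3                # centre frequency of the excitation signal, Hz

wavelength = c_water / f_c          # ~0.0247 m
grid_spacing = wavelength / 10.0    # ~2.5 mm, i.e. ~0.25 cm as stated
print(f"wavelength = {wavelength * 100:.2f} cm, grid spacing = {grid_spacing * 100:.2f} cm")
```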
A transient time-domain FE computation was performed with a time step of 0.8 µs. We set the sound source at the right phonic lips based on previous acoustic measurements (Madsen et al 2010, 2013). In the selected sagittal slice, the length of the right phonic lip (∼3 mm) was much smaller than the wavelength (at least 16.4 mm); therefore, a point source was used to model the source. A short-duration, wide-bandwidth pulse was used as the click excitation. It modelled the physical process in which the right phonic lips open and close rapidly to generate a short pulse of the form given in equation (1), where A is the pulse amplitude (Pa), f_0 is the centre frequency (Hz), t_p is the time from the onset of the pulse to its peak amplitude (s), and t is time (s).
The time of the pulse in equation (1) has to satisfy equation (2). The waveform and spectrum of the pulse can be found in Wei et al (2018).
FE model validation
The FE model based on the live dolphin CT data was compared to the FE model based on the deceased specimen. Specifically, the waveforms in the acoustic near-field along the animal's forehead, and the waveforms, spectra, and echolocation beam patterns in the acoustic far-field were compared. Both near- and far-field predictions from the live and deceased dolphin FE models were further compared to click recordings from live, echolocating dolphins using published data from earlier studies (Au 1993).

The vertical far-field beam pattern can be predicted from the FEA results by calculating the peak-to-peak sound pressure of an echolocation click spreading from the source (phonic lips) over a circle of 1 m radius. The predicted beam pattern of the FE model from the live dolphin CT scans was compared with that previously modelled from the dead dolphin CT scans. Both modelled beam patterns were further compared to direct beam pattern measurements from live, echolocating dolphins (Au 1993).
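The beam-pattern prediction described above reduces to taking the peak-to-peak pressure at receiver points on the 1 m circle and normalizing to the maximum. The sketch below illustrates that post-processing step; the synthetic traces stand in for the exported COMSOL time series and are not the authors' actual data.

```python
import numpy as np

def beam_pattern(pressure_traces, angles_deg):
    """Peak-to-peak beam pattern (dB re. maximum) from FE pressure time series
    sampled at receiver points on a 1 m circle around the source."""
    p2p = pressure_traces.max(axis=1) - pressure_traces.min(axis=1)
    return 20.0 * np.log10(p2p / p2p.max())

# Example with synthetic traces: 81 receiver angles, 512 time samples each.
angles = np.linspace(-40.0, 40.0, 81)
traces = np.exp(-((angles[:, None] - 5.3) / 10.0) ** 2) * np.sin(
    2.0 * np.pi * 60e3 * np.linspace(0.0, 5e-4, 512)[None, :]
)
pattern_db = beam_pattern(traces, angles)
```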
Morphology differences between live and freshly deceased specimens
The CT data of the live and deceased bottlenose dolphins were compared in three planes (sagittal, transverse, and frontal). Figure 3 displays one image from each plane. Panels A, B, and C of the live dolphin show greater air volume in the naris, vestibular sac, and spiracular cavity than panels D, E, and F of the dead dolphin, in which the air sacs were partially collapsed. Some air was visible in the brain of the deceased dolphin (D, E) but was clearly absent from the live dolphin's brain (A, B). The melon of the live dolphin was slightly longer than that of the deceased dolphin. Thus, a larger region between the forehead apex and the most anterior projection of the melon was observable in the head of the deceased dolphin in panel D than in panel A, which might be due to individual anatomical differences.
Acoustic property reconstruction results between live and fresh deceased specimens
The grayscale in the CT scan data can only show differences when the HU values of structures are significantly different (e.g. bones vs. soft tissues); therefore, figure 4 was plotted to display the anatomical features in colours corresponding to the derived acoustic properties. The lines with different colours were located at approximately the same positions on the 2D slices of the live and deceased dolphins (figure 4). The values of sound speed and density along the lines were extracted for quantitative comparison.
In the sagittal plane (figures 4(A) and (B)), the position of the melon of the deceased dolphin (purple line vs. blue line) was shifted by about 30 mm compared to that of the live dolphin, and a marginally larger forehead apex was observed. In the selected sagittal slices in figures 4(A) and (B), the distances between the forehead apex and the most anterior projection of the melon in the live and deceased dolphins' heads were approximately 50 and 65 mm, respectively. In the frontal plane (figures 4(C) and (D)), the sound speed and density of the connective tissues around the terminal region of the melon in the deceased dolphin's head were slightly higher than those in the live dolphin's head. These differences in the melon might be due to individually different anatomy or variation in the positioning of the two specimens during scanning, rather than being caused by death.
In both the sagittal and frontal planes, the melons of the live and deceased dolphins showed a similar trend, in which both sound speed and density gradually increased from the inner core to the outer layer. The results agreed well with the measurements by Norris and Harvey (1974) on a deceased specimen, which demonstrated an inhomogeneous melon. The melon is surrounded by the connective tissue, which has significantly greater values of sound speed and density. Therefore, the inhomogeneous melon combines with the connective tissue to form a distinct acoustic impedance gradient in the forehead of the dolphin (figures 4(C) and (D)). The melon thus plays an important role in sound propagation through the forehead by guiding the sound and matching acoustic impedance.
In general, the sound speeds and densities of the structures of the live and deceased dolphins in both the sagittal and frontal planes (including the melon, connective tissues, muscles, and bony structures) are very similar, supporting the previous conclusion by Mckenna et al (2007) and Soldevilla et al (2005), who suggested the variations of the sound speed and density values between live and postmortem specimens were insignificant.
FE modelling results
The comparisons of the click waveforms in the near-field are shown in figure 5(A) for the live dolphin FE model, the deceased dolphin FE model, and the live echolocating dolphin recordings. In all three cases, the major axis of the outgoing beam was in the region between points a and c. The comparisons of the click waveforms and spectra in the far-field are shown in figure 5(B). Both simulated clicks showed the typical broadband signal characteristics and were similar to the live measurements, albeit somewhat lower in peak frequency.
It should be noted that the click comparison between the FEA results and the live dolphin measurements in figure 5 is qualitative rather than quantitative, since the clicks produced by a live, echolocating dolphin are dynamic and may change with individual and task. A click-by-click analysis of clicks produced by an echolocating dolphin can show considerable variability in waveform and spectrum, which is likely accomplished through manipulation of the sacs and muscular control of the melon. The CT data used to build the FE models were static in the sense that they were not collected as a time series during actual echolocation behaviour. Therefore, the FEA results do not capture the click dynamics of a dolphin echolocating underwater. Moreover, a small discrepancy was found in the polarity of the simulated clicks between the live and dead dolphins. The live dolphin's click was consistent with the measurements of TRO (the same individual) from Finneran et al (2014), and the dead dolphin's click was consistent with Au's recordings. The clicks were formed by the reflected and direct signal components in the forehead transmission; different anatomical configurations could reflect the signals differently and result in different click waveforms in the far-field. Figure 6 compares the vertical far-field beam patterns: (1) calculated from FEA of the live dolphin CT scans, (2) calculated from FEA of the dead dolphin CT scans, and (3) measured from live, echolocating dolphins (Au 1993). The shapes, widths, and elevations of the two simulated and one measured beams were similar. The major axis of the simulated live dolphin's beam was elevated 5.3°, almost identical to the elevation angles of the main beams from the dead dolphin model and the measurements by Au (1993). The vertical 3-dB beamwidth computed from the live dolphin FEA was 8.5° (the averaged horizontal 3-dB beamwidth of the same individual, 'TRO', was 7.5°, measured by Finneran et al 2014), slightly narrower than those from the dead dolphin FEA (11.1°) and the measurements (10.2°; Au 1993). This is attributed to the different head sizes.
The diameter at the blowhole of the live dolphin was ∼31.5 cm, larger than that of the dead dolphin (∼28 cm); no measurements of the size of the live dolphin in the acoustic recordings by Au (1993) were available. The width of the beam pattern has been shown to be inversely proportional to the size of an animal's head (Au et al 1999, Wei et al 2016, Jensen et al 2018). In addition, the simulated far-field beam based on the live dolphin CT data had significantly less energy in the side lobes than the FEA results from the dead dolphin CT data, suggesting that the model results based on live dolphin CT data may match the live recordings slightly better in terms of the energy lost into the side lobes.
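The 3-dB beamwidths and main-axis elevations quoted above can be extracted from any modelled or measured beam pattern with a few lines of post-processing. The sketch below uses a hypothetical, roughly parabolic-in-dB pattern as a stand-in for the FEA output and assumes a single contiguous main lobe.

```python
import numpy as np

def beamwidth_3db(angles_deg, pattern_db):
    """Return the -3 dB beamwidth (deg) and the beam-axis elevation (deg),
    assuming a single contiguous main lobe."""
    angles_deg = np.asarray(angles_deg, dtype=float)
    pattern_db = np.asarray(pattern_db, dtype=float)
    main_axis = angles_deg[np.argmax(pattern_db)]
    within = angles_deg[pattern_db >= pattern_db.max() - 3.0]
    return within.max() - within.min(), main_axis

# Hypothetical pattern centred at +5.3 deg with an 8.5 deg 3-dB width:
angles = np.linspace(-40.0, 40.0, 1601)
pattern = -12.0 * ((angles - 5.3) / 8.5) ** 2
width, axis = beamwidth_3db(angles, pattern)   # width ~ 8.5 deg, axis ~ 5.3 deg
```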
Dolphin rotation simulation
We simulated the acoustic scene of an echolocating dolphin rotated 180° about its longitudinal body axis to detect a fish in the far-field (figure 2). Two fish orientations are shown in figure 7: perpendicular (A&B) and longitudinal (C&D) to the radius vector from the tip of the rostrum. Examples of the echo wavefronts received at the tip of the dolphin's rostrum when the fish was located at +10° elevation are plotted. The dolphin's outgoing beam pointed upwards (+5.3°) in the upright position (see section 3 above), but pointed downward (−5.3°) when the animal rotated 180°. The location of the fish relative to the beam axis was thus not symmetrical in the upright and upside-down cases. The dolphin's rotation to 180° led to a longer propagation path for the echo and an altered arrival waveform (figures 7(E) and (F)).
The acoustic fields at the moment when the echoes reached the tips of the rostrums in both the live and deceased dolphin models were computed for each case. Correlation analysis was performed between the echo acoustic fields from mirrored positions (e.g. +1° vs. −1°, +2° vs. −2°, etc.), and the results are shown in figure 8. In the 2D models, rotating 180° about the body axis created a mirror-image relationship between the geometries of the unrotated and rotated models (see figure 7). For example, when the fish was located at an elevation angle of +10°, rotating the dolphin 180° about its body axis would be equivalent to moving the fish to −10°. Therefore, the correlation coefficient between the echo acoustic fields at +10° and −10°, combined with differences in the signal spectra, can serve as a qualitative measure of the different acoustic fields that dolphins experience after body rotation.
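The mirrored-position comparison in figure 8 reduces to a correlation coefficient between two exported field snapshots. A minimal sketch using a simple Pearson correlation; the synthetic arrays are placeholders for the COMSOL output.

```python
import numpy as np

def field_correlation(field_a, field_b):
    """Pearson correlation coefficient between two echo acoustic fields
    (e.g. the field for the fish at +10 deg vs. the field at -10 deg)."""
    a = np.ravel(field_a).astype(float)
    b = np.ravel(field_b).astype(float)
    return float(np.corrcoef(a, b)[0, 1])

# Example with synthetic 2D pressure snapshots standing in for the exported fields:
rng = np.random.default_rng(0)
field_plus = rng.normal(size=(200, 200))
field_minus = 0.8 * field_plus + 0.2 * rng.normal(size=(200, 200))
r = field_correlation(field_plus, field_minus)
```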
It should be noted that the correlation coefficients in the longitudinal cases of both the live and dead dolphin models are more scattered than those in the perpendicular cases (figure 8). Differences in the significance of the correlations between the models were due to the orientation of the oval (simulated fish). When the simulated fish was orientated perpendicularly, the length of the reflection arc was ∼0.1 m; however, the length of the reflection arc reduced to ∼0.01 m when the oval was orientated longitudinally. The nearly ten-fold difference in the length of the reflection arc and the different shapes of the reflection boundaries caused variations in the returning echoes (see the distinct difference in amplitude between echoes A&B vs. C&D in figures 7(E) and (F)). In other words, the aspect dependence of the target greatly affected the spread of the echo reception field of view. Nevertheless, the trends of the two dead dolphin cases closely resembled those of the live dolphin, suggesting that FE modelling of dolphin biosonar transmissions can effectively be performed using freshly deceased specimens.

Figure 6. Comparison of vertical far-field beam patterns: modelled based on CT data from a live dolphin, modelled based on CT data from a deceased dolphin, and measured from a live, echolocating dolphin (Au 1993).
As an example, figure 9 shows comparisons of the received echo waveforms and spectra between the unrotated and 180° rotated models. When the simulated fish was located at 1° elevation, regardless of perpendicular or longitudinal orientation (figures 9(A) and (C)), the variations in the echo characteristics were limited. However, when the simulated fish was located at 20° elevation (figures 9(B) and (D)), significant variations were visible.
Discussion
Our understanding of dolphin biosonar behaviour and performance is informed through both measurements and modelling. In this article, we present an FE model of dolphin echolocation click production and propagation based on CT scans of a live dolphin. Model outputs were compared to those based on CT scans of a recently deceased dolphin and to acoustic recordings of clicks in both the near- and far-fields from live, echolocating dolphins.
Training live odontocetes for CT imaging is challenging (Houser et al 2004) and a method has not yet been developed to measure the physical properties of dolphin tissues in vivo. Therefore, FE acoustic models are usually based on CT scans and tissue measurements using specimens that are recently deceased (e.g. after live stranding) or thawed after having been frozen. The accuracy of the morphological descriptions extracted from the scans and tissue measurements is essential. Mckenna et al (2007) examined the post-mortem changes in geometry, density, and sound speed of anatomical structures based on a comparison between CT scans of a live and a post-mortem Atlantic bottlenose dolphin. Limited post-mortem differences were found in the morphology of the forehead structures. However, whether these limited differences would affect FE modelling results was not examined. Cranford et al (2014) found general similarities in forehead sound propagation between the FE models from a live and a carefully preserved post-mortem specimen. However, their simulated beam patterns (both vertical and horizontal beam patterns) were wider than the beam measurements from live, echolocating animals. More importantly, no data were presented about the signal characteristics of the modelled click for comparison with previously recorded clicks.
Our study filled these gaps and further demonstrated that the FEA results of the live and dead dolphins were similar in terms of the near- and far-field waveforms and spectra, as well as the beam pattern. Moreover, the simulated results of the two FE models were similar to echolocation click recordings from live dolphins (Au 1993, Au et al 2010b, Finneran et al 2014), suggesting that the two FE models can simulate certain aspects of the echolocation system of a live dolphin. Our results provide evidence that 2D FE acoustic models based on fresh, deceased specimens are sufficiently accurate that CT scans of live animals are not necessarily needed (depending on the goals of the simulation). We hypothesise, however, that FE

We presented a simplified, 2D model of dolphin rotational behaviour during echolocation. The correlation coefficients of the returning echo acoustic fields in figure 8 suggested that the rotation may provide additional information in the vertical plane about the targeted fish. When the targeted fish moved away from the dolphin's main beam axis in each test, we found strong tendencies for the correlation of the echo acoustic fields to decline (see figure 8). Because of the relatively narrow biosonar beam (Au 1993), the farther the targeted fish is from the main beam, the greater the differences in the echo acoustic fields during rotation (see figure 9). Thus, the advantage of rotational behaviour should be relatively limited if a fish is close to the main response axis of the echolocation beam. However, if a fish is located off the main response axis, the rotational behaviour could provide a wider insonification area and more acoustic information about the fish, thus compensating for both the relatively narrow echolocation beam and the constraints imposed by the limited head movement of the dolphin.

Figure 8 (caption excerpt): live-perpendicular and dead-perpendicular represent the simulated data from the live and deceased dolphins when the fish was placed perpendicularly; live-longitudinal and dead-longitudinal represent the simulated data from the live and deceased dolphins when the fish was placed longitudinally. All x-axes are scaled logarithmically, and linear regression lines were drawn to indicate trends.
Rotation not only increases the insonified field but also the area of reception. Dolphins' hearing thresholds are asymmetrical in both the vertical and horizontal planes (Au and Moore 1984, Aroyan 2001, Accomando et al 2020). In the vertical plane, dolphins tend to be more sensitive to sounds arriving from below, which is likely the result of specialized acoustic pathways for echolocation beginning in the lower jaw (Brill et al 1988, Supin and Popov 1993, Aroyan 2001, Cranford et al 2010). When the dolphin is stationary, the receiving windows are located in the two areas around the lower jaws (labelled in figure 2). Thus, rotation of the lower jaws with the body during rotational behaviour could potentially compensate for the dolphin's asymmetries in sound reception (Aroyan 2001) by orienting the most sensitive sound reception path towards the insonified target. It should be noted that we used an ellipse to represent a fish in this study. With acoustically reflective bone structures and swim bladders, real fishes would produce even more detailed echoes and greater differences between echoes at different angles; the reality is therefore likely to provide an even stronger case for our hypothesis. The next logical step is to conduct experiments and model dolphin rotation in 3D to test the hypothesis. We cannot exclude the possibility that rotational behaviour may serve other biological functions or that dolphins simply rotate for fun. Therefore, controlled acoustic experiments would be critical to further understand this behaviour. Our 2D study was only able to consider two positions of the dolphin: 0° and 180° rotated. Our 2D FE model provides a starting point for understanding the possible role of the frequently observed dolphin rotational behaviour. However, dolphins live in a 3D environment and their rotational behaviour exploits this dimensionality. Indeed, the creation of 3D acoustic fields for dolphin echolocation beams and returning echoes could provide more information, not only on the question of how rotational behaviour might benefit an echolocating dolphin, but on biosonar behaviour in general.

Figure 9. Comparisons of the received echo waveforms and spectra between unrotated and rotated models, when the fish was located at (A) 1° elevation in a perpendicular orientation, (B) 20° elevation in a perpendicular orientation, (C) 1° elevation in a longitudinal orientation, and (D) 20° elevation in a longitudinal orientation. The relative amplitude values were normalized according to the maximum peak-to-peak sound pressure of the received waveform in (A).
Conclusions
We constructed numerical models based on CT scan data from both a live and a freshly deceased Atlantic bottlenose dolphin to estimate the acoustic scene of an echolocating dolphin rotating 180° about its longitudinal axis while detecting a fish at elevation angles from −20° to 20°. The models suggest that dolphins experience different echo acoustic fields when rotating, which may provide extra information about the insonified fish, particularly when the fish is located away from the main response axis of the echolocation beam. The rotational behaviour provides not only a wider insonification area but also a wider receiving area, compensating for the dolphin's relatively narrow biosonar beam and its asymmetries in sound reception. This paper represents a first step towards understanding how the dolphin's rotational behaviour could contribute to its echolocation performance.
This study also compared the anatomical configuration and acoustic properties of the forehead structures between the live and freshly deceased dolphins, and examined the accuracy of FEA acoustic simulations using freshly deceased specimens. Our results suggest that freshly deceased specimens, if handled properly, may be sufficient alternatives for FE acoustic modelling when live specimens are not available. This has important implications for the study of the acoustic mechanisms of biosonar production and hearing in species only accessible after stranding (i.e. most species of echolocating whales are unavailable for experimental measurements).
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors | 7,911.4 | 2023-03-14T00:00:00.000 | [
"Physics"
] |
Gaussian Process Modeling Blazar Multiwavelength Variability: Indirectly Resolving Jet Structure
Blazar jet structure can be indirectly resolved by analyzing the multiwavelength variability. In this work, we analyze the long-term variability of blazars at radio, optical, and X-ray energies with the Gaussian process (GP) method. The multiwavelength variability can be successfully characterized by the damped random walk (DRW) model. The nonthermal optical characteristic timescales of 38 blazars are statistically consistent with the $\gamma$-ray characteristic timescales of 22 blazars. For three individual sources (3C 273, PKS 1510-089, and BL Lac), the nonthermal optical, X-ray, and $\gamma$-ray characteristic timescales are also consistent within the measured 95$\%$ errors, but the radio timescale of 3C 273 is too large to be constrained by the decade-long light curve. The synchrotron and inverse-Compton emissions have the same power spectral density, suggesting that the long-term jet variability is irrelevant to the emission mechanism. In the plot of the rest-frame timescale versus black hole mass, the optical and $\gamma$-ray timescales of the jet variability occupy almost the same space as the timescales of accretion disk emission from normal quasars, which may imply that the long-term variabilities of the jet and the accretion disk are driven by the same physical process. It is suggested that the nonthermal optical, X-ray, and $\gamma$-ray emissions are produced in the same region, while the radio core, which can be resolved by very-long-baseline interferometry, is located in a region far more distant from the black hole. Our study suggests a new methodology for comparing thermal and nonthermal emissions, achieved by using the standard GP method.
INTRODUCTION
Flat spectrum radio quasars (FSRQs) and BL Lac objects (BL Lacs) belong to a special class of active galactic nuclei (AGNs) called blazars, whose jets point nearly toward the Earth. Blazars are highly variable across the entire electromagnetic spectrum. One popular scenario is that accretion onto a supermassive black hole is the central engine driving the relativistic jet, but the detailed process is still unclear. Thanks to the high variability of blazars, one can investigate the physical processes close to the central engine (e.g., Rieger 2019), such as the location of the emitting region and the jet-disk connection (e.g., Ackermann et al. 2016; Meyer et al. 2019; Zhang et al. 2022).
Using advanced interferometric instruments, blazar radio jets can be directly resolved on ∼parsec scales (see Hovatta & Lindfors 2019, for a recent review). This provides a calibrator for multi-band variability analysis. Many works have attempted to investigate the underlying physical processes of blazar jets with multi-band variability (e.g., Chatterjee et al. 2012; Nakagawa & Mori 2013; Xiong et al. 2017; Goyal et al. 2018, 2022). Max-Moerbeck et al. (2014) investigated the time-domain relationship between the radio and γ-ray emission of blazars, and found that correlations exist only in a minority of the sources over a 4 yr period. They found the radio variations lagging the γ-ray variations, suggesting that the γ-ray emission originates upstream of the radio emission. This result was further verified by Liodakis et al. (2018), who concluded that the radio variation is usually substantially delayed relative to the other wavelengths for blazars. Bhatta (2021) analyzed the correlation between the optical (V-band) and γ-ray variabilities of blazars and found that the optical variability is highly correlated with the γ-ray variability except for 3C 273; however, no significant lag was found. Multi-band variability analysis can thus be considered an indirect approach to resolving the blazar jet.
The GP method has become popular in modern time-domain astronomy (e.g., Ryan et al. 2019; Burke et al. 2021; Yang et al. 2021; Griffiths et al. 2021; Covino et al. 2022; Rueda et al. 2022; Stone et al. 2022; Zhang et al. 2022). The GP method enables us to effectively extract information from astronomical variability. For example, Zhang et al. (2022) used a GP method to characterize the γ-ray variability of AGNs with a stochastic process. It was found that the DRW model can successfully fit the γ-ray variability, similar to the optical variability of AGN accretion disks (Kelly et al. 2009; Li & Wang 2018; Burke et al. 2021). Moreover, Zhang et al. (2022) suggested a connection between the jet and the accretion disk by comparing the rest-frame γ-ray timescales of blazars with the optical accretion disk timescales of quasars.
In this work, we analyze the multi-band variability of blazars with the GP method, which is independent of temporal correlation analysis. We hope to extract additional information from the variability. Using the data from the Fermi Large Area Telescope (Fermi-LAT), we recently carried out a systematic study of the γ-ray variability of AGNs (Zhang et al. 2022). The Small and Moderate Aperture Research Telescope System (SMARTS) monitoring program and the Steward Observatory (SO) spectropolarimetric monitoring project (Smith et al. 2009) provide almost ten years (from 2008 to 2018) of optical data on Fermi blazars. The RXTE AGN Timing & Spectral Database (Rivers et al. 2013) provides long-term X-ray data, and the Owens Valley Radio Observatory (OVRO) 40 m program (Richards et al. 2011) provides radio light curves (LCs) from 2008 to 2020. Using these public data, we analyze the radio, optical, and X-ray variability of three individual blazars, as well as the optical variability of a sample of 38 Fermi blazars. The remainder of this paper is organized as follows. In Section 2, we describe the data and the GP method. The modeling results of the three individual sources and the 38 blazars are shown in Section 3. We give discussions and physical interpretations of the results in Section 4. In Section 5, we conclude the paper with a brief summary.
Data and Sources
We use photometric data of blazars from the SMARTS and SO monitoring projects. The SMARTS program gives photometric data in five wavelength bands (B, V, R, J, K), taken with the 1.3 m telescope at the Cerro Tololo Inter-American Observatory. SO is a long-term optical program supporting the Fermi telescope, utilizing both the 2.3 m Bok Telescope on Kitt Peak and the 1.54 m Kuiper Telescope on Mt. Bigelow in Arizona. The campaign of the SO program spanned almost a decade, from 2008 November to 2018 July. The X-ray data come from RXTE observations, which provide 16 yr (1996-2012) of data in the 2-10 keV band. The OVRO 40 m program, a large-scale, fast-cadence 15 GHz radio monitoring program, gives radio data of blazars from 2008 to 2020. We select sources having long-term continuous observations and good sampling. For a source with a large gap in its LC, we only use the data covering the longer period before or after the gap for analysis. Finally, we have 38 blazars in the optical band, including 23 FSRQs and 15 BL Lacs. Three blazars (3C 273, BL Lac, and PKS 1510-089) have long-term RXTE X-ray data; they are also in the sample of selected optical sources. Unfortunately, among the three sources, only 3C 273 has an OVRO LC. Table 1 gives the general information of these targets.
Gaussian Process Method
GPs are a class of statistical models widely applied to modeling stochastic processes. For those interested in the stochastic behavior of astronomical variability, a GP provides a flexible way to model the LC with stochastic processes. The application of GPs to astronomical time series is discussed in a recent review (Aigrain & Foreman-Mackey 2022). Considering a data set of $y_n$ at coordinates $x_n$, the GP model consists of a mean function $\mu_\theta(x)$ parameterized by $\theta$ and a kernel (covariance) function $k_\alpha(x_n, x_m)$ parameterized by parameters $\alpha$ (Foreman-Mackey et al. 2017). For time-series data, the GP is one-dimensional and the coordinates are time, $x_n = t_n$. After constructing the likelihood function from the above ingredients, one can use Bayesian inference to estimate the posterior distribution over the parameter space. In practical applications, the key point is choosing the kernel function. The DRW process (called the Ornstein-Uhlenbeck process in physics) is widely used to describe the variability of AGNs (e.g., Burke et al. 2021), and it is defined by an exponential covariance function (e.g., Kelly et al. 2009; Zu et al. 2013), where $t_{nm} = |t_n - t_m|$ is the time lag between measurements $m$ and $n$. The amplitude term ($\sigma_{\rm DRW}$) represents the amplitude of the random disturbance, and the damping (characteristic) timescale ($\tau_{\rm DRW}$) represents the timescale on which the system returns to stability after a disturbance. Sometimes an excess white noise term ($\sigma_n^2 \delta_{nm}$, where $\sigma_n$ is the excess white noise amplitude and $\delta_{nm}$ is the Kronecker $\delta$ function) is needed when there is white noise in the LC in addition to the quoted measurement errors (Foreman-Mackey 2018; Burke et al. 2021). A more complex kernel is the stochastically driven damped simple harmonic oscillator (SHO), which is described by a second-order differential equation (Foreman-Mackey et al. 2017). The SHO kernel has been used to model AGN accretion disk (Yu et al. 2022) and jet (Zhang et al. 2022) variability.
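The DRW kernel described above maps directly onto celerite's exponential `RealTerm`. The sketch below is a minimal setup on a synthetic light curve; identifying the `RealTerm` amplitude with the square of the DRW amplitude is one common convention, and the exact normalization may differ from the one used in the papers cited here.

```python
import numpy as np
import celerite
from celerite import terms

# Synthetic stand-in for a light curve (time in days, flux/mag, uncertainties).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3000.0, 300))
yerr = np.full_like(t, 0.05)
y = rng.normal(0.0, 0.3, t.size)

# DRW (Ornstein-Uhlenbeck) kernel via celerite's RealTerm, k(dt) = a*exp(-c*dt),
# identifying a with the DRW variance and c with 1/tau_DRW.
sigma_drw, tau_drw = 0.3, 100.0
kernel = terms.RealTerm(log_a=2.0 * np.log(sigma_drw), log_c=-np.log(tau_drw))
kernel += terms.JitterTerm(log_sigma=np.log(0.1))  # optional excess white noise term

gp = celerite.GP(kernel, mean=float(np.mean(y)))
gp.compute(t, yerr)
print("initial log-likelihood:", gp.log_likelihood(y))
```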
The celerite software package is a GP tool for stationary processes (Foreman-Mackey et al. 2017). It exploits the semiseparable structure of a special class of covariance matrices to directly and efficiently compute the GP likelihood for large data sets. Yang et al. (2021) and Zhang et al. (2021, 2022) have tested the efficiency of this method for the study of AGN jet variability, and suggested that celerite has strong potential for studying AGN variability (see also Burke et al. 2021). Here, we use the DRW model implemented in the celerite package to model the multi-band variability of blazars.
The Markov Chain Monte Carlo (MCMC) sampler emcee is adopted to perform the posterior analysis. We assume log-uniform priors on each of the parameters. The MCMC sampler is run for 50,000 iterations with 32 parallel walkers, and the first 20,000 steps are taken as burn-in. After modeling the LCs, we estimate the fitting quality to assess whether the fitting results are reliable, e.g., whether the standardized residuals follow a Gaussian white-noise sequence.
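A minimal, self-contained sketch of this posterior analysis is given below, with the log-uniform priors implemented as hard bounds; the bounds, synthetic data, and starting values are illustrative and are not the values used in this work.

```python
import numpy as np
import emcee
import celerite
from celerite import terms

# Minimal GP setup (same form as the DRW sketch above) so this block stands alone.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3000.0, 200))
yerr = np.full_like(t, 0.05)
y = rng.normal(0.0, 0.3, t.size)
gp = celerite.GP(terms.RealTerm(log_a=-2.0, log_c=-4.0), mean=0.0)
gp.compute(t, yerr)

# Posterior sampling as described in the text: log-uniform priors (hard bounds),
# 32 walkers, 50,000 steps, first 20,000 discarded as burn-in.
def log_probability(params):
    if np.any(params < -10.0) or np.any(params > 10.0):
        return -np.inf
    gp.set_parameter_vector(params)
    return gp.log_likelihood(y)

ndim = len(gp.get_parameter_vector())
nwalkers = 32
p0 = gp.get_parameter_vector() + 1e-4 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
sampler.run_mcmc(p0, 50000)
chain = sampler.get_chain(discard=20000, flat=True)
tau_drw_samples = 1.0 / np.exp(chain[:, 1])   # tau_DRW = 1/exp(log_c)
```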
The power spectral density (PSD) can be constructed from the fitting results. The DRW PSD has the form $\mathrm{PSD}(f) \propto [1 + (2\pi \tau_{\rm DRW} f)^2]^{-1}$, i.e., a broken power law with slope 0 below the break frequency ($f_b$) and slope $-2$ above it. The conversion between the timescale $\tau_{\rm DRW}$ and $f_b$ is $\tau_{\rm DRW} = 1/(2\pi f_b)$. An LC with too coarse a cadence or insufficient length leads to a large bias in the characteristic timescale derived from the modeling. If the timescale is larger than the mean cadence of the LC and less than 1/10 of the length of the LC, the measurement of the damping timescale from the LC is reliable (Burke et al. 2021).
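The break-frequency relation and the reliability criterion above can be expressed compactly; in the small sketch below the PSD normalization is left as a proportionality and the function names are ours, not from the cited works.

```python
import numpy as np

def drw_psd_shape(f, tau_drw):
    """Shape of the DRW PSD (arbitrary normalization): flat below the break
    frequency f_b = 1/(2*pi*tau_DRW) and falling as f^-2 above it."""
    return 1.0 / (1.0 + (2.0 * np.pi * tau_drw * f) ** 2)

def timescale_is_reliable(tau_drw, t):
    """Burke et al. (2021) criterion: tau must exceed the mean cadence and be
    shorter than 1/10 of the light-curve baseline."""
    t = np.sort(np.asarray(t, dtype=float))
    baseline = t[-1] - t[0]
    mean_cadence = np.mean(np.diff(t))
    return mean_cadence < tau_drw < baseline / 10.0
```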
Results of 3C 273, PKS 1510-089 and BL Lac
We first analyze the multi-band variability of the three individual sources, 3C 273, PKS 1510-089, and BL Lac, and present the celerite fitting results of the LC for each source in the following. The measured timescales given in the main text are quoted with 95% confidence intervals.
For 3C 273, optical data in both the B and V bands are available. We show the modeling results in Figure 1, in which the left panel is for the B-band LC and the right for the V-band LC. The DRW model agrees well with both LCs. Judging from the distribution of the standardized residuals and the auto-correlation function (ACF) of the standardized residuals (see details in Zhang et al. 2022), we believe the characteristics of each LC have been captured successfully. Through MCMC sampling, we obtain the posterior probability density distributions of the two parameters ($\sigma_{\rm DRW}$ and $\tau_{\rm DRW}$) and show them in Figure 2. The values are listed in Table 2. The results differ between the two bands. The parameters can be constrained by the B-band data but with large uncertainties, e.g., $\tau_{\rm DRW} = 59^{+41}_{-28}$ days. Comparing this timescale with the cadence and the length of the LC, we believe that the B-band timescale is reliable. A break frequency corresponding to the characteristic timescale is seen in the B-band PSD (Figure 3). In contrast, the V-band timescale is ≈3200 days, very close to the length of the LC. This means that the V-band timescale is unreliable, which is also confirmed by the single power-law PSD (Figure 3). We show the modeling results of the X-ray LC, the posterior probability density distributions of the parameters, and the PSD in the corresponding figures. The values of the parameters can be found in Table 3. The DRW model can describe the X-ray variability of 3C 273, and the parameters are well constrained. The X-ray PSD presents a break frequency that corresponds to a timescale of $\tau_{\rm DRW} = 28^{+7}_{-6}$ days. We give the radio results together with the X-ray results. The radio PSD (the left panel of Figure 6) of 3C 273 is a single power law, and the radio timescale is too large to be reliable. For PKS 1510-089, the V- and B-band LCs can be described by the DRW model (Figures 7 and 8). The V-band $\tau_{\rm DRW}$ of $39^{+18}_{-14}$ days is larger than the B-band $\tau_{\rm DRW}$ of $11 \pm 3$ days (Table 4). As expected, we get a smaller value of $f_b$ in the V-band PSD (Figure 9). The X-ray LC of PKS 1510-089 can also be fitted well by the DRW model (Figure 10). The parameters are well constrained (Table 3), and the PSD is of the typical DRW form. A trusted timescale of $\tau_{\rm DRW} = 26 \pm 3$ days is obtained.
For BL Lac, only the V-band and X-ray data are available. Because there are two large gaps in the first 2800 days of the X-ray LC, we take the following 2500 days of the LC for analysis. When modeling the two LCs of BL Lac, we get poor fits (the ACF of the residuals deviates from white noise) with the two-parameter DRW model. An excess white noise term is therefore added to the DRW model, and we model the LCs again with the three-parameter DRW model. The modeling of the LCs, the posterior distributions of the parameters, and the broken power-law PSDs are shown in the corresponding figures. One can see that the three-parameter DRW can fit the LCs well. Note that the highest flux point in the LC is poorly fitted; after removing the highest flux point, the modeling results are unchanged. We obtain an X-ray timescale of $\tau_{\rm DRW} = 63^{+49}_{-30}$ days and a V-band $\tau_{\rm DRW}$ of $47^{+26}_{-19}$ days (Table 4). We also applied the SHO model to the optical and X-ray data of BL Lac. The fitting is not improved significantly, and the parameters cannot be constrained, which suggests that the SHO model is not a good choice. The DRW model including an additional white noise term can describe the variability behavior. The value of $\sigma_n^2$ (0.01 for the V-band LC; 0.04 for the X-ray LC) is larger than the square of the mean light-curve uncertainty, $\overline{\sigma_y}^2$, where $\overline{\sigma_y}^2 = 0.0001$ for the V-band and 0.0036 for the X-ray data. We have $\sigma_{\rm DRW}^2 > \sigma_n^2 + \overline{\sigma_y}^2$ for both the V-band and X-ray data, which ensures that the fitted DRW amplitude is reasonable (Burke et al. 2021). It is possible that the quoted measurement errors are underestimated, and the excess white noise can account for the excess measurement noise.
The γ-ray variability of the three sources has been analyzed in our previous work (Zhang et al. 2022) with the same method. We list the multi-band timescales of the three sources, with errors given as 95% confidence intervals, in Table 4. For 3C 273, the B-band, X-ray, and γ-ray timescales are consistent within the errors; the V-band and radio PSDs are single power laws and have no corresponding characteristic timescales. For PKS 1510-089, the V-band, X-ray, and γ-ray timescales are consistent within the errors, but the B-band one has a smaller value. For BL Lac, the V-band, X-ray, and γ-ray timescales are also consistent within the errors.
Optical Results of 38 Blazars
The DRW model can successfully fit the long-term optical LCs of the 38 blazars. Based on the criteria for selecting reliable measurements of the damping timescale, we obtain reliable optical timescales for all 38 blazars. The basic information of the 38 blazars and the modeling results are given in Table 1 and Table 2, respectively. Except for 3C 273 and PKS 1510-089, which are analyzed in Section 3.1, the timescales for the different optical bands are consistent for the other 36 sources. This indicates that the optical emission of these 36 blazars has the same origin, i.e., the jet emission. In Table 2, we only list one optical-band result for these sources. The timescales lie between 10 days and 200 days. Some notes should be given on PKS 2052-47 and Ton 599. The fit to the LC of PKS 2052-47 needs an additional white noise term, and the relation $\sigma_{\rm DRW}^2$ (0.16) $> \sigma_n^2$ (0.026) $+ \overline{\sigma_y}^2$ (0.0016) still holds. Ton 599 has large gaps and few data points in the first half of its V-band LC, so we select the second half of the LC for analysis.

Table note: (1) source name, (2) data source, where S is for SMARTS and SO is for the Steward Observatory blazar data archive. Table note: (1) source name, (2)-(5) multi-band damping timescales in the observed frame; the uncertainties of the damping timescales represent 95% confidence intervals of the parameter distribution.
Origin of the optical emission from 3C 273 and PKS 1510-089
The optical emission of 3C 273 and PKS 1510-089 is complicated. A blue bump can be seen in their multi-band spectral energy distributions (SEDs; e.g., Abdo et al. 2010; Nalewajko et al. 2012; Castignani et al. 2017). SED modeling results show that the accretion disk makes a significant contribution to the optical emission of 3C 273 and PKS 1510-089 (e.g., Nalewajko et al. 2012; Yan et al. 2012; Castignani et al. 2017). In addition, Zhang et al. (2019) found that a long-term variation trend in the optical continuum LC of 3C 273 does not appear in the emission-line variations. This suggests that the long-term variation trend is not contributed by the accretion disk, and it could originate from the jet. Li et al. (2020) quantitatively decoupled the optical emission of the jet and the accretion disk in 3C 273 and found that the jet emission accounts for 10%-40% of the total optical emission. Pandey et al. (2022) studied the correlation between V-band flux and polarization degree (PD) variations using SO observations during 2008-2018. They found a significant positive correlation only in two of the ten observing cycles. Note that the PD is quite small, changing from 0.04% to 1.58% during 2008-2018. The V-band single power-law PSD we obtained here is different from the typical PSD of accretion disk (Suberlak et al. 2021; Burke et al. 2021) and jet variability (Zhang et al. 2022). The complicated mixture of the jet and accretion disk emission in the V-band may result in the single power-law PSD. The mixed emission also results in the weak correlation between V-band and Fermi γ-ray variabilities reported by Bhatta (2021). We find no significant correlation between the B-band variability and the γ-ray variability for 3C 273 and PKS 1510-089. Considering the location of the blue bump in the SED (Roy et al. 2021), we suggest that the B-band emission of 3C 273 is dominated by accretion disk photons.
For PKS 1510-089, the V- and B-band timescales are clearly different, indicating different origins for the emission in the two bands. The V-band polarization of PKS 1510-089 is on average greater than that of 3C 273, varying from 0.2% to 25.82% (Pandey et al. 2022). Among the ten observing cycles during 2008-2018, a significant positive correlation between V-band flux and PD variations is found in 5 cycles. Moreover, Castignani et al. (2017) found a good correlation between the long-term SO V-band and γ-ray LCs. These results suggest that the V-band emission is dominated by the jet contribution. Also considering the location of the blue bump in the SED (Nalewajko et al. 2012), the B-band emission, with its smaller timescale of 11 days, is suggested to be the accretion disk contribution.

Table note: (1) waveband, (2) the range of black hole mass in solar masses, (3) the mean damping timescale (redshift-corrected) in units of days; the uncertainties of the timescales represent 1σ confidence intervals.
Comparing Optical and γ-ray results
Long-term Fermi γ-ray LCs of 22 blazars have been analyzed by Zhang et al. (2022) with the same GP method. The optical timescales in this work are generally consistent with the γ-ray timescales (Figure 14). We examine the consistency of the timescales in the two energy bands using a statistical significance test (T-test). We get a t-statistic of 1.1 and a p-value of 0.28 (>0.05), which means that, statistically, there is little difference between the two groups of timescales. The optical amplitude term $\sigma_{\rm DRW}$ is less than one, while the γ-ray $\sigma_{\rm DRW}$ can be greater than 10. This means that the γ-ray variability can be more energetic than the optical variability.
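The quoted T-test can be reproduced with scipy; the arrays below are random placeholders standing in for the measured timescales, so only the procedure, not the numbers, is meaningful here.

```python
import numpy as np
from scipy import stats

# Two-sample T-test between the optical and gamma-ray damping timescales (days).
rng = np.random.default_rng(1)
tau_optical = rng.uniform(10, 200, 38)   # placeholder for the 38 optical timescales
tau_gamma = rng.uniform(10, 200, 22)     # placeholder for the 22 gamma-ray timescales

t_stat, p_value = stats.ttest_ind(tau_optical, tau_gamma)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p > 0.05 -> no significant difference
```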
We separated the sources into two groups, with $M_{\rm BH} < 10^9 M_\odot$ and $M_{\rm BH} > 10^9 M_\odot$. The mean timescales (redshift-corrected) in the different ranges of black hole mass are listed in Table 5. It is found that the mean timescale of the sources in the mass range of $10^9$-$10^{10} M_\odot$ is smaller in both the γ-ray and optical bands. However, we have only a few sources with masses of $10^9$-$10^{10} M_\odot$, so this result may be tentative.
In Figure 15, we plot the relationship between the damping timescale in the rest frame ($\tau^{\rm rest}_{\rm damping}$) and the black hole mass of blazars, along with the results of normal quasars from Burke et al. (2021). The timescales are converted to the rest frame as $\tau^{\rm rest}_{\rm damping} = \tau^{\rm obs}_{\rm damping}\,\delta_{\rm D}/(1+z)$, correcting for both Doppler boosting and cosmological time dilation. An average Doppler factor of $\delta_{\rm D} = 10$ is used here, and the redshift z of each source is given in Table 1. We show the optical, X-ray, and γ-ray results in the plot. It is found that the nonthermal optical $\tau^{\rm rest}_{\rm damping}$ of blazars and the thermal optical timescales of normal quasars occupy the same space in the $\tau^{\rm rest}_{\rm damping}$-$M_{\rm BH}$ plot. The X-ray results for the three individual blazars lie in the same area as the optical results. The B-band timescale of 3C 273 is a typical value for an accretion disk timescale, whereas the B-band timescale of PKS 1510-089 is an outlier among the accretion disk timescales; it significantly deviates from the relation between damping timescale and black hole mass reported by Burke et al. (2021).

Figure 15. Plot of the rest-frame timescale versus black hole mass. The gray data, lines, and area represent the optical accretion disk results for normal quasars taken from Burke et al. (2021). Red data are γ-ray results for blazars taken from Zhang et al. (2022), and the purple and blue data respectively represent the optical and X-ray results for blazars obtained in this work.
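A small helper applying this rest-frame correction is sketched below; the formula assumed here is the standard Doppler and redshift correction described above, with δ_D = 10 by default.

```python
def rest_frame_timescale(tau_obs_days, z, doppler_factor=10.0):
    """Convert an observed damping timescale to the jet rest frame, assuming the
    standard correction tau_rest = tau_obs * delta_D / (1 + z)."""
    return tau_obs_days * doppler_factor / (1.0 + z)

# Example: an observed 60-day timescale for a source at z = 0.158 (3C 273-like)
# corresponds to a rest-frame timescale of ~518 days for delta_D = 10.
tau_rest = rest_frame_timescale(60.0, 0.158)
```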
It is difficult to directly resolve the inner jet structure of a blazar. In particular, the location of the high-energy emission region is still an open question (e.g., Madejski & Sikora 2016; Böttcher 2019). Multi-band variability analysis provides an indirect approach to resolving the emission regions. The cross-correlation method is frequently used in multi-band variability analysis (e.g., Liodakis et al. 2018; Bhatta 2021).
The GP method has been widely used to characterize AGN accretion disk variability (Kelly et al. 2009; Zhang et al. 2018; Lu et al. 2019; Burke et al. 2021). In blazar science, it has become popular in recent years (e.g., Goyal et al. 2018; Ryan et al. 2019; Covino et al. 2020; Tarnopolski et al. 2020; Yang et al. 2021; Zhang et al. 2022). In this work, we use the GP method to study the multi-band variability of blazars. This provides results independent of the cross-correlation method.
The γ-ray variability of the blazar has been studied by Zhang et al. (2022) with the GP method. Here we focus on the X-ray and optical variability of the blazar. Multi-band emission from the blazar is dominated by the nonthermal jet contribution. Two special blazars are 3C 273 and PKS 1510-089. An optical-ultraviolet bump appears in their SED, which is associated with their thermal accretion disk emission (e.g., Nalewajko et al. 2012;Yan et al. 2012;Castignani et al. 2017).
We fit the long-term optical LCs from the SO and SMARTS databases with the DRW model. Finally, 38 blazars with reliable characteristic timescales are selected. Except for 3C 273 and PKS 1510-089, the timescales in different optical colors agree with each other for the remaining 36 blazars. This indicates that the emission in different optical colors of these 36 blazars has the same origin, i.e., the jet emission. Ruan et al. (2012) modeled the optical LCs of 51 blazars covering 2002 December through 2008 March using the DRW model. They found that the observed damping timescale peaks at ∼80 days, and the intrinsic timescale $\tau^{\rm rest}_{\rm damping}$ peaks at ∼800 days. The distribution of the optical timescales obtained in this work is flat (Figure 14), and the average optical $\tau^{\rm rest}_{\rm damping}$ is ∼400 days, which is smaller than the result of Ruan et al. (2012). All blazars in our sample are Fermi-detected γ-ray sources, whereas the sample studied by Ruan et al. (2012) would be dominated by blazars without Fermi detections. Therefore, the results indicate that the optical timescale of non-Fermi-detected blazars may be longer than that of Fermi-detected blazars. Xiong et al. (2015) found that the two blazar populations indeed have different physical properties; for example, non-Fermi-detected blazars have smaller Doppler factors (Paliya et al. 2017).
In the reverberation mapping studies of 3C 273 and PKS 1510-089, a non-echoed long-term trend is found in the optical continuum LC (Li et al. 2020; Rakshit 2020). This reveals the mixed origin of their optical emission. New clues about the origin of the optical emission can be found in our results. The V- and B-band timescales of PKS 1510-089 are different. Its long-term V-band variability is correlated with the γ-ray variability (Castignani et al. 2017), suggesting that the V-band emission is dominated by the jet contribution. The long-term polarization variation (Pandey et al. 2022) also supports the nonthermal component being dominant in the V-band. The V-band emission of 3C 273 seems to be more complicated: the jet contribution to the V-band emission may be strongly time-dependent and may vary over a large range. This complicated mixture of jet and accretion disk emission results in a single power-law PSD. For the two sources, no significant correlation is found between the B-band and γ-ray variabilities in our analysis. The B-band emission is naturally considered to be the accretion disk contribution. For 3C 273, the B-band timescale of ≈60 days is a typical value for the accretion disk emission of normal quasars, while the B-band timescale of ≈11 days for PKS 1510-089 is significantly smaller and deviates from the $\tau^{\rm rest}_{\rm damping}$-$M_{\rm BH}$ relation of Burke et al. (2021) (Figure 15). This short timescale may imply special properties of its accretion disk.
The nonthermal optical, X-ray, and γ-ray variabilities all have the typical DRW PSD. Namely, the PSD of the synchrotron emission is the same as that of the inverse-Compton (IC) emission, consistent with simulations using a time-dependent one-zone leptonic blazar emission model (Thiersen et al. 2022). In other words, the long-term jet variability is irrelevant to the underlying emission mechanism. Burke et al. (2021) suggested that the DRW damping timescale measured from the accretion disk variability of normal quasars could be associated with the thermal instability timescale expected in the standard AGN accretion disk theory. Zhang et al. (2022) measured the γ-ray DRW damping timescales of AGNs from the Fermi-LAT data, and found that the γ-ray timescales of 23 AGNs occupy almost the same space as the optical variability timescales of normal quasars in the $\tau^{\rm rest}_{\rm damping}$-$M_{\rm BH}$ plot. In this work, we add the nonthermal optical timescales of blazars to this plot. The nonthermal optical timescales of blazars also lie in the same region as the thermal optical timescales of normal quasars (Figure 15). This implies that the jet variability is related to the accretion disk. The thermal instability in the accretion disk may cause not only the accretion disk variability but also the multi-band jet variability. Statistically, the nonthermal optical $\tau^{\rm rest}_{\rm damping}$ of the 38 blazars are consistent with the γ-ray $\tau^{\rm rest}_{\rm damping}$ of the 22 blazars. Individually (3C 273, PKS 1510-089, and BL Lac), the damping timescales of the jet variability in the optical, X-ray, and γ-ray bands are consistent within the measured errors. Our results indicate that the multi-band jet emissions are produced in the same region. However, we still cannot determine the distance from the emission region to the central black hole. Radio observations are helpful for constraining this distance (Max-Moerbeck et al. 2014). We modeled the OVRO radio LCs covering over ∼ten years and obtained a single power-law PSD. In this work, we only show the radio result for 3C 273 as an example. We also modeled the 30-yr radio LCs of 3C 279 and 3C 454.3 obtained from the Aalto University Metsähovi Radio Observatory, and we still get an unconstrained timescale. The results indicate that the radio timescale is very large and may exceed 10 years. Through very long baseline interferometry (VLBI) observations, one can determine the distance from the radio core to the central black hole. Comparing the optical/X-ray/γ-ray timescales with the radio timescale, we can infer that the optical/X-ray/γ-ray emission region is far upstream of the radio core.
SUMMARY
We analyze the blazar's radio, optical, and X-ray variabilities using the GP tool celerite. The DRW model can successfully fit the jet multi-band variabilities. The multi-band characteristic timescale is used to probe the structure of the emission region in the blazar jet. Our main results are as follows.
(i) The synchrotron and IC emissions have the same PSD, i.e., the typical DRW PSD. This indicates that the jet's long-term variability is irrelevant to the underlying emission processes. In the $\tau^{\rm rest}_{\rm damping}$-$M_{\rm BH}$ plot, the jet timescales occupy almost the same space as the accretion disk timescales of normal quasars, implying that the jet and accretion disk variability are driven by the same physical process (Zhang et al. 2022).
(ii) The nonthermal optical, X-ray, and γ-ray variabilities have consistent characteristic timescales. The radio characteristic timescale is so long that it cannot be constrained even by decades-long LCs. The results indicate that the nonthermal optical, X-ray, and γ-ray emission is produced in the same region, which is upstream of and far from the radio core. This supports the basic hypothesis of the standard synchrotron self-Compton jet model.
The GP method provides a flexible approach to understanding the variability pattern of AGNs in the framework of stochastic processes. Adopting the standard GP tool (Foreman-Mackey et al. 2017), we build a link between the accretion disk (thermal emission) and the jet (nonthermal emission), i.e., Figure 15. This is a new methodology for comparing thermal and nonthermal emissions, in addition to the comparison between thermal and nonthermal luminosities (e.g., Ghisellini et al. 2011; Sbarrato et al. 2012; Ghisellini et al. 2014).
"Physics"
] |
Tilted femtosecond pulses for velocity matching in gas-phase ultrafast electron diffraction
Recent advances in pulsed electron gun technology have resulted in femtosecond electron pulses becoming available for ultrafast electron diffraction experiments. For experiments investigating chemical dynamics in the gas phase, the resolution is still limited to picosecond time scales due to the velocity mismatch between laser and electron pulses. Tilted laser pulses can be used for velocity matching, but thus far this has not been demonstrated over an extended target in a diffraction setting. We demonstrate an optical configuration to deliver high-intensity laser pulses with a tilted pulse front for velocity matching over the typical length of a gas jet. A laser pulse is diffracted from a grating to introduce angular dispersion, and the grating surface is imaged on the target using large demagnification. The laser pulse duration and tilt angle were measured at and near the image plane using two different techniques: second harmonic cross correlation and an interferometric method. We found that a temporal resolution on the order of 100 fs can be achieved over a range of approximately 1 mm around the image plane.
Introduction
Recent improvements in electron pulse technology have resulted in tabletop sources capable of delivering femtosecond pulses to a target [1,2]. This has enabled ultrafast electron diffraction (UED) experiments on solid samples with a resolution on the order of 200 fs [2]. In these experiments, thin (submicrometer) samples were used to capture diffraction patterns in transmission mode. Another important application of UED is investigating ultrafast chemical reactions in isolated molecules. For these experiments, the target is a gas beam with a diameter typically between 0.1 and 1 mm. The group velocity mismatch between the laser and electrons results in a blurring of the resolution as the pulses traverse the gas jet [3]. The group velocity mismatch limits the resolution of most experiments to several picoseconds [4][5][6], with the highest resolution of 850 fs achieved using a microjet with a diameter of only 0.1 mm [7]. A laser pulse with a tilted intensity front was used to match the velocity of the laser and electrons on the surface of a solid sample [8]. However, the matching has only been demonstrated at a single plane along the propagation direction; therefore, it is not clear that it can be applied to the problem of gas UED, in which the target is extended. For example, electrons with a kinetic energy of 100 keV travel at a speed of 0.55 c, where c is the speed of light in vacuum. A laser pulse propagating at an angle of 57° with respect to the electron beam, with a pulse front tilted at the same angle, would match the speed of the electrons. However, the velocity must be matched throughout the length of the gas target, and both the tilt angle and the duration of the laser pulse change as the pulse propagates. In this paper, we investigate this issue in detail and show experimentally that, with the appropriate optical design, it is possible to achieve a resolution on the order of 100 fs.
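The 57° figure follows from projecting the laser group velocity onto the electron propagation direction; a minimal sketch of that estimate is given below (the helper name is ours, not from the references).

```python
import numpy as np

C = 299_792_458.0          # speed of light (m/s)
M_E_KEV = 510.998_95       # electron rest energy (keV)

def electron_beta(kinetic_energy_keV):
    """Relativistic electron speed v/c for a given kinetic energy."""
    gamma = 1.0 + kinetic_energy_keV / M_E_KEV
    return np.sqrt(1.0 - 1.0 / gamma**2)

beta = electron_beta(100.0)                 # ~0.55 for 100 keV electrons
theta = np.degrees(np.arccos(beta))         # laser angle w.r.t. the electron beam
print(f"beta = {beta:.3f}, matching angle = {theta:.1f} deg")   # ~57 deg
```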
A tilted laser pulse has an intensity front that is tilted with respect to the direction of propagation. The component of the laser group velocity in the direction normal to the intensity front depends on the tilt angle [9]. This dependence can be exploited to match the normal laser velocity to the velocity of either electrons or electromagnetic waves traveling more slowly through a medium. Several applications of tilted pulses have been reported in the literature. Tilted pulses have been used to match the group velocity of a pump laser to the phase velocity of terahertz waves to increase the efficiency of optical rectification [10][11][12]. In x-ray laser generation, a tilted laser front, generated by step mirrors or a slightly misaligned laser compressor, was used to provide optimum preplasmas for a traveling wave pumped by a second laser pulse [13][14][15]. The angular dispersion used to generate a tilted pulse front also results in a rotation of the pulse front as the beam focuses and defocuses [16,17]. Femtosecond laser pulses with such rotating fronts have recently been used for electron wakefield acceleration and the generation of attosecond pulses [18][19][20].
For gas-phase UED experiments, the laser beam is focused to a spot size below 0.5 mm to achieve high fluence and minimize the group velocity mismatch. Thus, in addition to a large tilt angle, a large demagnification factor is required. In our configuration, a grating provides the angular dispersion. The grating surface is imaged onto the target with a demagnification factor (M) of 12.7. At the image plane, the diffracted components will recombine into a short pulse, as long as the diffracted beam is normal to the grating surface. However, a longer pulse duration is expected both before and after the image plane. We use an optical setup with a long Rayleigh length, about 2 mm, to lessen the broadening of the pulse duration as it traverses the target. Optical aberrations might also result in increased pulse duration across the beam, even at the image plane. For gas-phase UED experiments, the pulse duration must remain short over the length of the target. We measured the tilt angle and pulse duration as a function of the distance from the image plane and at several positions laterally across the beam. We used a simple technique that consists of measuring the interference between the tilted pulse and a known reference pulse [21]. These results have been compared to those from a previously demonstrated technique using second harmonic generation (SHG) between the tilted pulse and a reference pulse [22]. The interferometric technique is more convenient for in situ measurements because it requires only a detector (or screen) to be placed at the position of the measurement.
Theory
A laser pulse with a tilted front can be generated by propagation through a prism [23,24] or by diffraction from a grating [24]. In the case of diffraction from a grating, the tilt angle γ is given by tan γ = M λ₀ Ψ (equation (1)) [22,24], where Ψ = dθ_out/dλ is the angular dispersion, θ_out is the angle of the diffracted beam with respect to the grating normal, λ₀ is the central wavelength of the laser pulse, and M is the demagnification factor. If no demagnification is involved, M is 1. The angular dispersion depends on the diffraction order and the grating constant, Ψ = k d / cos θ_out (equation (2)), where d is the grating constant (groove density) and k is the diffraction order. In our experiments, we used the first-order diffracted beam (k = 1). If the beam is demagnified using imaging optics, M is a function of position and is given by the ratio of the diameter of the laser beam on the grating to the diameter of the laser beam at the position where the tilt is measured. The angular dispersion Ψ generated by the grating also results in a temporal chirp, such that the pulse duration increases as the pulse propagates away from the grating. The pulse duration τ at a distance z from the grating is given by equation (3) and exceeds the original pulse duration τ₀ (60 fs in our experiments), with the broadening growing with both z and Ψ [17,22]. To recreate a short pulse at a target position, the grating surface must be imaged at this position. It has been shown that a nonzero value of θ_out results in temporal aberrations [22], as the distance to the image plane then differs across the beam. The requirement θ_out = 0 prevents us from using the experimental setup shown in references [15,18]. In [22], the pulse duration on the image plane was measured for M = 1. Here, we investigate the case of large demagnification and measure the pulse duration both across the beam and as a function of distance to the focal plane. Figure 1 shows the experimental setup used to generate and measure the tilted pulses. The tilt angle and pulse duration were measured using a cross correlation between the tilted pulse and a (nontilted) reference pulse using SHG, or by directly measuring the interference between the two pulses. For the SHG method, the two pulses overlapped in a thin barium borate (BBO) crystal placed at the image plane (see inset in figure 1). Due to the tilt in one of the pulses, the two pulses spatially overlapped only along a narrow strip. The width of the spatial overlap region depended upon the duration of the pulses. A second harmonic signal was generated in the region where the two pulses overlapped. This region was imaged onto a charge-coupled device (CCD) camera, while light at the fundamental frequency was blocked with a filter. The tilt angle was measured by recording the horizontal shift in the overlap region as a function of the delay of the reference pulse. The BBO crystal and CCD were translated together along the direction of the optical beam to measure the tilt angle and pulse duration as a function of distance from the image plane. For the interferometric measurement, the CCD camera was placed directly at the position where the pulses overlapped. Instead of measuring the width of the region where SHG was observed, the region of interference was measured directly.
Experimental setup
The laser system delivered 60 fs pulses at a central wavelength of λ₀ = 800 nm and a pulse energy of 2 mJ, with a repetition rate of 5 kHz. The laser beam was attenuated and then evenly split into two beams with a polarizing beam splitter (BS1 in figure 1). One beam was diffracted from a reflection grating to introduce angular dispersion. The laser beam incident on the grating was slightly elliptical, with a full width at half maximum (FWHM) of 7.0 mm in the vertical direction and 6.0 mm in the horizontal direction.
The diffraction grating was a gold-coated holographic grating with a grating constant of d = 150 mm⁻¹. This small grating constant was used to achieve the desired tilt angle with a large demagnification factor. Using the grating formula, sin θ_in + sin θ_out = k λ₀ d, we calculated that an incident angle of θ_in = 6.9° results in the diffracted beam being normal to the grating surface (θ_out = 0). In this configuration, the diffraction efficiency of the grating into the first order was 80%. The reference beam traveled through a variable delay line to adjust its time of arrival at the BBO crystal. Both beams were focused onto the BBO with a 23 cm focal length lens (L₁). The distance between the grating and the lens was S₀ = 315 cm. Using the lens equation, 1/S₀ + 1/S_i = 1/f, the image of the grating is formed at S_i ≈ 24.8 cm behind the lens, giving a demagnification M = S₀/S_i = ϕ₀/ϕ_i ≈ 12.7, where ϕ₀ and ϕ_i are the laser beam diameters on the grating and on the image plane, respectively. The beam size on the image plane was elliptical with FWHM of 0.56 and 0.47 mm. The tilt angle (γ) was 56.8°. The value of M (and thus the tilt angle) could be adjusted to match electron pulses with different kinetic energies by changing the distances. The BBO crystal used for SHG had a thickness of 0.20 mm. A FGB39 band-pass filter was used to block the fundamental wave and transmit the second harmonic. The surface of the BBO crystal was imaged using lens L₂ (2.4 cm focal length) onto a CCD detector with four times magnification. For the SHG measurements, data were recorded for several positions before and after the image plane by moving the BBO crystal, lens L₂, and the detector together along the direction of beam propagation. For the simpler interferometric measurements, only the detector position had to be varied. A small vertical angle was introduced between the tilted and reference pulses, which created interference fringes. The pulse duration was also measured at five different positions across the beam by changing the delay between the pulses. In the following, the displacement from the image plane is Δz = z_L1 − S_i, where z_L1 is the distance from lens L₁ to the detector in the interferometric method or to the BBO crystal in the SHG method. The experimental results from the interferometric and SHG techniques are in good agreement. The tilt angle decreased with increasing Δz because the beam diameter was increasing. The minimum beam diameter occurs before the image plane. The zero of Δz was defined as the position of the image plane. We treat the light field on the image plane as an image of the light field immediately leaving the grating, such that the phase modulation imprinted by the grating on the beam has a spatial frequency increased by a factor of M.
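The numbers quoted in this section can be cross-checked with the standard thin-lens and grating relations. The sketch below is a hedged verification, not part of the original analysis: it assumes the pulse-front-tilt relation tan γ = M λ₀ Ψ with Ψ = k d / cos θ_out (d taken as the groove density of 150 mm⁻¹), i.e. the conventional form of the relations reconstructed in the Theory section.

```python
import math

# Values quoted in the text
f_lens = 23.0            # focal length of L1, cm
S0 = 315.0               # grating-to-lens distance, cm
groove_density = 150e3   # grating constant, lines per metre (150 mm^-1)
lam0 = 800e-9            # central wavelength, m
k = 1                    # diffraction order
theta_out = 0.0          # diffracted beam normal to the grating surface

# Thin-lens imaging: 1/S0 + 1/Si = 1/f  ->  image distance and demagnification
Si = 1.0 / (1.0 / f_lens - 1.0 / S0)     # ~24.8 cm
M = S0 / Si                               # ~12.7

# Incidence angle from the grating equation sin(theta_in) + sin(theta_out) = k*lam0*d
theta_in = math.degrees(math.asin(k * lam0 * groove_density - math.sin(theta_out)))  # ~6.9 deg

# Pulse-front tilt on the image plane: tan(gamma) = M * lam0 * Psi, with Psi = k*d/cos(theta_out)
Psi = k * groove_density / math.cos(theta_out)
gamma = math.degrees(math.atan(M * lam0 * Psi))  # ~56.7 deg (56.8 deg quoted in the text)

print(f"Si = {Si:.1f} cm, M = {M:.1f}, theta_in = {theta_in:.1f} deg, gamma = {gamma:.1f} deg")
```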
Assuming a Gaussian beam away from the focus, the divergence angle (α) after the focal plane can be approximated from the beam geometry, where S is the distance from the focal plane to the image plane and R(Δz) is the laser beam radius as a function of Δz. We expect this to be a good approximation because S is significantly larger than the Rayleigh length (2 mm). The demagnification relative to the beam size at the image plane, and hence the tilt angle near the image plane (equation (4)), follows from the ratio R_i/R(Δz), where R_i is the radius of the beam at the image plane. There is good agreement between the theoretical and experimental results. The line plotted in figure 2 for the theory does not include any fitted parameters. The tilt angles measured with the interferometric method are consistently larger than those measured with the SHG method. We attribute this to the uncertainty in finding the position of the image plane (position 0 in figure 2). This uncertainty is on the order of ±200 μm for the interferometric method. With the SHG method, the position of the image plane can be determined more accurately because the overlap region is imaged onto the CCD with magnification. For Δz values that are small compared to S, the tilt angle (γ) varies at a rate of approximately −1.5°/mm. A change in tilt angle on the order of 1° would not significantly affect the velocity mismatch or temporal resolution of the experiment. For example, if the tilt angle were off by 1°, the velocity difference would be only 0.015 c.
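The 0.015 c figure can be recovered from a first-order expansion of the matching geometry, Δv ≈ c sin γ Δγ; this expansion is our own small-angle estimate, stated here only as a consistency check of the sensitivity quoted above.

```python
import math

gamma0 = math.radians(57.0)   # nominal crossing / pulse-front tilt angle
dgamma = math.radians(1.0)    # assumed 1 degree tilt error

v_matched = math.cos(gamma0)        # matched sweep velocity along the electron path, ~0.55 c
dv = math.sin(gamma0) * dgamma      # first-order velocity change for a 1 deg tilt error, ~0.015 c

print(f"matched velocity = {v_matched:.3f} c, mismatch for 1 deg error = {dv:.3f} c")
```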
Pulse duration
In the SHG measurement, the pulse duration (τ_T) of the tilted pulse was obtained from the FWHM of the spatial overlap of the two beams, ΔL (figure 3), together with the pulse duration τ_R of the reference pulse. The tilted pulse duration τ_T(Δz) at different positions is deduced by removing, in quadrature, the contributions of τ_R and of the camera resolution τ_camera [22]. Here, τ_camera depends on the pixel size of the CCD camera; our camera has a pixel size of 5.2 μm. For the interferometric method, τ_T can also be extracted from the measured width of the interferometric cross correlation (ΔL). Assuming that the beams have a Gaussian temporal profile, the electric fields of the reference (ε_R) and tilted (ε_T) pulses at a specific z coordinate can be written as Gaussian pulses, with a term that describes the time delay of the tilted pulse front across the beam (equation (6)). The interference intensity at the detector is given by equation (7), where c.c. denotes the complex conjugate; the first and second terms make a constant contribution (C₀) to the intensity, whereas the third and fourth terms represent the interference. Substituting equation (6) into equation (7), we obtain the width of the interference region in terms of the pulse durations. Figure 4 shows the experimental and calculated pulse widths [τ_T(Δz)] over a region of approximately 3 mm before and after the image plane. There is good agreement both between the two experimental methods and between experiment and theory. The calculation was performed using equation (3), with the displacement and tilt angle as described in section 4.1. The pulse duration reaches a minimum of 66 fs at the image plane, compared to the initial pulse duration of 60 fs. The small broadening of the pulse duration may be due to aberrations or dispersion in the optical system. The pulse duration increases to 300 fs at a distance of 3 mm after the image plane. The pulse duration increases more rapidly for displacements toward the focus because the beam diameter becomes smaller and thus the tilt angle and dispersion are larger.
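If, as assumed above, the individual contributions have Gaussian profiles, the tilted-pulse duration follows from a quadrature deconvolution of the measured width. The snippet below illustrates only that arithmetic; the numerical values in it are placeholders, not measured values from this work.

```python
import math

def deconvolve_quadrature(measured_fs, reference_fs, camera_fs):
    """Tilted-pulse duration assuming Gaussian contributions that add in quadrature."""
    val = measured_fs**2 - reference_fs**2 - camera_fs**2
    if val < 0:
        raise ValueError("measured width smaller than the combined instrument response")
    return math.sqrt(val)

# Illustrative numbers only (not values from the paper)
tau_measured = 95.0    # overlap/interference width converted to time, fs
tau_reference = 60.0   # reference pulse duration, fs
tau_camera = 10.0      # camera pixel-size contribution, fs
print(f"tau_T = {deconvolve_quadrature(tau_measured, tau_reference, tau_camera):.0f} fs")
```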
At each position, the pulse duration was measured at five lateral positions across the beam, and it was found to vary by approximately 5% across the beam with the minimum near the beam center, which means that aberrations do not cause significant broadening transversely in our configuration. Figure 5 shows the pulse duration as a function of the lateral position from the beam center and as a function of distance for a region near the image plane.
Damage threshold of grating
The full characterization of the pulse duration and the good agreement with theory give us confidence that this method can be applied successfully to UED and other experiments. With the current setup, the main limitation on the intensity that can be reached at the target (the image plane) is the damage threshold of the grating. A tighter focusing geometry would lead to higher intensity, but would also result in a reduced Rayleigh length that would make it more difficult to maintain velocity matching over an extended target. Thus, the best way to increase the intensity is to keep the same configuration but reduce the size of the beam incident on the grating. To determine how far the intensity could be increased with our current configuration, we measured the damage threshold of the grating. The laser pulse was focused to an area of 4.5 × 10⁻³ cm² on the grating. The grating surface was imaged on a CCD camera with three times magnification. The pulse energy was increased in small steps and the grating was exposed to the laser for 40 min at each step. Damage was defined as any change in the diffracted laser beam. For a pulse energy of 0.54 mJ, a damaged spot was observed after 2 min of irradiation. For an energy of 0.50 mJ, a damaged spot was visible after approximately 40 min. For a pulse energy of 0.40 mJ, no damage was observed even after 2 h of irradiation. Thus, we conclude that the grating can be safely operated at a fluence of 90 mJ cm⁻², with the damage threshold at or below 110 mJ cm⁻². For a pulse duration of 60 fs, safe operation corresponds to a maximum intensity of 1.5 × 10¹² W cm⁻² on the grating and 2.4 × 10¹⁴ W cm⁻² on the image plane.
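The fluence and intensity figures above follow directly from the quoted pulse energy, spot area, pulse duration and demagnification; the short calculation below reproduces them, assuming that the intensity scales with the square of the demagnification factor (area scaling) when going from the grating to the image plane.

```python
# Fluence and intensity estimates for the grating damage test (values from the text)
pulse_energy_mj = 0.40      # highest pulse energy with no observed damage
spot_area_cm2 = 4.5e-3      # illuminated area on the grating
pulse_duration_s = 60e-15   # laser pulse duration
demag = 12.7                # demagnification onto the image plane

fluence = pulse_energy_mj * 1e-3 / spot_area_cm2     # ~0.089 J/cm^2 (the ~90 mJ/cm^2 quoted above)
intensity_grating = fluence / pulse_duration_s        # ~1.5e12 W/cm^2
intensity_image = intensity_grating * demag**2        # ~2.4e14 W/cm^2 (spot area shrinks by M^2)

print(f"fluence = {fluence * 1e3:.0f} mJ/cm^2")
print(f"I_grating = {intensity_grating:.1e} W/cm^2, I_image = {intensity_image:.1e} W/cm^2")
```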
Implication for gas-phase UED experiments
Our results indicate that it is possible to reach a temporal resolution on the order of 100 fs for gas-phase UED experiments using tilted laser pulses to compensate for the group velocity mismatch. For example, let us consider the broadening that can be expected for a typical gas jet with a diameter between 0.5 and 1 mm. For a UED experiment without pulse tilting, the velocity mismatch would limit the resolution to several picoseconds [25]. With pulse tilting, there are two contributions to the broadening: the increase in the laser pulse duration and the remaining velocity mismatch due to the change in the tilt angle of the laser as it traverses the target. In our measurements, we started with a laser pulse duration of 60 fs at the output of the laser, and measured a pulse duration of 66 fs at the image plane of the grating. The average pulse duration over a 0.5 mm distance around the image plane was 71 fs, and it increased to 78 fs when averaged over 1 mm. For the effect of the remaining velocity mismatch, we consider that the angle changes at a rate of 1.5°/mm. If the tilt angle is off by 1°, the velocity mismatch between the laser and the electrons will be 0.015 c. This will lead to a broadening of the resolution of about 60 fs mm⁻¹. The total resolution of the experiment can be calculated as T_total = (t_Laser² + t_GVM² + t_Electron²)^(1/2), where the three terms represent the laser pulse duration, the broadening due to group velocity mismatch, and the electron pulse duration, respectively. For an electron pulse duration of 100 fs, the overall temporal resolution will be 126 fs for a target diameter of 0.5 mm and 140 fs for a target diameter of 1 mm. For longer electron pulses, the resolution will be determined mainly by the electron pulse duration. If shorter electron pulses are available, a resolution of 50 fs is within reach by using shorter laser pulses and keeping the target diameter below 0.5 mm.
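The quoted totals of 126 fs and 140 fs are consistent with adding the three contributions in quadrature, as in the expression above; the sketch below simply evaluates that sum for the two target diameters.

```python
import math

def total_resolution_fs(t_laser, t_gvm, t_electron):
    """Quadrature sum of the three broadening contributions (all in fs)."""
    return math.sqrt(t_laser**2 + t_gvm**2 + t_electron**2)

t_electron = 100.0   # assumed electron pulse duration, fs
gvm_per_mm = 60.0    # residual group-velocity-mismatch broadening, fs per mm (from the text)

for diameter_mm, t_laser in [(0.5, 71.0), (1.0, 78.0)]:
    t_gvm = gvm_per_mm * diameter_mm
    print(f"target {diameter_mm} mm: T_total = {total_resolution_fs(t_laser, t_gvm, t_electron):.0f} fs")
```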
In many photochemistry experiments, the photon energy required for excitation is in the ultraviolet range. Here, we discuss how changing the wavelength will affect the overall time resolution. Take the third harmonic, λ = 267 nm, as an example. To keep the tilt angle the same, the grating constant d must be tripled to compensate for the wavelength change, according to equations (1) and (2). Substituting this change into equation (4), we obtain that the tilt angle variation around the image plane will be the same as for the 800 nm pulse. The change in the pulse duration is described by equation (3), where the wavelength dependent term inside the square root is proportional to λ 6 Ψ 4 or λ 6 d 4 . Thus, at the shorter wavelength, the spreading in the pulse duration will be less severe if the same tilt angle is used. Therefore, the time resolution broadening due to the variation of the tilt angle around the image plane will be independent of wavelength, whereas the broadening due to pulse duration will be reduced for shorter wavelengths.
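Taking the λ⁶d⁴ scaling at face value, the relative size of the dispersion-induced broadening term for the two wavelengths can be evaluated directly; the estimate below is an illustration of the scaling stated above, not a result from the paper, and suggests roughly an order-of-magnitude reduction at 267 nm.

```python
# Relative size of the dispersion-induced broadening term, following the lambda^6 * d^4
# scaling quoted above, when moving from 800 nm to 267 nm while tripling the groove
# density d to keep the tilt angle unchanged.
lam_800, lam_267 = 800.0, 267.0
d_ratio = 3.0  # groove density tripled

scaling = (lam_267 / lam_800) ** 6 * d_ratio ** 4
print(f"broadening term at 267 nm relative to 800 nm: {scaling:.2f}")  # ~0.11, i.e. ~9x smaller
```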
Conclusion
We experimentally studied the generation and measurement of a high-intensity tilted-front femtosecond laser pulse for velocity matching in UED experiments. The tilted pulse was generated by imaging the surface of a diffraction grating onto the target position. We measured the tilt angle and pulse duration in the range of ±3 mm around the image plane using the SHG method and an interferometric method. The interferometric method is better suited for in situ measurements, as it requires only a measurement of the interference pattern at the target position. With an input pulse duration of 60 fs, we measured a pulse duration of 66 fs on the image plane, and the duration stayed below 100 fs for a distance of ±0.5 mm around the image plane. We showed that the pulse duration does not vary significantly across the beam. Our optical configuration is well suited for applications that require a large tilt angle and high fluence, and it can be used to reach an intensity above 10¹⁴ W cm⁻² on the image plane while preserving the pulse duration and tilt angle through the target. For UED experiments, tilted pulses could be used in combination with methods to produce femtosecond electron pulses [1,26,27] to break the 100 fs resolution barrier in gas-phase experiments. | 5,357.8 | 2014-08-04T00:00:00.000 | [
"Physics"
] |
Sub-cycle coherent control of ionic dynamics via transient ionization injection
Ultrafast ionization of atoms or molecules by intense laser pulses creates extremely non-stationary ionic states. This process triggers attosecond correlated electron-hole dynamics and the subsequent ultrafast non-equilibrium evolution of matter. Here we investigate the interwoven dynamic evolution of neutral nitrogen molecules together with nitrogen ions created through transient tunnel ionization in an intense laser field. Based on the proposed theoretical framework, it is found that nitrogen molecular ions are primarily populated in electronically excited states, rather than remaining in the ground state as predicted by the well-known tunneling theory. This unexpected result is attributed to the sub-cycle switch-on of the time-dependent polarization by transient ionization and to dynamic-Stark-shift-mediated near-resonant multiphoton transitions. These findings corroborate the mechanism of nitrogen molecular ion lasing and are likely to be universal. The present work opens a route to exploring the important role of transient ionization injection in strong-field-induced non-equilibrium dynamics. Modelling the coherent dynamics of molecular nitrogen ions during strong-field ionisation is challenging as it constitutes an open, many-body quantum system. Here, the continuous injection of ions via the transient strong field is shown to increase the population of excited states, thereby making possible the population inversion needed for ion lasing.
The fundamental process of strong-field ionization of atoms by intense ultrashort laser pulses occurs on attosecond timescales [1][2][3] and persists for the femtosecond duration of the pulse. The temporally confined strong-field ionization (SFI) creates broad-bandwidth non-stationary ionic states along with the launch of an attosecond electron wavepacket, forming the foundation of attosecond physics [4][5][6][7][8]. One of the key problems is how the coherence of the ionic states affects the subsequent ultrafast non-equilibrium evolution, ranging from charge migration in molecules 9 and electron transport in condensed matter 10 to ultrafast laser processing of materials 11. It is particularly intriguing to ask whether the further interaction of the ions with the laser can be treated independently, by assuming that the prior ionization has been completed.
Recent experiments indicate that the interplay of sub-cycle SFI and the subsequent ion-laser coupling is indispensable for nitrogen molecular ion lasing [12][13][14][15][16], which suggests a new possibility of manipulating the ion coherence upon its creation, toward ion-based quantum optics. While SFI itself can be described well by the celebrated Keldysh tunneling theory 17, a complete model treating the ionization of neutrals and the laser-ion coupling on an equal footing is still lacking. Theoretically, dealing with the bound multielectron problem is already a difficult task, and it is even more challenging for open quantum many-body systems when ionization is involved. Exact time-dependent multielectron theories are limited to two-electron cases 18 or struggle with ionization-induced derivative discontinuities in density functionals 19. Many theoretical works have been devoted to the coherent ionic evolution in a multi-channel formalism 20-24, but the ion-laser coupling within the ionization process remains elusive.
Consider the ionization of nitrogen molecules by intense laser pulses. As illustrated in Fig. 1, ionization creates the ion in one of three possible electronic states at the moment t_i by releasing an electron; the remaining linearly polarized laser field will then further induce coherence and population transfer among the electronic and vibrational states, with couplings that depend on the geometry of the molecular ion. According to tunneling ionization theory 17,25, the population of the excited ionic states is expected to be much less than that of the ionic ground state. However, due to the continuous presence of the laser field, the ions created by SFI will be polarized by the field 26, causing the electron to oscillate back and forth between electronic states.
In the present work, by treating the nitrogen molecule as an open quantum system, we systematically investigate the role of the transient injection of ions by SFI in the ultrafast population redistribution of ionic states. As we will demonstrate, the sub-cycle turn-on of the polarization of the ion upon its creation breaks the time-reversal symmetry, and it is thus possible to enhance the population of the higher excited states. Furthermore, in the case of resonance, the population of the excited states can also be greatly increased by multiphoton couplings mediated by the dynamic Stark shift due to the instantaneous polarization. The ultrafast population redistribution of the electronic states of molecular ions assisted by transient ionization might be a universal process for other molecules ionized from multiple orbitals and undergoing bond breaking, and SFI can thus serve as an intrinsic attosecond probe of the sub-cycle ionic coherence. Our findings shed more light on the SFI-coupling mechanism and provide crucial implications for further research on coherent emission from molecular ions.
Results
Population inversion and vibrational redistribution. We now present our results, which are obtained by solving the ionization-coupling model described in the Methods section. The details of the calculation can be found in Supplementary Notes 1 and 2. We first discuss the nitrogen molecule whose axis is aligned at 45° with respect to the pump laser polarization, so that both the X-A and X-B transitions are permitted. The rotational degree of freedom is assumed to be frozen in the present work. The solid curves in Fig. 2a show the population evolution of the three electronic states of N₂⁺ in the pump laser field obtained by solving the ionization-coupling equation. It can be clearly seen that the final populations of the states A and B are prominently increased while that of the state X is greatly decreased, in comparison to the case in which only ionization is considered (the dashed curves in Fig. 2a). With the help of the transient injection, the populations of the states B and X are thus inverted. This is striking in view of the fact that other processes, e.g., shake-up 27,28 or recollision 29,30, are unlikely to produce an ionic population inversion.
The vibrational state-resolved distributions for each electronic state are displayed in Fig. 2b-d. As can be seen, for the electronic state X, ionization itself mainly populates the vibrational states v = 0, 1 because of the relatively large Franck-Condon factors of these two states 31. However, when the coupling is incorporated, the vibrational population of the state v = 0 is largely reduced while higher vibrational states become more populated. This can be attributed to vibrational Raman-like processes occurring when a strong coherent coupling is produced between the states X and A. As a consequence of the reduction of the X-state population, the population of virtually all vibrational levels of the state A is efficiently enhanced owing to the one-photon X-A resonant transition. Remarkably, the vibrational-state population of the state B (especially for v = 0) is considerably increased as well, although the laser frequency is far from the X-B resonance. Note that the vibrational energy gaps between the states A(v = 0-4) and X(v = 0) span from 1.12 to 2.06 eV, and the driving laser photon energy ranges from 1.38 to 1.77 eV (700-900 nm). For the excited state B(v = 0-4), the vibrational energy differences with respect to the ground state X(v = 0) are in the range of 3.17-4.37 eV. Population transfer from the state X(v = 0) is therefore only accessible to the high vibrational states B(v = 3, 4) through a direct three-photon coupling.
Sub-cycle control of laser-ion coupling via transient ionization injection. In order to better understand the observed population enhancement of the lower vibrational states (i.e., v = 0, 1, 2) of the state B, we fixed the nitrogen molecular axis to be parallel to the laser polarization, by which the A-X coupling is avoided. In the following, we mainly focus on the B(v = 0) state because the manipulation of its population is critical to air lasing at 391 nm 12,14,15,[32][33][34][35].

Fig. 1 Transient ionization-coupling model. Illustration of the dynamic processes of nitrogen molecules in an intense laser pulse. Strong field ionization injects the ions into three possible electronic states that are driven by the remaining laser pulse. Note that the X-B transition is parallel and the X-A transition is perpendicular to the molecular axis. HOMO, highest occupied molecular orbital.

Figure 3a shows the evolution of the vibrational population of the ionic states B(v = 0) and X(v = 0) for the parallel-alignment case. Again, the population of the state B(v = 0) is clearly promoted with the aid of simultaneous ionization and coupling. To investigate the influence of ionization on the coupling between B(v = 0) and X(v = 0), we first consider ionization injection at the peak laser intensity and then follow the evolution of the ions in the second half of the laser field E(t) = f(t) cos(ωt + ϕ) for different phases ϕ. The corresponding laser fields for ϕ = 0, π/4, π/2 are depicted in the top portion of Fig. 3b. The injected population is placed entirely in the ground state X of N₂⁺, i.e., ρ⁺_XX(v = 0) = 1. Figure 3b plots the final population of the state B(v = 0) as a function of the initial phase ϕ for the pump wavelength 800 nm. Interestingly, the obtained population of the state B(v = 0) is strongly modulated with a period of π, and the maximum value is achieved at ϕ = 0. This explains why the population yield calculated in ref. 13, which chose ϕ = π/2, is less than that calculated in ref. 12, which chose ϕ = 0, using their three-state coupling models.
We now consider how the coupling varies with the field profile, assuming that the ionization injection occurs at the peak field of each optical cycle (corresponding to a fixed ϕ = 0). The populations of the B(v = 0) and X(v = 0) states of N₂⁺ after the laser-ion coupling are shown in Fig. 3c as a function of the instant of transient ionization injection. The yield population of the state B(v = 0) shows a nearly Gaussian dependence on the moment of ionization injection, following the intensity profile of the pump laser, as depicted by the black squares in Fig. 3c. The stronger the transient electric field at the injection of the ground-state ions, the more population is found on the state B(v = 0), and conversely the less on the state X(v = 0), after the pulse has ended.
Multiphoton transition mediated by dynamic Stark shift. Further enhancement of the population of the state B(v = 0) can be achieved by resonant transitions from X(v = 0). Since this process is sensitive to the central wavelength of the laser pulse, we calculated the population dependence on the driving laser frequency by solving the ionization-coupling equation. As shown in Fig. 4a, a three-photon resonant peak for B(v = 0) appears at a wavelength of 900 nm, which deviates from the field-free three-photon resonant wavelength (1173 nm). An additional resonant peak around 1560 nm corresponds to a five-photon resonance. Figure 4b shows the frequencies of the two resonant peaks as a function of the peak intensity of the driving laser field. Both the three-photon and five-photon resonant peaks exhibit a linear dependence on the peak intensity of the laser field. Surprisingly, the shift of the resonance is independent of the driving frequency, which can be attributed to the Stark shift produced by the instantaneous polarization of the laser-coupled X and B states at the instant of ionization injection. Figure 4c shows the sub-cycle control of the population of the state B(v = 0) by changing the envelope phase ϕ at the resonant wavelength of 900 nm. It can be seen that the population of the state B(v = 0) is again modulated with a period of π. However, the maximum value is no longer at ϕ = 0 and the minimum is not reached at ϕ = π/2, which signifies that the polarization mechanism is not dominant in the case of resonance. The irregular modulation (black circles) indicates that a multi-channel interference might contribute to the population increase of the state B(v = 0). Figure 4d shows the dependence of the population yields on the injection time. A striking difference from the results in Fig. 3c is that, in the case of resonance, the population of the state B(v = 0) is closely related to the post-ionization interaction time. The population of the state B(v = 0) shows a gradual decay with decreasing coupling time under the current conditions. It should be noted that the three-photon resonant transfer from X(v = 0) to B(v = 0) at the resonant wavelength takes place during the evolution of the quantum coherent system driven by ultrafast polarization. Therefore, the population increase of the state B(v = 0) should originate from the interference of two channels, i.e., polarization and three-photon resonant coupling.
Discussions
The above results in Fig. 3 can be qualitatively explained with ultrafast polarization theory. At a particular time, nitrogen molecular ions are prepared by transient SFI, which can be regarded as an ultrafast pump for the ionic system. Immediately following SFI, the ion is polarized by the instantaneous laser field as illustrated by the contour plot shown in Fig. 3c.
Since the electronic states B and X form a pair of charge-resonance states, strong polarization can occur due to their coherent coupling in the residual laser field. For different injection instants of ionization, the polarization state of the ions is different, resulting in different population increments of the state B(v = 0). For the case in Fig. 3b, the probability of populating the state B(v = 0) is greatest at ϕ = 0, while at ϕ = π/2 it is smallest. When different ionization injection moments at the same envelope phase are compared in Fig. 3c, the final population of the state B(v = 0) varies because the instantaneous polarization is proportional to the transient electric field. Therefore, transient ionization injection acts as an ultrafast probe of the quantum coherent system consisting of the neutral molecules and the molecular ions. Note that we ignore the dynamic anti-screening due to the ionic core polarization 26 in the tunneling ionization of the valence-shell electrons.
The injection of ionization can be considered as an ultrafast streaking of the laser-driven ionic dynamics, analogous to the attosecond streaking and transient absorption techniques 4,5,36,37. Achieving an experimental measurement analogous to attosecond streaking is a challenging and open problem because of the lack of an attosecond pulse in the visible wavelength range. However, two approaches may allow the transient-injection-induced population of the B state to be measured. With the aid of high-harmonic sources, the phase-dependent population of the X or B state can be detected by attosecond absorption spectroscopy based on the transitions from X to C (i.e., C²Σu⁺) or from B to D (i.e., D²Πg). Another possible means is to measure the absorption spectrum from the X to the B state after compressing a 400 nm pulse to a few cycles, in addition to the polarization gating technique 15,38. The Fourier transform of the phase-dependent populations of the state B in Fig. 4c quantitatively gives the relative contributions of multiple-photon dressing, which is shown in Supplementary Fig. 1. A concrete analysis based on an analytic solution is given in Supplementary Note 2. It is worth mentioning that for the non-parallel alignment case, transient ionization can influence both the A-X and B-X couplings, which is discussed in Supplementary Note 3. Last but not least, in the current model we have ignored the ionization delays during the injection of ions and the coherence brought by ionization. A more complete calculation including these effects will be carried out in the future.
In conclusion, we have systematically investigated the coherent evolution of nitrogen molecular ions that are continuously injected by transient SFI in an intense laser field. It is found that the population of the excited states of the nitrogen molecular ion can be greatly increased owing to the transient-injection-induced collapse of the quantum system and to resonant multiphoton couplings. Our findings provide crucial clues for creating high-intensity air lasers using femtosecond laser pulses and highlight the importance of ionization-induced coherence. The proposed theoretical framework allows us to treat transient ionization and laser-ion coupling for open quantum systems in strong laser fields. It can be generalized to explore the important role of transient ionization injection in other strong-field-induced non-equilibrium dynamics, e.g., the localization of electrons during dissociative ionization and the autoionization of molecules in intense laser pulses.
Methods
Strong field ionization-coupling equation for ionic dynamics. We consider tunnel ionization mainly from three N₂ molecular orbitals, i.e., the HOMO (highest occupied molecular orbital), HOMO-1, and HOMO-2, in a linearly polarized pump laser field with a peak intensity of 3 × 10¹⁴ W cm⁻², a pulse duration of 15 fs and a central wavelength of 800 nm. The ionization energies of these orbitals are 15.6 eV, 17 eV and 18.8 eV, respectively 39. The molecular Ammosov-Delone-Krainov (MO-ADK) theory is employed to calculate the transient ionization rates of the three orbitals 25. Once the three ionic states X²Σg⁺ (i.e., X), A²Πu (i.e., A) and B²Σu⁺ (i.e., B) are prepared at the moment of ionization, population couplings among them take place in the residual laser field.
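For orientation only, the hierarchy of ionization yields expected from the three orbitals can be illustrated with the dominant exponential factor of a quasi-static (ADK-like) tunnelling rate. This is a deliberately simplified stand-in for the MO-ADK rates actually used in the paper, which also contain orbital- and alignment-dependent prefactors; the conversions below are standard, but the resulting numbers should be read only as relative orders of magnitude.

```python
import math

EV_TO_AU = 1.0 / 27.2114
I_W_CM2 = 3e14                          # peak intensity quoted in the text
F0 = math.sqrt(I_W_CM2 / 3.51e16)       # peak field in atomic units, ~0.09 a.u.

def tunnel_exponent(ip_ev, field_au):
    """Dominant quasi-static tunnelling factor exp(-2 (2*Ip)^(3/2) / (3*F)), exponential part only."""
    kappa3 = (2.0 * ip_ev * EV_TO_AU) ** 1.5
    return math.exp(-2.0 * kappa3 / (3.0 * field_au))

weights = {name: tunnel_exponent(ip, F0) for name, ip in
           [("HOMO (X)", 15.6), ("HOMO-1 (A)", 17.0), ("HOMO-2 (B)", 18.8)]}
ref = weights["HOMO (X)"]
for name, w in weights.items():
    print(f"{name}: relative weight {w / ref:.2f}")
```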
Considering both the transient ionization injection and the coupling effects, the evolution of the ionic density matrix ρ⁺(t) in an intense laser field is given by Eq. (1), dρ⁺(t)/dt = −(i/ħ)[H_I(t), ρ⁺(t)] + (∂ρ⁺/∂t)_inj, where H_I is the ionic Hamiltonian in the interaction picture, whose explicit form is provided in Supplementary Note 1. The last term in Eq. (1), which describes the continuous injection of ions by transient ionization within the full laser pulse, is one of the significant advances of this work. Its nondiagonal elements are assumed to be zero, owing to the vanishing coherence resulting from transient ionization, and its diagonal elements are given by the injection rates Γ_iv(t)ρ₀(t), where i = X, A, B labels the electronic states and v = 0-4 are the vibrational quantum numbers. The neutral population ρ₀ decays according to dρ₀/dt = −Σ_{i,v} Γ_iv ρ₀, where Γ_iv(t) are the ionization rates to the respective electronic-vibrational states. For the time being, the rotational states are unresolved and dissipation by collisions is ignored because of the short pulse duration.

Fig. 3 Coherent manipulation of ionic population by subcycle ionization injection. a Dynamic evolution of the electronic-state populations of N₂⁺ when the molecular axis is parallel to the laser polarization. b Variation of the B(v = 0) population with the initial injection phase ϕ after interacting with a half laser pulse at 800 nm. The corresponding laser fields as functions of time for ϕ = 0, π/4, π/2 are depicted. c Dependence of the yield population on the injection instants within the full laser pulse. At these instants, the laser field takes its local maximum in each optical cycle. The contour plot illustrates the transient ionic states that are polarized according to the instantaneous laser field.
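To make the structure of the ionization-coupling equation above concrete, the following is a minimal two-level sketch: a density matrix evolves coherently under a field-dependent Hamiltonian while population is injected into its diagonal from a decaying neutral reservoir. All parameters (level spacing, dipole, field strength, injection-rate model) are illustrative placeholders and not the values of the three-state, vibrationally resolved model solved in this work.

```python
import numpy as np
from scipy.linalg import expm

# Placeholder parameters in atomic units (hbar = 1)
w_X, w_B = 0.0, 0.117                   # energies of the two ionic levels (~3.2 eV gap)
mu = 0.9                                # transition dipole
w_laser, F0, tau = 0.057, 0.08, 400.0   # ~800 nm carrier, field amplitude, envelope width
Ip = 0.57                               # ionization potential of the neutral (~15.5 eV)

def field(t):
    return F0 * np.exp(-(t / tau) ** 2) * np.cos(w_laser * t)

def gamma_inj(t):
    """Crude field-dependent injection rate into the ionic ground state (exponential factor only)."""
    f = abs(field(t)) + 1e-12
    return 1e-3 * np.exp(-2.0 * (2.0 * Ip) ** 1.5 / (3.0 * f))

dt = 0.25
times = np.arange(-1200.0, 1200.0, dt)
rho = np.zeros((2, 2), dtype=complex)   # ionic density matrix, initially empty
rho0 = 1.0                              # neutral population

for t in times:
    H = np.array([[w_X, -mu * field(t)], [-mu * field(t), w_B]], dtype=complex)
    U = expm(-1j * H * dt)              # propagator for the coherent part of Eq. (1)
    rho = U @ rho @ U.conj().T
    g = gamma_inj(t)
    rho[0, 0] += dt * g * rho0          # diagonal injection only: no coherence is injected
    rho0 *= 1.0 - dt * g                # corresponding depletion of the neutral population

print(f"final populations: X = {rho[0, 0].real:.3e}, B = {rho[1, 1].real:.3e}, neutral = {rho0:.6f}")
```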
Data availability
Simulation data and figures are available from Z.Z. upon reasonable request. | 4,450.6 | 2020-03-12T00:00:00.000 | [
"Physics"
] |
Kinases of two strains of Mycoplasma hyopneumoniae and a strain of Mycoplasma synoviae: An overview
Mycoplasma synoviae and Mycoplasma hyopneumoniae are wall-less eubacteria belonging to the class Mollicutes. These prokaryotes have a reduced genome size and reduced biosynthetic machinery. They cause great losses in animal production. M. synoviae is responsible for an upper respiratory tract disease of chickens and turkeys. M. hyopneumoniae is the causative agent of enzootic pneumonia in pigs. The complete genomes of these organisms revealed 17 ORFs encoding kinases in M. synoviae and 15 in each of the M. hyopneumoniae strains. Four kinase genes were restricted to the avian pathogen, while three were specific to the pig pathogen, when compared to each other. All deduced kinases found in the non-pathogenic strain (J [ATCC 25934]) were also found in the pathogenic M. hyopneumoniae strain. The enzymes were classified into nine families comprising five fold groups.
Introduction
Edmond Nocard and Emile Roux successfully cultivated the agent of the contagious bovine pleuropneumonia, Mycoplasma mycoides, over a century ago (Nocard and Roux, 1898). Since that time, approximately 111 species of the genus Mycoplasma have been identified in animals. These and other 102 species comprise the class Mollicutes (Minion et al., 2004). These prokaryotes are known as the smallest self-replicating organisms (Glass et al., 2000; Westberg et al., 2004). Most members of this class are pathogenic and colonize a wide variety of hosts, such as animals, plants and insects. Mollicutes represent a group of low-G+C-content eubacteria that are phylogenetically related to the Clostridium-Streptococcus-Lactobacillus branch of the phylum (Woese et al., 1980; Rogers et al., 1985; Maniloff, 1992). As a consequence of the reduced biosynthetic machinery, Mollicutes live in nature as obligate parasites and depend on the uptake of many essential molecules from their hosts (Papazisi et al., 2003). Thus, they have been considered model systems for defining the minimal set of genes required for a living cell (Morowitz, 1984).
Although Mollicutes have a simple genome, mycoplasma diseases are complex and relatively poorly understood (Minion et al., 2004). One hallmark of these diseases is their chronicity (Ross, 1992), but equally important is the ability to alter or circumvent the immune response and to potentiate diseases caused by other pathogens (Ciprian et al., 1988; Thacker et al., 1999; Muhlradt, 2002). A key factor in the ability of mycoplasmas to establish a chronic infection is their genome flexibility, which allows them to produce a highly variable mosaic of surface antigens (Citti and Rosengarten, 1997; Chambaud et al., 1999; Shen et al., 2000; Assunção et al., 2005).
In recent years, the genomes of ten mycoplasma species have been completely sequenced (Himmelreich et al., 1996; Glass et al., 2000; Chambaud et al., 2001; Sasaki et al., 2002; Berent and Messik, 2003; Papazisi et al., 2003; Westberg et al., 2004; Jaffe et al., 2004; Minion et al., 2004). Recently, the complete genomes of a pathogenic (7448) and a nonpathogenic (J [ATCC 25934]) strain of Mycoplasma hyopneumoniae, as well as the complete genome of a strain (53) of Mycoplasma synoviae, were obtained (Vasconcelos et al., 2005). Both species have a great adverse impact on animal production. M. hyopneumoniae is the causative agent of porcine enzootic pneumonia, a mild, chronic pneumonia of swine, commonly complicated by opportunistic infections with other bacteria (Ross, 1992). Like most other members of the order Mycoplasmatales, M. hyopneumoniae is infective for a single species, but the mechanisms of host specificity are unknown. M. synoviae is a major poultry pathogen throughout the world, causing chronic respiratory disease and arthritis in infected chickens and turkeys (Allen et al., 2005).
Kinases play indispensable roles in numerous cellular metabolic and signaling pathways, and they are among the best-studied enzymes at the structural, biochemical, and cellular levels. Despite the fact that all kinases use the same phosphate donor (in most cases, ATP) and catalyze apparently the same phosphoryl transfer reaction, they display remarkable diversity in their structural folds and substrate recognition mechanisms, probably due largely to the extraordinarily diverse nature of the structures and properties of their substrates (Cheek et al., 2005).
Complete genome sequencing identified 679, 681 and 694 open reading frames (ORFs) in M. hyopneumoniae strains J (Mhy-J) and 7448 (Mhy-P) and M. synoviae strain 53 (Msy), respectively. Analysis of these mycoplasma genomes with bioinformatics tools identified 15 Mhy-J ORFs, 15 Mhy-P ORFs and 17 Msy ORFs, all of which encode kinases. Due to the biological importance of these enzymes, we expect that their study will improve the comprehension of the reduced biosynthetic pathways in mollicutes.
Methods
By using previous results from the complete genomes of M. synoviae and of M. hyopneumoniae strains J and 7448 as input to BLAST search tools, we obtained 17 ORFs encoding kinase homologues in M. synoviae and 15 in both strains of M. hyopneumoniae. Putative biological functions of the kinases were deduced by using the Pfam interface and InterPro information. The classification of enzymes into fold groups and families was performed by following the scheme described by Cheek et al. (2005). In brief, all kinase sequences from the NCBI non-redundant database were assigned to a set of 57 profiles describing catalytic kinase domains by using the hmmsearch program of the HMMER2 package (Eddy, 1998). Sequences from each Pfam/COG profile presenting significant PSI-BLAST (Altschul et al., 1997) hits to each other were clustered into the same family. Families in the same fold group share structurally similar nucleotide-binding domains that have the same architecture and topology (or are related by circular permutation) for at least the core of the domain. Multiple sequence alignments were generated using the ClustalX 1.81 software (Thompson et al., 1997). The amino acid sequence relationships were generated from the predicted protein sequences of the 47 kinase-encoding ORFs identified in the complete genome sequences of M. synoviae and M. hyopneumoniae. A phylogenetic tree was constructed from the multiple sequence alignments (pairwise alignments) using the ClustalX 1.81 program (Thompson et al., 1997) and visualized using the TreeView software. The tree was constructed using the minimum evolution (neighbor-joining) method (Saitou and Nei, 1987).
Robustness of branches was estimated using 100 bootstrap replicates.
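As an illustration of the tree-building step, the fragment below re-creates a neighbor-joining tree from a multiple sequence alignment using Biopython rather than the ClustalX/TreeView programs used here; the input file name is hypothetical and the bootstrap analysis is omitted.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# "kinases.aln" is a hypothetical Clustal-format alignment of the 47 deduced kinase sequences
alignment = AlignIO.read("kinases.aln", "clustal")

calculator = DistanceCalculator("identity")        # pairwise distances from the alignment
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)             # neighbor-joining (Saitou and Nei, 1987)

Phylo.draw_ascii(tree)                             # quick text rendering of the resulting tree
```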
Mycoplasma kinases
In this study we briefly review the kinase genes of M. hyopneumoniae and M. synoviae, and we describe a classification and comparative metabolic analysis of the kinases of these organisms. In the genome sequences we identified a total of 47 kinase-encoding ORFs, which are related to several different biosynthetic pathways, such as purine and pyrimidine metabolism, glycolysis, pyruvate metabolism, as well as cofactor metabolism and others (Table 1). The two M. hyopneumoniae strains have equal numbers of the same kinase-encoding ORFs. Three of these are absent in M. synoviae (glycerol kinase, glucokinase and 5-dehydro-2-deoxygluconokinase), which in turn has 17 ORFs that encode kinases. Four of them (three ORFs encoding deoxyguanosine kinase and one ORF encoding N-acetylmannosamine kinase) are exclusive to this species when compared to M. hyopneumoniae strains J and 7448 (Table 1). These differences between the two species could be related to the specific nutritional requirements encountered by each pathogen in its respective host. All kinases found in the pathogenic strain of M. hyopneumoniae (7448) were also identified in the nonpathogenic strain (J). This finding could be explained by the fact that such enzymatic activities may be essential to Mollicutes, which have a reduced metabolism.
Kinase classification
The classification of the kinases found in M. hyopneumoniae strains J and 7448, as well as in M. synoviae, was performed according to the description of Cheek et al. (2005). Here, the definition of kinase was restricted to enzymes which catalyze the transfer of the terminal phosphate group from ATP to a substrate containing an alcohol, nitrogen, carboxyl or phosphate group as the phosphoryl acceptor. The classification scheme lists a total of 25 kinase family homologues, which are assembled into 12 groups based on the similarity of the structural fold. Within a fold group, the core of the nucleotide-binding domain of each family has the same architecture, and the topology of the protein core is either identical or related by circular permutation (Cheek et al., 2005). In the two M. hyopneumoniae strains and in the M. synoviae strain, the 47 identified ORFs code for 18 different kinases classified into nine families. These were grouped into five fold groups, as shown in Table 2. Fold Group 2 (Rossmann-like) contains 11 enzymes divided into five families, in which all seven members of the P-loop kinase family are proteins involved in purine and pyrimidine metabolism. The remaining four members of this group fall into four families which, together with four members of Group 4 and a member of Group 5 (TIM β/α barrel kinase), are involved in carbohydrate metabolism. Group 1 (protein S/T-Y kinase) and Group 8 (riboflavin kinase) are each represented by only one enzyme, which participate in signaling cascades and riboflavin metabolism, respectively.
Nucleotide metabolism and kinases
Mollicutes are unable to synthesize purines and pyrimidines by de novo pathways, and guanine, guanosine, uracil, thymine, thymidine, cytidine, adenine and adenosine may serve as precursors for nucleic acids and nucleotide coenzymes in these organisms (Himmelreich et al., 1996). They synthesize ribonucleotides only by the salvage pathway. In the complete genomes of M. hyopneumoniae and M. synoviae we identified six kinases in the first and seven kinases in the second, all of which catalyze key steps in the nucleotide salvage pathway. Deoxyribonucleotides are produced from ribonucleotides by a ribonucleoside diphosphate reductase. Adenine, guanine and uracil can be metabolized to the corresponding nucleoside monophosphate by adenine phosphoribosyltransferase, hypoxanthine-guanine phosphoribosyltransferase and uracil phosphoribosyltransferase, respectively. ADP, GDP, UDP and CDP are generated by adenylate, guanylate, uridylate and cytidylate kinases. Only M. synoviae has three ORFs encoding deoxyguanosine kinase, which can convert deoxyguanosine to dGMP. However, a nucleoside diphosphate kinase (ndk), the main enzyme for the production of NTPs from NDPs, was not found in the M. hyopneumoniae and M. synoviae genomes. This finding is in agreement with data from other Mollicutes genome sequences. It was proposed that the absence of an ndk gene ortholog in Mollicutes could be compensated by 6-phosphofructokinases, phosphoglycerate kinases, pyruvate kinases, and acetate kinases. In addition, besides the reactants ADP/ATP, these organisms could use other ribo- and deoxyribo-purine and pyrimidine NDPs and NTPs (Pollack et al., 2002). As in M. penetrans, important enzymes such as uridine kinase and pyrimidine nucleoside phosphorylase, which convert cytosine to CMP, are also missing in the two species. The synthesis of CTP from UTP by CTP synthetase is possible only in the two M. hyopneumoniae strains. The production of deoxythymidine diphosphate from thymidine may be performed by thymidine and thymidylate kinases. A gene encoding ribose-phosphate pyrophosphokinase is present, and this enzyme would produce 5-phosphoribosyl diphosphate, a crucial component in nucleotide synthesis. All kinases involved in the nucleotide salvage pathway fall into fold Group 2. Moreover, only ribose-phosphate pyrophosphokinase is not in the P-loop kinase family of this group.
Kinases involved in the metabolism of carbohydrates
Both M. hyopneumoniae and M. synoviae have the entire set of genes responsible for glycolysis. As in M. pulmonis (Chambaud et al., 2001), M. hyopneumoniae strain 232 (Minion et al., 2004), and M. mobile (Jaffe et al., 2004), glycolysis in M. hyopneumoniae J and 7448 can begin with the direct phosphorylation of glucose by glucokinase (Group 4; ribonuclease H-like family). Alternatively, as described for other Mollicutes (Fraser et al., 1995; Himmelreich et al., 1996; Glass et al., 2000), M. synoviae produces glucose 6-phosphate only by the action of the phosphoenolpyruvate-dependent sugar phosphotransferase system. The two species M. hyopneumoniae and M. synoviae have a 6-phosphofructokinase (Group 2; phosphofructokinase-like family), a phosphoglycerate kinase (Group 2; phosphoglycerate kinase family) and a pyruvate kinase (Group 5; TIM β/α barrel kinase family). These three key enzymes also participate in the glycolysis pathway, as in other Mollicutes. In addition, they have an acetate kinase (Group 4; ribonuclease H-like family), an essential enzyme in the production of acetyl-CoA from acetate.
Even though M. synoviae and the M. hyopneumoniae strains have glycerol transporter-related proteins, only the latter species presents a glycerol kinase (Group 4; ribonuclease H-like family), an enzyme which could directly convert glycerol to glycerol 3-phosphate. This product is then converted into glyceraldehyde 3-phosphate.
In their amino sugar metabolism, mycoplasmas can also produce fructose 6-phosphate (F6P) from N-acetyl-D-glucosamine. In this pathway, the M. synoviae N-acetylmannosamine kinase (Group 4; ribonuclease H-like family) catalyzes a key reaction in the production of F6P from N-acetyl-neuraminate. Even though both species lack the inositol metabolism pathway, only M. hyopneumoniae presents a 5-dehydro-2-deoxygluconokinase (Group 2; thiamin pyrophosphokinase family), an enzyme which catalyzes a step in this pathway. The presence of specific kinases in the M. synoviae and M. hyopneumoniae (strains J and 7448) genomes indicates that different metabolic routes may be used by each mycoplasma in response to the specific nutritional conditions found by each pathogen in its respective host environment.
Riboflavin metabolism and kinases
M. hyopneumoniae and M. synoviae lack the enzymes that synthesize many coenzymes and cofactors. However, they produce flavin adenine dinucleotide (FAD) from riboflavin. This process is performed in two steps: in the first step, riboflavin kinase phosphorylates riboflavin to form flavin mononucleotide (FMN); next, FMN is converted to flavin adenine dinucleotide (FAD) by an FMN adenylyltransferase (Karthikeyan et al., 2003). FAD is an enzyme cofactor used in several metabolic pathways. In M. synoviae and M. hyopneumoniae, the two steps are performed by a single bifunctional riboflavin kinase/FMN adenylyltransferase, as also occurs in other bacteria (Manstein et al., 1986; Mack et al., 1998). It is a unique enzyme and the only representative of fold Group 8.
Amino acid sequence relationships
In order to investigate the phylogenetic relationships of the kinase families of M. synoviae 53, M. hyopneumoniae J and M. hyopneumoniae 7448, the 47 deduced amino acid sequences of the ORFs encoding kinases were aligned using the ClustalX 1.81 program. Robustness of branches was estimated by using 100 bootstrap replicates.
Figure 1 shows the phylogenetic tree for the kinases, as calculated with the neighbor-joining method. The tree was rooted with Group 1, since it has only one representative. The kinase sequences were well resolved into clades. The P-loop kinase family of Group 2 (Rossmann-like) was clustered into four subclades (Figure 1, letters A, B, C and D). Subclades B and C comprise sequences from M. synoviae, M. hyopneumoniae J and M. hyopneumoniae 7448 implicated in the phosphorylation of monophosphate nucleotides. Thymidylate kinase and deoxyguanosine kinase convert TMP to TDP and deoxyguanosine to dGMP, respectively. Although these enzymes have different functions, they have structurally similar nucleotide-binding domains, following the classification described by Cheek et al. (2005). The other members of the Rossmann-like group, namely the phosphoglycerate kinase, ribokinase-like and thiamine pyrophosphokinase families, clustered in individual groups. The sequences from Group 4 formed four clades. Although belonging to the same fold group, they are implicated in different metabolic pathways.
Concluding Remarks
In the complete genomes of M. synoviae strain 53 and M. hyopneumoniae strains J and 7448, we identified kinases involved in many essential metabolic pathways, such as carbohydrate, purine, pyrimidine and cofactor metabolism. The presence of these enzymes reveals the metabolic machinery utilized by these organisms, which are considered minimalist models.
Figure 1 -
Phylogenetic tree obtained from the kinase amino acid sequence relationships. The kinase fold groups and families are shown in brackets on the right side. The Group 2 (Rossmann-like) P-loop kinases were clustered into four sub-groups (A, B, C and D). The numbers on the branches are bootstrap values obtained with 100 replications. The kinase-encoding ORFs are represented by MSkinase (M. synoviae), MHJkinase (M. hyopneumoniae J) and MHPkinase (M. hyopneumoniae 7448).
Table 1 -
Kinases identified in the M. synoviae and M. hyopneumoniae genomes.
Table 2 -
Classification of M. synoviae and M. hyopneumoniae kinase activities by family and fold group*. | 3,505.6 | 2007-01-01T00:00:00.000 | [
"Biology"
] |
Safety evaluation of a new setup for transcranial electric stimulation during magnetic resonance imaging
BACKGROUND
Transcranial electric stimulation during MR imaging can introduce safety issues due to coupling of the RF field with the stimulation electrodes and leads.
OBJECTIVE
To optimize the stimulation setup for MR current density imaging (MRCDI) and increase maximum stimulation current, a new low-conductivity (σ = 29.4 S/m) lead wire is designed and tested.
METHOD
The antenna effect was simulated to investigate the effect of lead conductivity. Subsequently, specific absorption rate (SAR) simulations for realistic lead configurations with low-conductivity leads and two electrode types were performed at 128 MHz and 298 MHz, the Larmor frequencies of protons at 3T and 7T, respectively. Temperature measurements were performed during MRI using high power deposition sequences to ensure that the electrodes comply with MRI temperature regulations.
RESULTS
The antenna effect was found for copper leads at ¼ RF wavelength and could be reliably eliminated using low-conductivity leads. Realistic lead configurations increased the head SAR and the local head SAR at the electrodes only minimally. The highest temperatures were measured on the rings of center-surround electrodes, while circular electrodes showed little heating. No temperature increase above the safety limit of 39 °C was observed.
CONCLUSION
Coupling to the RF field can be reliably prevented by low-conductivity leads, enabling cable paths optimal for MRCDI. Compared to commercial copper leads with safety resistors, the low-conductivity leads had lower total impedance, enabling the application of higher currents without changing stimulator design. Attention must be paid to electrode pads.
Introduction
Roughly two decades ago, Nitsche and Paulus showed that human motor cortex excitability could be non-invasively modulated by weak electric currents applied through the intact skull by surface electrodes [1]. Since then, the use of transcranial electric stimulation (TES) techniques in neuroscience applications has grown tremendously. There is also increasing interest to apply TES inside magnetic resonance imaging (MRI) scanners. This is motivated by the wish to use functional MRI (fMRI) for characterizing the physiological stimulation effects. More recently, MRI is also applied to shed light on the physical current flow inside the brain. Simulation of the current flow using forward models of the head anatomy [2,3] is feasible, but the accuracies and reliabilities of the results are challenged by a number of factors. For example, the ohmic conductivities of the head tissues at low frequencies are quite uncertain, highlighting the need to validate the simulated fields [4].
MR current density imaging (MRCDI) [5] and MR electrical impedance tomography (MREIT) [6] are two emerging modalities that can indirectly measure the current flow in the brain and the conductivity of the tissue, respectively. These techniques have the potential to improve the accuracy of electric field simulations for TES, as well as for source localization in electro-and magnetoencephalography (EEG and MEG) [7], and can aid in the characterization of pathological tissue [8]. Similar to TES, MRCDI and MREIT use weak currents applied via surface electrodes. The current-induced changes of the static magnetic field are measured and used to determine the current flow or tissue conductivities at low frequency.
The current flow inside the brain changes the magnetic field only slightly, resulting in a low signal-to-noise ratio (SNR) of the measurements and making them prone to artifacts. The effect of the stray fields from the cable currents has previously been studied [9]. Unless the leads are aligned fully parallel to the main magnetic field, the induced stray fields will strongly influence the current-induced magnetic field changes measured in the brain, and therefore the current density and conductivity reconstruction. This can be corrected for by tracking the cables in MR images and using the Biot-Savart law to subtract induced stray fields [10]. Although the correction method significantly improves the current density reconstruction results, it would be preferable to orient the leads parallel to the main magnetic field to reduce residual errors and increase the robustness of the measurement approach.
For TES, MRCDI and MREIT, the currents are applied through lead wires connected to the subject's scalp via surface electrodes. Extra safety measures have to be taken when conductive materials are used in an MR scanner, especially when they are in contact with tissue. Many incidents of patient burns caused by coupling between the RF field and lead wires have been reported [11]. However, no burn incidents have been reported for TES-MRI. Heating of leads can be caused by direct electromagnetic induction in wire loops [12], and highly conductive loops must be avoided during MRI. Another important origin of heating is the antenna effect, which occurs when wires or other conductors of appropriate length act as "receive antennas" for the RF field. With increasing field strength, the antenna effect becomes an increasing problem due to the shorter wavelength of the RF field. Half a wavelength is typically found to be the critical length for heating [13,14], but it has been shown experimentally that the length of lead required to observe the effect also depends on the boundary conditions at each end of the lead [15]. For high impedance at one end (open, or connected to a safety resistor or high-impedance amplifier) and relatively low impedance at the other end (connected to tissue), ¼ of the RF wavelength can be critical as well. Therefore, the design of lead wires that reliably prevents the occurrence of electromagnetic induction and antenna effects despite varying boundary conditions is important, but challenging.
Conventional TES devices use highly conductive leads (usually copper) between the stimulator and the surface electrodes. For the TES device that is most commonly used in combination with MRI (DC-STIMULATOR MR, neuroCare Group GmbH, München, Germany), most of the cable is realized as a twisted-pair cable, and 5 kΩ safety resistors are added to each of the two leads to limit the length of highly conductive material near the scanned subject (Fig. 1a). This design improves safety but prevents an optimal cable orientation for MRCDI and MREIT experiments. Because the device supplies a maximal output voltage of 30 V, the safety resistors also limit the maximum possible stimulation current to around 2 mA. Most TES studies so far have used currents up to 2 mA, but there is recent interest in exploring higher current strengths of up to 4 mA to increase efficacy [16–20]. The safety resistors limit the use of TES-fMRI studies to characterize the physiological effects of higher TES currents.
The aim of this work is to redesign the leads for combined MR and current injection experiments to remove the above restrictions. Specifically, the goal was to develop leads that allow long straight wire paths parallel to the static magnetic field and support stimulation currents of up to 4 mA with the existing stimulator, while not compromising safety. Instead of using highly conductive materials and local safety resistors, we propose to use a distributed resistance by having leads with much lower conductivity, while also having a lower overall total impedance. Carbon fiber leads are routinely used for EEG-MRI and are reported to decrease the specific absorption rate (SAR) compared to copper leads [21]. Here, we extend this approach to TES and further minimize the risk of antenna effects by using an even less conductive silicone rubber material (σ = 29.4 S/m) for the lead wires. Numerical methods have previously been used to estimate SAR for combined EEG-MRI studies [21–23], as well as for TES-MRI experiments [24,25].
In this study, we use both numerical simulations to estimate SAR as well as experimental temperature measurement. We first simulate a worst-case antenna effect at 298 MHz to investigate the relationship between the antenna effect and conductivity. Secondly, we simulate two electrode types with various lead configurations to ensure safety at both 128 MHz and 298 MHz, corresponding to the proton Larmor frequencies at 3T and 7T magnetic field strength. Lastly, temperature measurements are performed on the electrodes and leads made in-house during in vivo MRI at both field strengths.
Electrode and lead design
We constructed two commonly used TES electrode types in-house: 1) the circular electrode commonly used for non-focal stimulation in TES or for MRCDI and MREIT (Fig. 1b), and 2) the center-surround electrode used for focal stimulation in TES experiments [26] (Fig. 1c). Both types are 3 mm thick. The circular electrodes are 5 cm in diameter. The center-surround electrodes have an outer ring with outer and inner diameters of 10 cm and 8 cm, respectively. The diameter of the center electrode is 3 cm. For all electrodes, a 90 cm silicone rubber strip with a cross-sectional area of 10 mm² was cut out and used as the lead wire. Both the electrodes and the leads are made from silicone rubber (ELASTOSIL® R 570/60 RUSS, Wacker, Munich, Germany). The resistance of each lead wire is 2 kΩ ± 200 Ω. To ensure proper electrical connection and mechanical strength, the rubber leads are sewn onto the electrodes. The other ends of the leads are connected to copper leads with cable crimps. Medical-grade touch-proof safety connectors are connected to the copper leads (MS1525-B, Stäubli, Pfäffikon, Switzerland). A glass-fiber braided sleeving (GSS6, HellermannTyton, Crawley, Germany) is used for thermal and electrical insulation. The glass-fiber sleeving is also sewn onto the electrode and connected to the copper wire to relieve the silicone rubber lead of any strain. Ten20 conductive EEG paste (D.O. Weaver and Co., Aurora, CO, USA) is used between the electrodes and the abraded skin to ensure proper connection.
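As a rough consistency check on this lead design, the DC resistance of a uniform conductor follows R = L/(σA). The sketch below evaluates this with the nominal values given above; it is only an order-of-magnitude estimate, and the measured value quoted above (2 kΩ ± 200 Ω) will additionally reflect contact resistances and geometric tolerances.

```python
# Order-of-magnitude estimate of the silicone-rubber lead resistance, R = L / (sigma * A).
# Nominal values from the text; the measured leads are about 2 kOhm +/- 200 Ohm.
sigma = 29.4        # conductivity of the silicone rubber, S/m
length = 0.90       # lead length, m (90 cm)
area = 10e-6        # cross-sectional area, m^2 (10 mm^2)

resistance = length / (sigma * area)
print(f"Estimated lead resistance: {resistance / 1e3:.1f} kOhm")  # ~3.1 kOhm, same order as measured
```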
Simulations
Finite-difference time-domain (FDTD) simulations were performed in Sim4Life (ZMT, Zurich, Switzerland) to obtain specific absorption rate (SAR) results. Simulations were performed with 128 MHz and 298 MHz harmonic excitations. All simulations ran until convergence at −30 dB, tested for steady state on the lumped elements and sources of the RF coils.
Phantom
The heterogeneous male body model Duke from the IT'IS foundation was used in the simulations [27]. The head was positioned at the centers of the birdcage coils in all simulations. 2 mm isotropic resolution was used for Duke's head and shoulders and 4 mm for the torso. The rest of the body was segmented according to the automatic gridding produced by Sim4Life. This was done to reduce simulation time while still allowing sufficient current flow to obtain accurate simulation results [28].
RF coils
For the 128 MHz simulations, a generic body coil was used (Fig. 3). Although the proton Larmor frequency of the scanner used in the experiments is 123 MHz, the small difference in frequency will have minimal influence on the results. The 298 MHz coil (Fig. 2a) is a model of a transmit head coil [29] (7T volume T/R, Nova Medical, Wilmington, MA). Both coils are 16-rung high-pass birdcage coils. The dimensions are given in Table 1. The coils have two input ports on the superior end-ring, 90° apart and shifted 45° relative to the body model. The coils were iteratively tuned to the respective frequencies while loaded with Duke, with the head placed in the centers of the coils. The coils were driven in quadrature mode with equal input power on both ports. For coil model validation, see S2 in the supplementary materials.
SAR evaluation
SAR is a measure of the RF power absorbed by the tissue and is given by

SAR = σ|E|² / (2ρ)    (1)

where σ is the tissue conductivity, ρ is the density of the tissue, and E is the peak electric field inside the tissue. According to the international guideline IEC 60601-2-33 [30], SAR is limited during MRI to avoid excessive heating of a subject due to absorbed RF power. The two relevant limits for head MRI are the head SAR (SAR averaged over the mass of the head) and the local head SAR, given as the peak spatial-average SAR over 1 g or 10 g of tissue.
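For illustration, Eq. (1) can be evaluated directly from tissue properties; the sketch below is a minimal example with illustrative numbers, not part of the Sim4Life pipeline.

```python
import numpy as np

def point_sar(sigma, rho, e_peak):
    """Point SAR from Eq. (1): SAR = sigma * |E|^2 / (2 * rho), for a peak E-field.

    sigma  : tissue conductivity, S/m
    rho    : tissue density, kg/m^3
    e_peak : peak electric-field magnitude, V/m (scalars or NumPy arrays)
    """
    return sigma * np.abs(e_peak) ** 2 / (2.0 * rho)

# Illustrative values for a muscle-like voxel (not simulation results)
print(point_sar(sigma=0.7, rho=1090.0, e_peak=50.0))  # ~0.8 W/kg
```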
To evaluate the influence that the electrodes and leads have on SAR, we compare the head SAR and the 1 g local head SAR of a reference simulation to those of simulations that include electrodes. The head SAR and 1 g local head SAR ratios are expressed as

R_m = SAR_head / SAR_head,ref    (2)

R_1g = SAR_1g / SAR_1g,ref    (3)

SAR is compared for 1 W radiated power as well as for a calibrated B1 field. The input power P for each simulation is scaled such that the average amplitude of B1 in the center slice of the coil is the same for all simulations, i.e. P = P_ref (B1,ref / B1)², where P_ref is the input power for the reference simulation, set to 1 W, B1,ref is the average B1 amplitude in the center slice of the reference simulation, and B1 is the average B1 amplitude of the corresponding simulation before normalization. The electrodes, leads, and gel are excluded when R_m and R_1g are calculated, so that only tissue SAR enters the averaging. By excluding the electrodes from the R_m and R_1g calculations, these values express only the changes in tissue SAR caused by adding electrodes and not the power loss in the electrodes and the gel. Temperature measurements are used to ensure that the heating caused by the power loss in the electrodes and gel is within the regulation limits [30].
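A minimal sketch of the ratio and B1-normalization bookkeeping described above is given below; the variable names and numbers are placeholders, not simulation outputs.

```python
def sar_ratios(sar_head, sar_1g, sar_head_ref, sar_1g_ref, b1_mean, b1_ref):
    """Head SAR and 1 g local head SAR ratios, for 1 W input and after B1 normalization.

    Rescaling the input power so the mean center-slice B1 matches the reference,
    P = P_ref * (B1_ref / B1)^2, scales SAR by the same factor (SAR is
    proportional to the input power).
    """
    r_m = sar_head / sar_head_ref
    r_1g = sar_1g / sar_1g_ref
    scale = (b1_ref / b1_mean) ** 2        # power (and hence SAR) scaling factor
    return {"R_m": r_m, "R_1g": r_1g, "R_m_norm": r_m * scale, "R_1g_norm": r_1g * scale}

# Placeholder example (both simulations run at 1 W input power)
print(sar_ratios(sar_head=0.095, sar_1g=0.48, sar_head_ref=0.10,
                 sar_1g_ref=0.50, b1_mean=0.97e-6, b1_ref=1.00e-6))
```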
Antenna effect simulations
To examine the relationship between the antenna effect and the conductivity of the lead wires, a worst-case simulation was performed with the lead wires parallel to the z-direction, with one end connected to the circular electrodes and one end in free space, as seen in Fig. 2a. This simulation was only performed at 298 MHz, as the antenna effect becomes an increasing problem at higher frequencies. The simulations were performed with lead lengths varying from 0 to 100 cm in 10 cm increments, including 25 cm and 75 cm as they are approximately ¼ and ¾ of the wavelength in air at the proton Larmor frequency of 298 MHz. Two conductivities were used for all the incremental lead lengths, namely 5.8 × 10⁷ S/m for copper and 29.4 S/m for silicone rubber, with constant cross-sectional area. For 25 cm (the worst-case length), multiple conductivities were simulated in logarithmic increments from 10² S/m to 10⁷ S/m. The average power dissipation on the electrodes is used as a measure of the severity of the antenna effect.
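The swept lead lengths are tied to the free-space RF wavelength at 298 MHz; the short sketch below reproduces that arithmetic and the sweep values as a stand-alone illustration, not the Sim4Life setup itself.

```python
import numpy as np

C0 = 3.0e8                       # speed of light, m/s
freq = 298e6                     # proton Larmor frequency at 7T, Hz
wavelength = C0 / freq           # ~1.01 m in air
print(f"lambda/4 ~ {100 * wavelength / 4:.1f} cm, 3*lambda/4 ~ {100 * 3 * wavelength / 4:.1f} cm")

# Lead lengths in cm: 0-100 cm in 10 cm steps, plus the ~1/4 and ~3/4 wavelength points
lengths_cm = sorted(set(range(0, 101, 10)) | {25, 75})

# Conductivities for the worst-case 25 cm lead: logarithmic steps from 1e2 to 1e7 S/m,
# bracketed by silicone rubber (29.4 S/m) and copper (5.8e7 S/m)
conductivities_s_per_m = [29.4] + list(np.logspace(2, 7, 6)) + [5.8e7]
print(lengths_cm)
print(conductivities_s_per_m)
```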
Realistic lead configuration simulations
For the realistic lead configurations, simulations were performed for the center-surround and circular electrodes at both 128 MHz and 298 MHz. Four lead configurations were simulated, as seen in Fig. 3. A right-left and an anterior-posterior montage were simulated for the circular electrodes (Fig. 3a and b). The center-surround electrodes were only simulated for a right-left montage (Fig. 3c). The lead configurations in Fig. 3a–c show the intended use cases. In addition, the lead configuration in Fig. 3d was also simulated to ensure that misplacing the leads will not have critical consequences due to the higher E-field close to the RF coil. All three electrode montages were also simulated without leads for both magnetic field strengths. The electrodes and gel are shown in Fig. 2b and c. The add-on subgrid feature using the Acceleware GPU solver (Acceleware, Calgary, Canada) in Sim4Life was used to obtain a fine resolution for the electrodes, gel and leads while keeping the same grid size in the rest of the simulation space. An isotropic grid size of 0.5 mm was used for the electrodes, gel and leads. With this resolution, the smallest structure in any direction is at least 4 times the grid size.
The conductivity σ and the relative permittivity ε_r were assigned to both the electrode and gel materials in the simulations. The leads were terminated with an equivalent resistor representing the output impedance of the combined copper cable, filter, and stimulator (DC-STIMULATOR MR, neuroCare Group GmbH, München, Germany). The output impedances at the relevant simulation frequencies were determined with a vector network analyzer (VNA). Since the leads are 90 cm long, the equivalent resistors are far outside the effective exposure volume of the coils, as seen in Fig. 3.
Experimental setup
Experiments were performed on 3T (MAGNETOM Prisma; Siemens Healthcare, Erlangen, Germany) and 7T (Achieva; Philips Healthcare, Best, The Netherlands) whole-body MRI scanners. Two senior researchers involved in the project were scanned in the experiments. Informed consent was obtained from the participants prior to the MR scans. The touch-proof safety connectors on the electrode leads were connected via a Biopac MECMRI-1 cable to the Biopac MRIRFIF pi filter (BIOPAC Systems, Goleta, USA). The filter reduces noise from the outside and is located in a panel between the scanner room and the control room. For stimulation, a neurostimulator will be connected to the filter on the control room side. The neurostimulator was not used in the experiments as the output impedance of the copper cable and filter remained the same independent of the stimulator. For safety assessment of electrodes and leads, temperature measurements were performed in the 3T and 7T scanners. Image quality assessment and imaging of the leads for stray field correction in MRCDI was performed at 3T. See S1 in the supplementary material for further details and results.
Temperature measurements
For temperature measurements at 3T, the built-in birdcage body coil was used as the transmit coil while at 7T, a birdcage head coil was used for excitation (7T volume T/R, Nova Medical, Wilmington, MA). Fiber-optic probes (Opsens Solutions, Quebec City, Canada) were used to measure the temperature. Four probes were available. The probes were placed in the gel between the electrode and the scalp at various locations indicated in Table 3 and Fig. 6a and b. When a reference probe was used, it was taped to the top of the head of the subject and insulated with a pad to better imitate the scenario of the other probes.
A Rapid Acquisition with Relaxation Enhancement (RARE) sequence was used at both field strengths to obtain a high SAR for the temperature measurements. The sequence parameters were adjusted so that the scanner reported approximately 100% of the SAR limit. At 3T, the RARE sequence parameters were repetition time T_R = 175 ms, echo time T_E = 100 ms, refocusing tip angle = 180°, echo train length = 15, image matrix 512 × 512 × 27, and resolution 0.43 × 0.43 × 5.2 mm³. At 7T, T_R = 3584 ms, T_E = 47.54 ms, echo train length = 9, image matrix 768 × 768 × 33, resolution 0.28 × 0.28 × 3 mm³, and a varying refocusing tip angle. A Pseudo-Continuous Arterial Spin Labeling (pCASL) sequence (T_R = 4100 ms, T_E = 18 ms, excitation tip angle = 90°, image matrix 73 × 73 × 60, resolution 3 × 3 × 4 mm³, tag duration/tag delay = 1500/1800 ms, tag pulse angle = 24°, and tag gradient strength = 7 mT/m) was also used at 3T, as it is a relatively high-SAR sequence that will potentially be used for TES-MRI studies. About 50% SAR was reported for the pCASL sequence. The sequences ran for 20 min to acquire sufficient data to accurately model the temperature increase and find the steady-state temperature. The model used is

T(t) = T_ss − ΔT · e^(−t/t_C)    (4)

where T(t) is the temperature at time t, T_ss is the steady-state temperature, ΔT is the difference between the start and steady-state temperature, and t_C is the time constant of the exponential term.
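Fitting Eq. (4) to a probe trace is a simple nonlinear least-squares problem; the sketch below shows one way to do it with SciPy, using synthetic placeholder data rather than the actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def temp_model(t, t_ss, delta_t, tau):
    """Eq. (4): exponential approach to the steady-state temperature T_ss."""
    return t_ss - delta_t * np.exp(-t / tau)

# Synthetic placeholder probe data: time in minutes, temperature in deg C
t_min = np.arange(0.0, 21.0, 1.0)
temp = 37.5 - 2.5 * np.exp(-t_min / 6.0) + np.random.normal(0.0, 0.05, t_min.size)

popt, _ = curve_fit(temp_model, t_min, temp, p0=(37.0, 2.0, 5.0))
t_ss, delta_t, tau = popt
print(f"T_ss = {t_ss:.2f} degC, dT = {delta_t:.2f} degC, tau = {tau:.1f} min")
```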
Stray field comparison
An MRCDI experiment with 1 mA current injection was performed to illustrate the change in the stray fields from the leads when the improved lead configuration is used. The lead configuration seen in Fig. 1a and our proposed configuration seen in Fig. 3a were compared. See Göksu et al. [10] for further details on the MRCDI method used. The imaging of the silicone rubber used for cable tracking is presented in supplementary material S1.
Antenna effect
Simulations with varying copper lead lengths showed that the antenna effect occurs at odd multiples of ¼ of the RF wavelength, with ¼ being worse than ¾ (Fig. 4a). The same simulations with low-conductivity silicone rubber showed that the antenna effect is eliminated with this material (Fig. 4a). Further investigation of the relationship between the antenna effect and conductivity at the worst-case length (25 cm) is shown in Fig. 4b. Carbon fiber leads with a conductivity of 6.1 × 10⁴ S/m [31] only reduce the severity of the antenna effect by about 25%, while low-conductivity silicone rubber robustly prevents the occurrence of an antenna effect.
SAR for realistic lead configurations
SAR simulation results for realistic lead configurations are presented in Fig. 5 and Table 2. In Fig. 5, the 1 g local head SAR for all three electrode montages with center leads (see Fig. 3a–c) is shown. Adding electrodes, gel, and leads to Duke changes the spatial variation of SAR, especially around the electrodes, but the 1 g local head SAR is, for all simulations, in the same location as for the reference simulation. For the 128 MHz simulations, the 1 g local head SAR occurs on the skin on the left side of the neck, while for 298 MHz it is in the cerebrospinal fluid. As seen in Table 2, only minimal changes to the B1 field and SAR occur at 128 MHz and 298 MHz with the circular electrodes, while the center-surround electrodes have more influence on the B1 field, and therefore a higher SAR after normalization.
Temperature measurements
One of the temperature measurements (indicated in Table 3), including the fitted model, is shown in Fig. 6c. The modeled steady-state temperatures for all the measurements are listed in Table 3. The probe positions indicated in Table 3 are presented in Fig. 6a and b for the center-surround and circular electrodes, respectively.
For the circular electrodes at 3T, the highest temperature was observed on the posterior electrode of the anterior-posterior montage, at 37.6 °C compared to 35 °C for the reference probe. For the right-left montage, the maximum temperature on the electrodes was only 1 °C higher than for the reference probe.
The center-surround electrodes at 3T showed the highest temperature increase (max 38.5 °C), with no observable difference between electrode pads with and without leads. Off-center leads also did not give rise to higher temperatures. For the pCASL sequence, the steady-state temperature was about 1 °C lower than for the RARE sequence in the same session, marked with an asterisk in Table 3.
Only a very limited temperature increase was found for all measurements at 7T. The highest measured difference between the reference probe and a probe on the electrodes was 0.6 °C.
Stray field comparison
Fig. 7b and e show the fields from the leads, calculated with the Biot-Savart law using the tracked lead locations seen in Fig. 7a and d. Fig. 7c and f show the measured current-induced fields in the MR scanner. The measured fields arise both from currents flowing in the leads and from currents in the subject's tissue. Comparing Fig. 7b and c makes clear that ΔB_zc is dominated by stray fields from the leads. With our optimized lead configuration using the silicone rubber leads (Figs. 3a and 7d), where the leads are aligned in the z-direction, the stray field from the leads is greatly reduced (Fig. 7e), and the measured ΔB_zc is instead dominated by tissue currents rather than by lead stray fields.
Discussion
TES electrodes with copper leads pose a potential danger to the subject when used during an MRI session. Due to the coupling between the RF field of the scanner and the highly conductive leads, burns of the subject's scalp can occur unless appropriate measures are taken, such as adding safety resistors in a well-considered way. To minimize coupling between the RF field and the leads in general, we propose to use leads made with a low-conductivity material. By that, we gain flexibility to optimize the leads for the intended applications while ensuring safety. Additionally, this makes it easier to safely design more complex electrode configurations with multiple leads, such as the 4x1 montage [2] used for focal stimulation in TES-fMRI experiments. These electrodes and leads can relatively easily be constructed in-house from sheets of conductive silicone rubber.
In simulations, the antenna effect was found for odd multiples of ¼ RF wavelength (Fig. 4a), which is in agreement with previously reported experimental results [15] with the same boundary conditions. The antenna effect is often believed to occur at ½ RF wavelength only, but as pointed out by Balasubramanian et al. [15], this depends on the boundary condition of the leads. With low impedance at one end and high at the other, it occurs at ¼ RF wavelength, whereas with the same boundary condition at each end, e.g. immersed in tissue, antenna effect occurs at ½ RF wavelength.
Simulation results with varying lead conductivity at worst-case length (Fig. 4b) prove that the antenna effect will not occur for silicone rubber with lead conductivity at 29.4 S/m. This increases the flexibility of the lead configuration and improves the experimental setup for MRCDI experiments. Additionally, no safety resistors are needed, which decreases the overall resistance of the leads compared to the conventional setup and allows for higher stimulation currents. This enables the use of increased stimulation current to study immediate and after-effects on BOLD activity using a standard stimulator. Also, safety resistors enforce nodes in electromagnetic waves, which may cause high local fields causing heating in nearby material and even resistor damage [24]. Careful design is needed to limit these effects and ensure appropriate distance from tissue. The simulations also show that carbon leads, which are often used as a safer alternative to copper leads, only reduce the severity of the antenna effect in our simulations by about 25%. Therefore, using carbon leads can provide a false sense of safety and has to be considered carefully for each specific case. Previous simulation work on a 256-electrode EEG cap has shown consistent results by comparing peak local SAR for varying lead conductivities [22]. The authors reported a 6-fold increase in peak 1 g local head SAR for high conductivities (including carbon) and no increase for conductivities below 100 S/m. The study was not for a specific resonance condition as in our case. Our results may therefore also be relevant for the EEG-MRI community.
Only very limited changes were observed for R m and R 1g for all simulations with realistic lead configurations as seen in Table 2. In most cases the B 1 was slightly lower than B 1,ref due to the additional load on the coil when electrodes and leads were included. For 1 W input power, this is also reflected in the slightly lower SAR for some simulations, especially with leads. For the same reason, all simulations with leads have lower SAR and more influence on B 1 than simulation without leads for 1 W input power. Slight changes to the spatial distribution of the RF field caused by electrodes and leads can also influence R m and especially R 1g .
Overall, a very small change and mostly reduction in SAR is seen before B 1 normalization, while some increase in SAR is reported after normalizing. This is not seen as a problem since, if the scanner increases the input power, then the calculated SAR will be adjusted accordingly and the SAR safety limits will be reached earlier. In worst case, this will have an influence on the available ranges of sequence parameters, but not on safety.
Although the electrodes have an influence on local SAR values in the proximity of the electrodes, as seen in Fig. 5, the 1 g local head SAR close to the electrodes does not exceed the peak 1 g local head SAR already present in the reference simulation. The peak 1 g local head SAR was also, in all simulations, at the same location as for the reference simulations. Therefore, the local head SAR limits imposed by the scanner will still ensure conformance with the safety regulations. Additionally, in the 298 MHz simulations, the center-surround electrodes are located close to the locations of peak local head SAR without negative effect.
Fig. 5 (caption fragment): ... (Fig. 3a–c). Insets in the corners of g), h) and j) show the surface SAR around the electrodes.
Fig. 6. In a) and b), the numbers on the electrodes indicate the probe positions referred to in Fig. 6c and Table 3. c) Temperature measurement for the center-surround electrodes with center leads (Fig. 3c). L in the legend indicates a measurement on the left electrode. The black curves are the fitted models (Eqn (4)).
The highest measured temperature for the circular electrodes was on the posterior electrode. This is most likely not due to higher power dissipation on the electrode, but rather better thermal insulation as the head rests on the electrode and cushions.
At 3T, more heating was observed for the center-surround electrodes than for the circular electrodes. The heating was independent of lead position, as well as of whether leads were attached or not. In agreement with a previous study by Kozlov et al. [25], this indicates that the shape and size of the electrodes have a strong influence on heating. Therefore, care must be taken when designing new electrodes for TES-MRI experiments. The impedance around the ring of the center-surround electrode is about 300 Ω. A higher impedance would cause less heating, but this is a trade-off between heating and homogeneous stimulation currents for focal stimulation. It has to be pointed out that the heating observed is with a high-SAR RARE sequence, which is not a recommended sequence for TES-MRI experiments. Usually, low-SAR echo planar imaging (EPI) or gradient-echo sequences will be used, or in the worst case the pCASL sequence also tested in the temperature experiments. The pCASL sequence showed about 1 °C less heating than the RARE sequence. In contrast to our findings at 3T, a previous study reported much less heating on the center-surround electrodes [32]. The unspecified conductivity of the electrode material has a strong influence on the heating, but more importantly, only an EPI sequence was used for the heat test. Since EPI used for fMRI experiments is usually a very low-SAR sequence at 3T, insignificant heating would be expected.
At 7T no considerable heating was observed. This is attributed to the fact that SAR is already higher at 7T for similar sequences, and the input power is therefore more restricted compared to 3T.
Although noticeable heating was measured on the center-surround electrodes at 3T, the temperatures were always lower than the limit imposed by the international guideline IEC 60601-2-33 [30], which states that the maximum tissue temperature has to be limited to 39 °C.
To further ensure safety and reduce the risk of resistor damage, the conventional electrodes and leads used in this work are limited to use with EPI sequences; the manufacturer requires removal of the cables when other sequences are used. Under the conditions evaluated, this is not necessary with the low-conductivity silicone rubber leads, since no safety resistors are used and safety tests have been performed with high-SAR sequences.
Table 2. Head SAR and local head SAR (1 g average) ratios (R_m and R_1g) for both field strengths and all simulated electrode montages, compared to the reference simulations. SAR is given for 1 W input power as well as normalized to the B1 field in the center slice of the coil. The ratio between B1,ref and B1 for the corresponding simulation is used for normalization. All local head SAR maxima were at the same location as for the reference simulation.
Table 3. Modeled steady-state temperature from all measurements with various setups and probe positions. The numbers for the positions are indicated in Fig. 6a and b, and L, R, A and P refer to the electrode position (left, right, anterior and posterior). Ref is the reference probe on top of the head, away from the electrodes. For the RARE sequence the reported SAR was approximately 100%, and for pCASL approximately 50%, varying slightly between subjects. The asterisks (*) indicate the same scan session for a RARE and a pCASL sequence with electrodes and temperature probes in the same locations. The double asterisks (**) indicate the data shown in Fig. 6c.
The rubber leads being visible on MR recordings without adding additional material is a practical benefit when doing stray field correction in MRCDI [9] (see supplementary material S1). Conventional copper leads and their insulation are not visible on MRI, and therefore additional material needs to be attached around the leads before use. Attention must be paid, however, to image distortions caused by chemical shift, gradient non-linearity, and concomitant fields.
The effect of the stray field on the measured ΔB_zc, demonstrated in our MRCDI experiment, necessitates correction by applying the Biot-Savart law [9,10]. However, any errors and inaccuracies in the lead position estimation will have far less influence for the optimized cable configuration made possible by using silicone rubber leads.
Additionally, the lead location cannot be tracked during the ΔB_zc measurements themselves. Therefore, any movement during the experiment will have a detrimental effect on the stray field correction for the non-optimized setup.
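The stray-field correction amounts to a discretized Biot-Savart integral along the tracked lead path, whose z-component is then subtracted from the measured field maps. The sketch below illustrates that sum for a hypothetical straight lead; the geometry and current are illustrative only, not the actual tracked cables.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def stray_bz(path, current, points):
    """z-component of the field from a current-carrying lead via a Biot-Savart sum.

    path    : (N, 3) array of points along the tracked lead, metres
    current : lead current, amperes
    points  : (M, 3) array of positions where the field is evaluated, metres
    Returns an (M,) array of B_z values in tesla.
    """
    dl = np.diff(path, axis=0)               # lead segment vectors
    mid = 0.5 * (path[:-1] + path[1:])       # segment midpoints
    bz = np.zeros(len(points))
    for seg, m in zip(dl, mid):
        r = points - m                        # vectors from segment to field points
        r_norm = np.linalg.norm(r, axis=1)
        cross_z = np.cross(seg, r)[:, 2]      # z-component of dl x r
        bz += MU0 * current / (4.0 * np.pi) * cross_z / r_norm**3
    return bz

# Illustrative example: a straight 30 cm lead along x carrying 1 mA,
# evaluated 2 cm away in y at the lead's mid-plane
lead_path = np.column_stack([np.linspace(-0.15, 0.15, 301), np.zeros(301), np.zeros(301)])
voxel = np.array([[0.0, 0.02, 0.0]])
print(stray_bz(lead_path, 1e-3, voxel))  # ~1e-8 T, i.e. roughly 10 nT
```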
The electrodes and leads have a very limited and only superficial influence on the B1 and B0 maps. However, when calculating the signal-to-fluctuation-noise ratio for an EPI time series, image artifacts with long-range effects (several centimeters) were found when copper leads and safety resistors were used. These artifacts were not found with silicone rubber leads (see supplementary material S1).
Conclusion
We have proposed to use low-conductivity silicone rubber leads for current injection electrodes in the MR scanner. This eliminates the potential safety hazard that comes with the coupling between high-conductivity materials and the RF field, such as the antenna effect, which is not necessarily eliminated by carbon cables or safety resistors. For our setup, the simulations showed no increase in head SAR and local head SAR at either field strength. Additionally, no temperatures above the safety limits were recorded. Due to the increased flexibility of lead configurations, these electrodes offer an advantage for MRCDI experiments through the reduction of compromising stray fields. For TES-MRI experiments, the maximum stimulation current can be increased for voltage-limited stimulation devices due to the lower overall resistance.
Declaration of competing interest
There are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Cellular senescence contributes to age‐dependent changes in circulating extracellular vesicle cargo and function
Abstract Extracellular vesicles (EVs) have emerged as important regulators of inter‐cellular and inter‐organ communication, in part via the transfer of their cargo to recipient cells. Although circulating EVs have been previously studied as biomarkers of aging, how circulating EVs change with age and the underlying mechanisms that contribute to these changes are poorly understood. Here, we demonstrate that aging has a profound effect on the circulating EV pool, as evidenced by changes in concentration, size, and cargo. Aging also alters particle function; treatment of cells with EV fractions isolated from old plasma reduces macrophage responses to lipopolysaccharide, increases phagocytosis, and reduces endothelial cell responses to vascular endothelial growth factor compared to cells treated with young EV fractions. Depletion studies indicate that CD63+ particles mediate these effects. Treatment of macrophages with EV‐like particles revealed that old particles increased the expression of EV miRNAs in recipient cells. Transfection of cells with microRNA mimics recapitulated some of the effects seen with old EV‐like particles. Investigation into the underlying mechanisms using bone marrow transplant studies revealed circulating cell age does not substantially affect the expression of aging‐associated circulating EV miRNAs in old mice. Instead, we show that cellular senescence contributes to changes in particle cargo and function. Notably, senolytic treatment of old mice shifted plasma particle cargo and function toward that of a younger phenotype. Collectively, these results demonstrate that senescent cells contribute to changes in plasma EVs with age and suggest a new mechanism by which senescent cells can affect cellular functions throughout the body.
| INTRODUCTION
Aging is associated with a decline in the function of a number of organ systems which ultimately contributes to increased risk of developing age-related diseases (Lopez-Otin, Blasco, Partridge, Serrano, & Kroemer, 2013). Although much attention has been given to how aging affects tissue structure and function, less is known about how aging affects circulating factors. Investigation of circulating factors has been primarily focused on biomarker discovery for disease diagnosis and risk stratification. However, more recent studies have demonstrated that circulating factors that change with age can affect tissue homeostasis. In a landmark study, Conboy and colleagues demonstrated that exposure of aged mice to young circulating factors improved skeletal muscle progenitor cell function (Conboy et al., 2005). Since this initial discovery, there have been a number of studies demonstrating the ability of young circulating factors to improve the function of aged tissues (Conboy & Rando, 2012). However, there are also changes in the aged host's circulation which adversely affect tissue function/repair. For example, Rebo and colleagues demonstrated that a single blood exchange between young and old mice led to rapid inhibition of skeletal muscle regeneration in young mice (Rebo et al., 2016). Therefore, there is a need to better define the factors that change in the aged circulation to understand the pathophysiology of aging.
Extracellular vesicle (EV) is a broad term used to describe membrane encapsulated vesicles that range in size from 30 to 1,000 nm and arise from different modes of secretion. Extracellular vesicles are secreted by cell types throughout the body into the extracellular environment or bodily fluids such as the blood and urine where they carry a number of factors including microRNAs (miRNAs) and proteins (Alibhai, Tobin, Yeganeh, Weisel, & Li, 2018;Yanez-Mo et al., 2015). Changes in plasma EV content reflect changes in cellular function in a number of disease states. For example, changes in circulating EV cargo have been observed in diabetes, postmyocardial infarction, and in cancer (Jansen, Nickenig, & Werner, 2017;Schwarzenbach, 2015). Extracellular vesicles also play a role in inter-cellular communication as they are capable of transferring their cargo to influence recipient cell function. Given their potential for regulating cellular function, EVs have been suggested to be key mediators of aging (Robbins, 2017;Takasugi, 2018). Despite this interest, how plasma EVs change with age and the underlying mechanisms that contribute to these changes are poorly understood.
Here, we investigate the changes that occur in plasma EVs during aging. We examine how aging affects plasma EV concentration, size, cargo, and function. Using bone marrow transplant experiments, we investigate the role of aging circulating cells in regulating plasma EV miRNA expression. Furthermore, using in vitro and in vivo models of senescence as well as senolytic treatment of aged mice we examine the contribution of senescent cells to plasma EVs. Our study suggests a key role of cellular senescence in regulating circulating EV miRNA cargo and function in aged mice.
| Aging affects plasma EV concentration and size
Plasma EVs were isolated from young (3 month) and old (18-21 month) mice using size exclusion chromatography (SEC) which efficiently separates plasma particles from plasma proteins (Figure 1a). Based on the peak particle count and separation from plasma protein, fractions 7-10 were collected for experiments. Examination of isolated particles by transmission electron microscopy (TEM) revealed particles with diverse sizes (<300 nm in diameter) and morphologies, including the expected "cup-shaped" morphology in both preparations ( Figure 1b). Quantification of particle size from TEM images revealed a greater number of smaller particles in old versus young EV fractions (Figure 1c). To further quantify changes in particle size and concentration, we used nanoparticle tracking analysis (NTA) which measures the rate of particle Brownian motion in solution to determine size and uses the number of particles tracked to determine concentration. Nanoparticle tracking analysis revealed a significant reduction in plasma particle concentration and smaller mean particle size in old versus young EV fractions (Figure S1a-c).
A decline in particle concentration with aging has been previously shown with human plasma using a precipitation-based method (Eitan et al., 2017); thus, we next assessed whether old murine plasma also shows reduced particle count using this approach. Lower particle count in old versus young plasma was also observed using the precipitation method; however, the amount of co-isolated protein was substantially higher compared to fractions collected using SEC, suggesting reduced purity (Figure S1e-i). Thus, we used SEC-purified vesicles for this study.
As NTA cannot distinguish EVs from other particles such as lipoproteins, we characterized isolated plasma particles by flow cytometry and Western blotting. Surprisingly, examination of CD63, an EV marker, using flow cytometry revealed significantly increased fluorescence in old versus young EV fractions ( Figure S1d). Western blotting similarly demonstrated significantly greater levels of TSG101, CD81, and CD63 in old versus young EV fractions, suggesting increased EV levels in old plasma (Figure 1d,e). Assessment of plasma contaminants revealed that old EV fractions had significantly less APOA1, APOB-100, and APOB-48 compared with young fractions (Figure 1d,e). Reduced lipoprotein content may be responsible for the lower particle counts by NTA as chylomicrons which carry APOB represent a substantial number of plasma particles (Sodar et al., 2016). Albumin levels were similar between young and old EV fractions ( Figure S1i). Importantly, neither EV fractions contained organelle contaminants indicated by a lack of calnexin signal. Collectively, these data demonstrate that although circulating particle count is lower; EV levels are significantly elevated in old plasma. As characterization of our EV fractions indicate that SEC leads to co-isolation of plasma factors, the collected fractions used are referred to as EV enriched fractions and particles as EV-like particles throughout this study.
| Old plasma EVs alter cellular responses to stimuli
Next, we investigated whether aging affects the function of isolated plasma EV enriched fractions. The functional effects of young and old EV fractions were tested using two cell types that interact with circulating EVs, macrophages and endothelial cells. First, we examined how EV fractions affected the expression of activation markers in unstimulated macrophages, including the polarization markers arginase 1 (Arg1), mannose receptor C-type 1 (MRC1), transforming growth factor β-1 (Tgfβ1), and inducible nitric oxide synthase (iNOS), as well as cytokine expression. Treatment of young unstimulated macrophages with young or old EV fractions increased the expression of MRC1 and Tgfβ1 compared to PBS-treated cells (Figure 2a and Figure S2). Although old EV fractions reduced IL1β expression compared to young EVs, the overall trend of basal gene expression was similar between young and old EV-treated cells, and many of these cytokines had low basal expression.
Figure 1. Aging alters plasma EV-like particle concentration and size.
While our plasma EV preparations were relatively free of protein contamination, we sought to demonstrate that the observed functional effects were due to EVs rather than contaminants. We therefore depleted CD63+ particles from the isolated fractions using anti-CD63-coated beads and compared the functional effects to fractions treated with IgG (control) beads. First, we confirmed the capture of CD63+ particles by the anti-CD63-coated beads and that IgG beads did not capture CD63+ particles (Figure S1j). Young macrophages were treated with IgG- or CD63-depleted EV fractions, after which the response to LPS was assessed. Cells treated with EV fractions incubated with IgG beads exhibited similar profiles as described above; old EVs increased Arg1, IL-10, MRC1, and Tgfβ1 expression and reduced IL-6 and iNOS expression compared to PBS-treated cells. However, after CD63 depletion, old EV fractions no longer stimulated the expression of Arg1, IL-10, MRC1, and Tgfβ1 or reduced IL-6 and iNOS expression, as expression was similar to PBS-treated cells (Figure 2h and Figure S5a). In contrast, the effects observed with young EV fractions were similar between IgG- and CD63-treated fractions, as both reduced the expression of pro-inflammatory cytokines compared to PBS-treated cells (Figure S5b). This indicates that the effects observed with young EV preparations may be due to other, non-EV co-isolated factors. Lastly, we examined whether depletion of CD63+ particles altered the ability of old EV fractions to stimulate macrophage phagocytosis. Cells treated with old EV fractions incubated with control beads exhibited an increase in phagocytic activity, consistent with our previous results. Following CD63 depletion, old EV fractions no longer stimulated phagocytosis, indicating that CD63+ particles act to increase macrophage phagocytosis (Figure 2i). CD63 depletion did not affect young EVs, as both IgG- and CD63-treated fractions did not affect phagocytic function (Figure S5c,d). Together, these data using depleted fractions indicate that CD63+ particles are the key effectors in old EV fractions.
| Aging alters plasma EV miRNA expression
To investigate whether aging impacts plasma EV-like particle cargo and whether this cargo may contribute to changes in function, we screened miRNA expression in young and old EV fractions. Overall, 65 miRNAs were detectable above threshold in young and old EV fractions, with 52 present in both, nine only in old EVs, and four only in young EVs (Table S2). Moreover, of the 52 miRNAs that were expressed in both young and old EVs, we identified nine miRNAs with significantly different expression between young and old EV fractions (Figure 3a and Figure S6a). We were interested in pursuing miRNAs that were elevated in old EVs, as this might help explain why old EVs acquire an inhibitory phenotype. Thus, we validated the differentially expressed miRNAs in a second cohort.
Figure 2. Aging alters plasma EV-like particle function. (a) Gene expression in young macrophages treated with plasma EVs for 24 hr; data are relative to PBS-treated cells (dashed line), n = 4-5/group.
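As an illustration of the kind of screen described above (detection thresholding followed by a per-miRNA comparison between groups), a minimal sketch is given below. The Ct threshold, the normalization-free fold-change calculation, and Welch's t-test are assumptions chosen for illustration, not the study's exact analysis pipeline.

```python
import numpy as np
from scipy import stats

def differential_mirnas(ct_young, ct_old, detect_ct=35.0, alpha=0.05):
    """Screen qPCR Ct values for miRNAs differing between young and old EV fractions.

    ct_young, ct_old : dicts mapping miRNA name -> list of Ct values per sample.
    A miRNA counts as detected only if its mean Ct is below `detect_ct` in both groups.
    Fold change (old vs. young) uses 2^(Ct_young - Ct_old); lower Ct means higher expression.
    Significance is assessed with Welch's t-test (an assumption, not the study's exact test).
    """
    hits = {}
    for mirna in set(ct_young) & set(ct_old):
        y, o = np.asarray(ct_young[mirna], float), np.asarray(ct_old[mirna], float)
        if y.mean() > detect_ct or o.mean() > detect_ct:
            continue                                   # below detection threshold
        fold_change = 2.0 ** (y.mean() - o.mean())
        p_value = stats.ttest_ind(y, o, equal_var=False).pvalue
        if p_value < alpha:
            hits[mirna] = (fold_change, p_value)
    return hits

# Hypothetical example: miR-21 enriched in old EV fractions
print(differential_mirnas({"miR-21": [29.5, 29.8, 30.1, 29.6]},
                          {"miR-21": [27.2, 27.8, 27.5, 27.1]}))
```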
| Effect of bone marrow (BM) and circulating cell age on plasma EV miRNA expression
Plasma EVs have been suggested to arise, in part, from circulating cells (Wen et al., 2017). Moreover, miR-21 and miR-146a are classically associated with inflammation (Mann et al., 2017; Sheedy, 2015); thus, changes in miRNA expression may reflect increased activation of circulating cells in old mice. We therefore examined whether the age of circulating cells contributes to the aging EV miRNA signature using bone marrow transplant (BMT) experiments in which aged mice were reconstituted with young (YO) or old (OO) BM.
Several miRNAs were differentially expressed in plasma EVs between YO and OO mice (Figure 4d); however, these differences appear to be restricted to animals undergoing reconstitution, as old mice did not exhibit changes in the expression of these miRNAs compared to young mice (Figure S8a). Interestingly, EV miRNA markers that were elevated with aging remained elevated in old reconstituted mice compared to young mice, regardless of BM donor age (Figure 4g). It is possible that old mice have surpassed some threshold and that reconstitution with young BM does not affect EV miRNA expression. Therefore, we reconstituted young mice with either young (YY) or old (OY) BM and assessed plasma EV fractions 3 months later (Figure 4h). We found a significant effect of recipient mouse age, as expression of miR-21 was significantly lower in YY and OY mice compared with YO and OO mice (Figure 4h). Moreover, miR-223 expression was partially affected by donor age, as levels in YY mice were significantly lower compared to OO mice, whereas OY levels were not.
In contrast, miR-146a and let-7a expression was comparable among all four groups. This suggests that the transplant procedure may also impact the expression of these miRNAs, as expression in young mice was similar to old mice following reconstitution. To further determine whether circulating cells secrete EVs with different miRNA expression related to aging, we assessed young and old peripheral blood mononuclear cells in vitro. Consistent with our findings, expression was similar in EVs isolated from young and old PBMCs ( Figure S8b). Collectively, these data demonstrate that although changes in miRNA expression were observed between YO and OO mice, these changes appear to be restricted to mice undergoing BMT. Moreover, while these data support that the recipient and donor age can influence EV miRNA expression these differences were relatively small, and donor age effect was limited to only miR-223. Therefore, donor age and by extension circulating cell age appear to have a minor effect on the miRNAs assessed in this study.
| Induction of senescence leads to changes in circulating EV miRNA expression
In light of these findings, we next examined whether other cellular sources contribute to changes in plasma EVs with aging. Senescent cells accumulate in multiple organs, where they contribute to the development of age-associated diseases. Therefore, we used sub-lethal total body irradiation (TBI) as an in vivo model of global senescence; senescence induction was confirmed by increased p16 and p21 expression in the liver and lung (Figure 5a,b), and plasma particle concentration was reduced following TBI (Figure 5c). Mean particle size was also reduced at 2 months but returned to control levels by 4 months (Figure S9a). Assessment of plasma EV levels using flow cytometry and Western blotting at 4 months post-TBI revealed significantly greater CD63 and CD81 levels, respectively, in TBI versus control mice (Figure 5d-f). Next, we examined whether the induction of senescence altered the expression of the plasma EV-like particle miRNAs identified in our analysis of young and old mice. Indeed, expression of miR-146a, miR-21, miR-223, and let-7a exhibited time-dependent increases, with significantly higher expression in 4-month TBI mice compared to control mice (Figure 5g). In order to further establish that cellular senescence leads to changes in EV miRNA expression, we examined EV production by senescent cells in vitro. Human dermal fibroblasts (HDFs) were irradiated at 20 Gy and cultured for 7 days, and senescence was confirmed by increased β-galactosidase staining (Figure S9c) as well as increased expression of p21, IL6, and Ccl2 compared to control cells (Figure S9d). After 7 days, senescent cells were transferred to serum-free media to generate conditioned media, and EV secretion and cargo were assessed. Similar to what was observed in vivo, senescent HDFs exhibited a significant increase in EV secretion compared to control cells (Figure 5h). However, induction of senescence in vitro led to a significant increase in mean EV size compared to control cells (Figure 5h). Examination of EV miRNA expression revealed that senescent HDFs secrete significantly greater amounts of EV-associated miR-146a, miR-21, and let-7a compared to control cell EVs (Figure 5i). Surprisingly, miR-223 was dramatically reduced in senescent cell EVs compared with control cell EVs.
Examination of cellular miRNA expression revealed that induction of senescence in HDFs leads to significantly increased cellular expression of mir-146a, miR-21, and let-7a compared to control cells ( Figure 5j). Expression of miR-223 was not different between senescent and control cells, suggesting that miR-223 sorting mechanisms may be altered in senescent cells. Collectively, these results demonstrate that induction of cellular senescence alters EV secretion and cargo in vivo and in vitro.
| D + Q therapy reduces expression of senescence cell-associated miRNAs in old plasma EVs
In order to establish that senescent cells contribute to plasma EVs with aging, aged mice were treated with an established senolytic combination therapy (Zhu et al., 2015), dasatinib and quercetin (D + Q), or vehicle, biweekly for 2 months. Successful treatment was confirmed by reduced liver and lung p16 and p21 mRNA expression in D + Q-treated mice compared to vehicle-treated mice (Figure 6a).
Figure 4. Effect of bone marrow (BM) and circulating cell age on EV miRNA expression. (a) Quantification of young and old reconstitution in aged mice, n = 8/group. (b) Representative flow cytometry images of donor CD45/GFP+ cells in the blood. (c) Principal component analysis of miRNA expression in plasma EVs from young, old, YO and OO mice. (d) Differentially expressed miRNAs in plasma EVs from YO and OO mice identified by miRNA qPCR array, n = 4/group, *p < .05. (e) Bioinformatics analysis of the molecular functions of miRNAs differentially expressed between YO and OO mice and (f) predicted KEGG pathways targeted. (g) Expression of age-associated miRNAs in YO and OO plasma EVs. Groups are relative to young plasma EV miRNA expression (dashed line), n = 6/group, †p < .05 versus young plasma EVs. (h) Plasma EV miRNA expression in YY, OY, YO, and OO mice, n = 4-6/group, *p < .05 versus YO and OO, and ††p < .05 versus OO. All values are mean ± SEM.
Figure 5. Induction of cellular senescence alters EV secretion and cargo. Expression of p16 and p21 mRNA in the (a) liver and (b) lung of control, 2 and 4 month post-TBI mice, n = 4-6/group, *p < .05 versus control, **p < .05 versus all groups. (c) Quantification of plasma particle concentration by NTA, n = 4-7/group, *p < .05 versus control.
Examination of plasma particles revealed that D + Q-treated mice had a significantly greater mean particle size compared to vehicle-treated mice (Figure 6b and Figure S9e). D + Q treatment did not affect overall particle concentration (Figure S9f). Examination of EV levels revealed that D + Q treatment did not affect EV levels, as CD81 and CD63 expression were similar between vehicle and D + Q mice when examined by Western blotting and flow cytometry, respectively (Figure 6c,d). Comparison of EV-like particle miRNA expression revealed that D + Q treatment significantly reduced the expression of miR-146a, miR-21, let-7a, and miR-223 compared to vehicle-treated mice (Figure 6e). This does not appear to be a generalized effect on EV miRNAs, as miR-145, another miRNA which changed with age, was not affected by D + Q treatment (Figure S9g).
Changes in plasma EV-like particle miRNAs may lead to changes in cellular miRNA expression. Since myeloid cells are major targets of circulating EVs (Akbar et al., 2017), we compared young and old splenic CD11b+ cells (CD11b is a myeloid cell marker). CD11b+ cells isolated from old mice exhibited significantly greater expression of miR-146a, miR-21, miR-223, and let-7a compared to young CD11b+ cells (Figure 6g).
Next, we determined whether the reduction in plasma EV-like particle miRNA levels in D + Q-treated mice was associated with changes in CD11b+ cell miRNA expression. Consistent with the changes at the plasma level, D + Q-treated mice exhibited a significant reduction in miR-146a, miR-21, miR-223, and let-7a compared to vehicle-treated mice (Figure 6h), although levels were still elevated above young levels (Figure S9h).
Lastly, we investigated whether the reduced expression of senescence cell-associated miRNAs in plasma EV-like particles from D + Q-treated mice was accompanied by a shift in particle function toward a younger phenotype.
| DISCUSSION
Circulating plasma vesicles interact with a number of cell types including cells in the bone marrow, liver, and spleen. Although the exact physiological role of plasma EVs is poorly understood, one possible mode of action is that the continuous interaction with recipient cells influences cellular function. Here, we demonstrate that old mice exhibit increased levels of plasma EVs. We also show that miRNAs are differentially expressed in young and old EV-like particles, and that old EV-like particles or the miRNAs they carry can alter macrophage responses to LPS, increase macrophage phagocytosis, and reduce endothelial cell responses to VEGF. We show that old plasma particles have greater expression of miR-146a, miR-21, miR-223, and let-7a, and treatment of macrophages with old EVs leads to increased cellular expression in recipient cells, whereas young particles do not. Mechanistically, these miRNAs are predicted to target key signal transduction factors in the MAPK (Cheng et al., 2013) and AKT (Meng et al., 2007) pathways which play important roles in regulating cellular responses to stimuli.
Although changes in these miRNAs occur in immune cell aging, we find that circulating cells do not significantly contribute to changes in these miRNAs, as reconstitution of aged mice with young BM cells did not reduce the expression of age-associated miRNAs.
Alternatively, we identify a role of cellular senescence in regulating circulating EV miRNA levels.
The accumulation of senescent cells plays a major role in the development and progression of age-related pathologies (Robbins, 2017;Takasugi, 2018). This is due in part to the senescent-associated secretory profile (SASP) acquired by senescent cells. More recent descriptions of the SASP have been expanded to include the secretion of EVs (Robbins, 2017;Takasugi, 2018). Extracellular vesicles derived from senescent cells exhibit multiple changes in cargo compared to healthy cells including changes in protein (Takasugi et al., 2017), and miRNA abundance (Terlecki-Zaniewicz et al., 2018). To date, studies have focused on changes in EV cargo and functions in vitro. Using in vitro and in vivo senescence models, we find that the induction of senescence leads to increased expression of miR-146a, miR-21, and let-7a in secreted EVs. We further demonstrate a role of senescent cells in regulating plasma particle miRNA expression using senolytic treatment of aged mice. Through transcriptome analysis of senescent cells Kirkland and colleagues developed a dasatinib and quercetin (D + Q) combination therapy which targets the pro-survival pathways activated in these cells (Zhu et al., 2015). Here, we show that senolytic therapy using D + Q reduces the expression of age-associated miRNAs and shifts plasma particle function to a younger phenotype. Moreover, we show that removal of senescent cells affects miRNA expression in splenic CD11b + cells, and these changes correlate with changes in the circulation.
Interestingly, D + Q treatment did not reduce plasma EV levels but did reduce miRNA expression. It is possible that senolytics act to suppress the intracellular pathways activated in senescent cells and thereby alter the secreted factors. Further examination of the effect of senolytics on EV secretion and cargo is needed. Furthermore, although we identified and followed changes in miRNA expression throughout the study, it is important to note that EVs carry a diverse cargo (e.g., proteins and lipids) which may also play important functional roles. MicroRNAs are likely one of many functional molecules that change with age. Further investigation into the contribution of other components of the EVs is also needed.
The importance of our findings is twofold. First, our data suggest that changes in EV-like particle miRNA levels can be monitored. Second, shifting plasma EV-like particle function toward a younger phenotype may also have physiological benefits, as changes in miRNA expression play key roles in regulating changes in cell function with age. For example, increased miR-146a expression in aged macrophages limits cellular responses to LPS (Jiang et al., 2012).
Elevated miR-21 can also induce senescence and reduce angiogenic potential in endothelial cells (Dellago et al., 2013) as well as impair T-cell activation (Kim et al., 2018). Changes in miR-223 and let-7a expression have also been shown to affect a number of myeloid cell processes including the response to cell activation (Cho, Song, Oh, & Lee, 2015;M'Baya-Moutoula et al., 2018). It is possible that continuous exposure to circulating EVs carrying senescence cell derived factors promotes changes in cell function. This is why we chose a pretreatment paradigm as cells throughout the aged body would be continuously exposed to circulating EVs prior to activation over the course of days, months, or years depending on the cell type.
It is important to note that the circulating EV pool is diverse and derived from a number of different cell sources; the senescent signature is only one part of the EV pool. Direct and indirect effects of senolytics are likely detected when examined in vivo. This is illustrated by the expression of miR-223 in our study. Therefore, senescent cells may also influence plasma EV miRNA expression by controlling the activity of other cell types.
Although we used ~2 × 10⁹–1 × 10¹⁰ particles/ml to assess the functional effects of plasma EV-like particles, this does not reflect the number of EVs used. Using cryo-EM and flow cytometry, Brisson and colleagues estimated EVs in human platelet-free plasma at 1–10 × 10⁷ EVs/ml (Arraud et al., 2014). However, the authors comment that estimates of the small EV populations, the most abundant EV sub-type, were likely underestimated using these techniques. This is consistent with data from other groups which suggest that the isolation and quantification methods have a major impact on plasma EV concentration and purity (Johnsen, Gudbergsson, Andresen, & Simonsen, 2019). By comparing 38 articles published from 2013 to 2018 which isolated plasma EVs using a number of methods, Johnsen and colleagues estimate that the number of circulating plasma EVs in humans is ~1 × 10¹⁰ particles/ml. However, no current method exists to measure the exact number of EV particles in plasma. Our particle doses of up to 1 × 10¹⁰ particles/ml do not reflect the number of bona fide EVs, but rather a combination of multiple particle types. However, the doses used are not greater than the number of circulating particles at any given time based on our NTA estimates; therefore, our doses are unlikely to be supra-physiological.
Taken together, the findings in this study demonstrate that senescent cells contribute to the plasma EV pool in old mice. Changes in plasma EV miRNA expression and EV-like particle functions correlate with the induction and removal of senescent cells in vivo.
Collectively, this study suggests a novel mechanism by which senescent cells affect the function of cells throughout the body and contribute to changes in cell function with aging.
EXPERIMENTAL PROCEDURES
Experimental procedures can be found in the supporting information online methods. FJA is a recipient of a CIHR Post-Doctoral Fellowship.
CONFLICT OF INTEREST
None declared. | 6,132 | 2020-01-21T00:00:00.000 | [
"Biology",
"Medicine"
] |
A Mechanical Sensor Using Hybridized Metamolecules
Hybridized metamaterials with collective mode resonances are often applied as sensors. In this paper, we use a Mie-based hybridized metamolecule comprising dielectric meta-atoms and an elastic bonding layer to detect distances and applied forces. The hybridization-induced splitting results in two new collective resonance modes, of which the red-shifted mode behaves as the in-phase oscillation of the two meta-atoms. Owing to the synergy of this oscillation, the in-phase resonance appears as a deep dip with a relatively high Q-factor and figure of merit (FoM). By exerting an external force, namely by adjusting the thickness of the bonding layer, the coupling strength of the metamolecule is changed. As the coupling strength increases, the first collective mode dip red-shifts further toward lower frequencies. By fitting the distance–frequency-shift and force–frequency-shift relationships, the metamolecule can be used as a sensor to characterize tiny displacements and a relatively wide range of applied forces in civil engineering and biological engineering.
Introduction
Coupling between meta-atoms causes a hybridization effect in metamaterials [1][2][3]; it usually arises from asymmetric elements and leads to energy-level splitting [4][5][6][7][8][9][10]. Because of their sharp resonance dips and sensitivity to external factors, the frequency shifts of such hybridized resonances can be used to characterize various external information [11][12][13], like the refractive index, and can thus be applied in biological and other engineering [14][15][16][17]. In addition, small displacements between the coupled meta-atoms lead to frequency shifts of the collective modes, which can be utilized for mechanical sensing based on electromagnetic spectra. The interplay between mechanics and electromagnetics has already been applied for advanced sensing, super-resolution imaging, and non-destructive detection. These functions are realized by specially designed artificial structures, like the phoxonic crystal cavity [18], piezoelectric-like meta-atoms [19], and the cavity-assisted optical lattice clock [20]. Such multi-physics metastructures have inspired our design, which involves both mechanical and electromagnetic factors. By combining coupling-sensitive metamaterials with flexible materials, tiny displacements and applied forces can be detected through frequency shifts. Pryce proposed a structure fabricated on an elastic substrate, whose reflectance spectra can be shifted by stretching the substrate [21]. Zheng obtained a strain-sensitive flexible metamaterial by varying the asymmetry of the electric-magnetic coupling structure [22]. It can be inferred that by using an adjustable soft material to bond the meta-atoms together, the hybridized collective mode of the metamolecule can be actively tuned through external forces.
However, most previous research on such hybridization effects focuses on visible or infrared wavelengths [23][24][25][26][27]. Here, we take advantage of the Mie resonance of a dielectric in the microwave range [28][29][30][31][32] to realize it. By combining the meta-atoms with an elastic silica gel layer, the distance between the two dielectric meta-atoms can be adjusted controllably, and the transmission spectrum shifts accordingly. Based on this, we can make use of the frequency shift to characterize the displacement and the external applied force. By placing the two dielectric meta-atoms at certain observation points, the movement of the two points can be fitted via the frequency shift of the metamolecule; moreover, because of the natural properties of the bonding elastic layer, the amplitude of the applied force can also be calculated from the frequency shift of the metamolecule. Compared with higher-frequency regimes, microwaves are easier to generate and involve weaker radiation. Besides, the materials used in our experiment are all biocompatible, which is promising for application in vivo.
Some research has also successfully taken advantage of the coupling phenomenon of the Mie resonance in dielectrics to characterize strain and applied force [33]. However, compared with a single-component "homonuclear diatomic molecule", a metamolecule composed of two different meta-atoms offers a well-balanced combination of Q-factor and figure of merit (FoM) with respect to external factors, like tiny displacements. These two indexes are vital for sensing, and they benefit from the synergy effect of the first hybridized collective mode.
Materials and Simulation
Figure 1a depicts the schematic configuration of the adjustable metamolecule. The dielectric meta-atoms are bonded by a silica gel layer, which is soft enough to be compressed by an external force. The meta-atoms are made of pure CaTiO3 (CTO) and CaTiO3 with 5% ZrO2 (CTO-5%ZrO2), cut into cuboids of 2 × 1.8 × 1.8 mm³ and 2.2 × 2.2 × 2 mm³ in size, respectively. The complex relative permittivities of pure CTO and CTO-5%ZrO2 are εr = 160, tanδ = 0.001 and εr = 119, tanδ = 0.002, respectively. The relatively high real part and low imaginary part of the permittivity result in a strong resonance amplitude and high Q-factor.
We performed a simulation using the commercial software package CST (Computer Simulation Technology) Microwave Studio to calculate the transmission spectra of both the single meta-atoms and the assembled metamolecule. As Figure 1a shows, the incident plane wave propagates along the x-axis, with the magnetic field along the z-axis and the electric field along the y-axis. The dashed lines in Figure 1b show the transmission spectra of the single meta-atoms. When only a single meta-atom is placed in the center of the waveguide, the two meta-atoms produce dramatic transmission dips at very similar frequencies of ca. 10.72 GHz.
To simulate the transmission behavior of the metamolecule, we set the spacing δ between the two meta-atoms (i.e., the thickness of the elastic layer) to 1.2 mm; this value is smaller than the working wavelength, so the coupling between the two meta-atoms is expected to be strong enough to affect the resonance of the metamolecule. The solid lines in Figure 1b show the transmission response S21 of the metamolecule. A noticeable transparency window supplants the initial dip of the individual meta-atoms, and two significant new dips appear. The coupling within the metamolecule splits the transmission dip of the meta-atoms and induces a hybridization-induced transparency.
When the spacing δ decreases from 1.2 to 0.8 mm in steps of 0.1 mm, the transparency window widens, and the first and second collective modes of the metamolecule shift toward lower and higher frequencies, respectively, as shown in Figure 1b. The transparency occurs because of the strong coupling between the two meta-atoms and the formation of new collective resonance dips. Two meta-atoms with coincident first-order Mie resonance frequencies can be regarded as two oscillators, and their combination is characterized by simultaneous "two-particle model" equations [34]. As the effective susceptibility χ approaches infinity, the two edge frequencies that define the stopband of the metamolecule are obtained, where ω0 is the first-order Mie resonance frequency of the two meta-atoms and κ represents the coupling strength between the meta-atoms. The width of the transparent window can then be calculated, where the coupling strength κ grows as the coupling distance δ decreases. By exerting an external force to adjust δ between the two meta-atoms, we can therefore actively tune the hybridization-induced transparency window of the metamolecule. When the transparency window becomes wider, the in-phase collective mode of the metamolecule red-shifts to a lower frequency, while the out-of-phase collective mode blue-shifts to a higher frequency. As Figure 1b shows, the resonance dip of the first collective mode is much stronger than that of either individual meta-atom, while the dip of the second collective mode is much weaker. This can be explained by the hybridization model of the dielectric "diatomic molecule". In our previous work [35], we explained the hybridization phenomenon of the Mie-based metamolecule by observing the distribution of the displacement electric current at both collective resonance frequencies.
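The explicit expressions for the stopband edges were lost in extraction; for orientation only, a generic two-coupled-oscillator (hybridization) picture — an illustrative assumption, not necessarily the exact formulas of the original paper — gives edge frequencies and a window width of the form

$$\omega_{\pm} \approx \omega_{0}\sqrt{1 \pm \kappa}, \qquad \Delta\omega = \omega_{+} - \omega_{-} \approx \omega_{0}\left(\sqrt{1+\kappa} - \sqrt{1-\kappa}\right) \approx \omega_{0}\,\kappa \quad (\kappa \ll 1),$$

so a stronger coupling κ (i.e., a smaller spacing δ) widens the transparency window and pushes the in-phase mode to a lower frequency, consistent with the trend described above.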
The first-order Mie resonance behaves as a magnetic resonance, formed by the circular arrangement of displacement electric currents inside the dielectric [36]. For the in-phase mode, the magnetic dipoles of both meta-atoms oscillate in parallel and thus strengthen the response by inducing magnetic fields in the same direction, as Figure 1c shows. For the out-of-phase mode, by contrast, the displacement electric currents of the two meta-atoms oscillate in anti-parallel directions, and the response is weakened by the counteracting effect of the anti-directional oscillation, as Figure 1d shows.
The first collective mode is formed by the synergy of two in-phase magnetic oscillations and appears as a stronger resonance dip than that of either meta-atom alone. Because this dip shifts as the coupling strength changes, it can be exploited to read out the coupling displacement. Since the Q-factor and the figure of merit (FoM) are two vital indexes of sensing devices, we calculated the Q-factors of the resonance dips for spacings δ from 1.2 mm to 0.8 mm, as Figure 2a shows. The Q-factor can be expressed as Q = f_res / FWHM, where f_res is the resonance frequency and FWHM is the full-width at half maximum. Using metamolecules with δ = 0.8, 0.9, 1.0, 1.1, and 1.2 mm as examples, the average Q-factor of the metamolecule is 61.31, and the average transmission amplitude is −21.1 dB. Sensors with a high Q-factor and strong resonance amplitude are expected to perform well in detection applications, owing to their sharp and deep dips.
Furthermore, as Figure 1b shows, the rate of the frequency shift changes with the spacing δ. We put forward a figure of merit (FoM) to characterize it, where f and f′ represent the frequencies of the first hybridized resonance mode at spacings δ and δ′. This parameter illustrates the sensitivity of the metamolecule to the coupling spacing. We calculated the FoM for different δ; the results are shown in Figure 2b. When the spacing is reduced, the dip red-shifts at increasing rates, because the collective mode is a response to the interaction effect, which becomes even more obvious for coupled meta-atoms with smaller spacing. The metamolecule can therefore be used to measure spatially distributed information over very small distances. As both the Q-factor and the FoM must be taken into account for sensing, we compared these two indexes among the metamolecule and its constituent "homonuclear diatomic molecules". As shown in Tables 1 and 2, our metamolecule makes a good trade-off between Q-factor and FoM compared with the (CTO-5%ZrO2)-(CTO-5%ZrO2) and CTO-CTO "homonuclear diatomic molecules". Owing to the sharpness and sensitivity of the in-phase resonance dip, the metamolecule is well suited for use as a sensor to characterize distance and mechanical information.
Table 1. Q-factors of the CaTiO3 (CTO) "homonuclear diatomic molecule", the CaTiO3 with 5% ZrO2 (CTO-5%ZrO2) "homonuclear diatomic molecule", and metamolecules at various distances δ, together with their average values over these distances.
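For concreteness, the short Python sketch below (not part of the original paper) shows how the two indexes discussed above could be evaluated from measured quantities. The Q-factor uses the standard definition Q = f_res/FWHM; since the exact FoM formula was lost in extraction, the sketch assumes it is the magnitude of the dip shift per unit change in spacing, and any numerical value not quoted in the text is hypothetical.

```python
# Illustrative sketch of the two sensing indexes (assumed forms, see lead-in).

def q_factor(f_res_ghz, fwhm_ghz):
    """Quality factor of a resonance dip: resonance frequency divided by FWHM."""
    return f_res_ghz / fwhm_ghz

def fom(f_ghz, f_prime_ghz, delta_mm, delta_prime_mm):
    """Assumed FoM: magnitude of the dip shift per unit change in spacing (GHz/mm)."""
    return abs(f_ghz - f_prime_ghz) / abs(delta_mm - delta_prime_mm)

# A dip near 10.4 GHz with a hypothetical 0.17 GHz FWHM gives Q ≈ 61,
# close to the average Q-factor of 61.31 reported for the metamolecule.
print(round(q_factor(10.4, 0.17), 1))

# Shift rate between the measured dips at δ = 1.10 mm and 1.00 mm (Figure 3b,c):
print(round(fom(10.463, 10.438, 1.10, 1.00), 3))   # ≈ 0.25 GHz/mm
```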
Results and Discussion
Assembling the metamolecule with an elastic layer ensures that the spacing between the coupled meta-atoms can be adjusted actively and freely. Silica gel is used to bond the two dielectric meta-atoms into a metamolecule. Given the natural mechanical properties of the silica gel, not only the distance but also the applied force can be characterized indirectly through the frequency shift. An experimental setup was established to observe the transmission spectra of the metamolecule under different forces, as shown in Figure 3a. Two XB-WA-90-N horn antennas connected to an Agilent N5230C vector network analyzer (Santa Rosa, CA, USA) were used as the microwave emitter and receiver. A dynamometer was used to exert and measure forces on the metamolecule sample, with two wooden sticks touching the metamolecule and transmitting the force. Meanwhile, the dynamometer displayed the compressed dimension of the metamolecule and the magnitude of the applied force. We measured the transmission spectra of the metamolecule for thicknesses δ from 1.10 mm to 0.85 mm, as shown in Figure 3b.
When there is no external force, the thickness δ of the silica gel is 1.10 mm, and the transmission dip of the first collective mode is found at 10.463 GHz, agreeing well with the simulation results in Figure 1b. When external forces of 3.5 N, 4.7 N, 5.4 N, and 7.9 N are applied gradually with the dynamometer, δ is read on the scale meter after stabilization to be 1.00 mm, 0.95 mm, 0.92 mm, and 0.85 mm, respectively, as shown in Figure 3c. The transmission dips of the in-phase mode, meanwhile, are measured with the vector network analyzer to be 10.438 GHz, 10.422 GHz, 10.406 GHz, and 10.372 GHz, respectively; these results can be seen in Figure 3b. Beyond a loading force of 7.9 N, the thickness of the silica gel can hardly be compressed further because of the natural hardening of the material itself. We therefore study the behavior of the metamolecule for applied forces F from 0 N to 7.9 N and spacings δ from 1.10 mm to 0.85 mm.
From the measured results above, we found that the resonance frequency shift can be used to infer both the external applied force and the distance between the two meta-atoms. To make the metamolecule a sensor for general applications, we fitted the relationship between the frequency shift (∆ƒ) and the applied force (F), as well as between ∆ƒ and the distance δ, using the Origin software package. We also tested five additional data points, which are shown in Figure 4a,b.
As shown in Figure 2b, the smaller δ is, the faster the transmission dip red-shifts. We therefore considered a high-order polynomial to fit the non-linear relationship between δ and ∆ƒ; however, too high a polynomial order increases the computational workload, which limits the applicability and efficiency of the method. Considering both accuracy and practicality, we fitted the relationship between ∆ƒ and δ with the quadratic Equation (4):
δ = 1.09 + 4.11 × ∆ƒ + 16.27 × ∆ƒ² (4)
We calculated the errors between our fitted results and the measured results to see how accurately this equation can predict a tiny displacement from the frequency shift. From the inset of Figure 4a, it can be seen that the equation fits the experimental results well; the errors between the fitted and measured values are less than 1.0%.
With regard to the relationship between ∆ƒ and F, we need to consider both the δ–∆ƒ and F–δ relationships; the former has already been shown to be quadratic. For F–δ, because silica gel is a soft material, δ of the metamolecule changes in a non-linear way as a function of F. If the relationship between δ and ∆ƒ is taken to be quadratic, then the F–∆ƒ relationship should be biquadratic, and the following equation is obtained by fitting:
F = 0.069 − 2.08 × 10² × ∆ƒ − 3.51 × 10³ × ∆ƒ² − 3.12 × 10⁴ × ∆ƒ³ − 8.07 × 10⁴ × ∆ƒ⁴
As Figure 4b shows, this biquadratic equation fits the experimental results well, and the errors between the fitted and measured values are less than 8%.
It is worth mentioning that the applied forces cover the range from 0 N to 7.9 N, signifying that a tiny applied force can also be detected. When the force increases further, the hardening effect of the silica gel itself means the thickness can hardly be compressed and the spacing is difficult to change, so we studied displacements mainly in the range from 0.8 mm to 1.2 mm. By changing the thickness or type of the soft bonding material, a sensor capable of detecting even smaller distances can be obtained.
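As a usage illustration (not from the original paper), the following Python sketch evaluates the two fitted calibration curves at the measured dip frequencies. It assumes that ∆ƒ is expressed in GHz relative to the 10.463 GHz zero-force dip (so red-shifts give negative ∆ƒ); under this assumption the reported δ and F values are reproduced to within the stated fitting errors.

```python
import numpy as np

F0_GHZ = 10.463  # in-phase dip at zero applied force (Figure 3b)

def spacing_from_shift(df_ghz):
    """Fitted quadratic: silica-gel thickness delta (mm) from frequency shift (GHz)."""
    return 1.09 + 4.11 * df_ghz + 16.27 * df_ghz**2

def force_from_shift(df_ghz):
    """Fitted biquadratic: applied force F (N) from frequency shift (GHz)."""
    return (0.069
            - 2.08e2 * df_ghz
            - 3.51e3 * df_ghz**2
            - 3.12e4 * df_ghz**3
            - 8.07e4 * df_ghz**4)

# Measured in-phase dip frequencies (GHz) reported for 0, 3.5, 4.7, 5.4, and 7.9 N
dips = np.array([10.463, 10.438, 10.422, 10.406, 10.372])
for f in dips:
    df = f - F0_GHZ
    print(f"dip {f:.3f} GHz -> delta ≈ {spacing_from_shift(df):.2f} mm, "
          f"F ≈ {force_from_shift(df):.1f} N")
```

Running this reproduces δ of roughly 1.09–0.85 mm and F of roughly 0.1–7.9 N across the five measured dips, consistent with the values read from Figure 3.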
Conclusions
Based on the discussion above, by combining two meta-atoms, we have designed a hybridized metamolecule. Considering the trade-off between Q-factor and FoM, it outperforms the single-component "homonuclear diatomic molecules" in characterizing distances and applied forces. By bonding the two meta-atoms with an elastic material layer, we can actively tune the distance between the coupled meta-atoms via external forces, leading to shifts in the transmission spectra. By fitting the relationships of the resonance frequency shift with distance and applied force, we obtained a wireless, telemetric sensor for detecting forces and distances in more general use.
Such devices have clear advantages for detecting tiny displacements and forces. By altering the thickness or type of the soft material, an even wider detection range can be obtained as needed. Meanwhile, considering the working frequency and the fabrication process, the sensor offers low radiation, simple equipment, and low cost. In addition, both the dielectrics and the silica gel are biocompatible, which makes the metamolecule promising for sensing in vivo. Our findings have great potential for the detection of tiny displacements and applied forces in civil engineering and biological fields. | 5,552.4 | 2019-02-01T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
SOS2 Comes to the Fore: Differential Functionalities in Physiology and Pathology
The SOS family of Ras-GEFs encompasses two highly homologous and widely expressed members, SOS1 and SOS2. Despite their similar structures and expression patterns, early studies of constitutive KO mice showing that SOS1-KO mutants were embryonic lethal while SOS2-KO mice were viable led to SOS1 initially being viewed as the main Ras-GEF linking external stimuli to downstream RAS signaling, while the functional significance of SOS2 was overlooked. Subsequently, different genetic and/or pharmacological ablation tools defined more precisely the functional specificity/redundancy of the SOS1/2 GEFs. Interestingly, the defective phenotypes observed in concomitantly ablated SOS1/2-DKO contexts are frequently much stronger than in single SOS1-KO scenarios and undetectable in single SOS2-KO cells, demonstrating functional redundancy between them and suggesting an ancillary role of SOS2 in the absence of SOS1. A preferential role of SOS1 was also demonstrated in different RASopathies and tumors. Conversely, specific SOS2 functions, including a critical role in regulation of the RAS–PI3K/AKT signaling axis in keratinocytes and KRAS-driven tumor lines, or in control of epidermal stem cell homeostasis, were also reported. Specific SOS2 mutations were also identified in some RASopathies and cancer forms. The relevance/specificity of these newly uncovered functional roles suggests that SOS2 should join SOS1 for consideration as a relevant biomarker/therapy target.
Ras GEFs and the SOS Family
The proteins of the RAS superfamily are small GTPases known to shift between inactive (GDP-bound) and active (GTP-bound) conformations in a cycle regulated by activating Guanine nucleotide Exchange Factors (GEFs), which facilitate GDP/GTP exchange, and deactivating GTPase-activating proteins (GAPs), which accelerate their intrinsic GTPase activity (Figure 1A) [1][2][3][4].
Three main Ras-GEF families (RasGRF1/2, SOS1/2, and RasGRP1-4) have been described in mammalian cells with the ability to promote GDP/GTP exchange on the members of the RAS subfamily, and also on some members of the RAC subfamily of small GTPases [5][6][7][8]. All mammalian Ras-GEFs share the presence of catalytic CDC25H and REM modules in their primary sequences but, otherwise, each GEF family displays markedly distinct patterns of protein structure, function, regulation, and tissue expression. The members of the GRF family act preferentially, but not exclusively, in cells of the central nervous system [6,9,10], whereas the GRP family members function mostly in hematological cells and tissues [11,12]. In contrast, the members of the SOS (Son of sevenless) family are the most universal Ras-GEF activators, being recognized as the most widely expressed and functionally relevant GEFs with regard to RAS activation by various upstream signals in mammalian cells [5]. The SOS family encompasses two highly homologous, ubiquitously expressed members (SOS1 and SOS2) functioning in multiple signaling pathways involving RAS or RAC activation downstream of a wide variety of cell surface receptors [5,13].
The initial characterization of the first available constitutive knockout (KO) mouse strains of the SOS family showed that SOS1 ablation causes mid-embryonic lethality in mice [14,15], whereas constitutive SOS2-KO mice are perfectly viable and fertile [16]. Because of this, and of the stronger phenotypic traits associated with SOS1 ablation, most early functional studies of the SOS family focused almost exclusively on SOS1, and rather little attention was paid to analyzing the functional relevance of SOS2 [5]. The view that SOS1, but not SOS2, is the key GEF family member in RAS signal transduction in metazoan cells was also probably behind the long search for, and development of, specific small-molecule SOS1 inhibitors that have recently reached preclinical and clinical testing against RAS-driven tumors [5,17,18].
Functional Redundancy/Specificity of SOS2 vs. SOS1
Despite the earlier lack of focus on the functional relevance of SOS2, many subsequent studies have uncovered specific functions unambiguously attributed to SOS2 in different physiological and pathological contexts that clearly document the functional specificity of this particular SOS GEF family member.
In particular, the development, about 8 years ago, of conditional, tamoxifen-inducible, SOS1-null mutant mice made it possible to bypass the embryonal lethality of SOS1-null mutants and opened the way to carry out relevant functional studies of SOS2 by allowing biological samples originated from adult mouse littermates of four relevant SOS genotypes (WT, SOS1-KO, SOS2-KO and SOS1/2-DKO) to be generated and functionally compared [19]. Somewhat surprisingly, adult SOS1-KO or SOS2-KO mice were perfectly viable, but double SOS1/2-DKO animals died very rapidly [19], demonstrating a critical contribution of the SOS2 isoform (at least when SOS1 is absent) at the level of full organismal survival and homeostasis, and thus opening new avenues for consideration of SOS2 as a functionally relevant player in mammalian RAS signaling pathways. In this regard, a number of recent functional studies of SOS1 and SOS2 using diverse genetic and pharmacological SOS ablation approaches have significantly clarified, during the last decade, the mechanistic details underlying the functional specificity/redundancy of the SOS1 and SOS2 GEFs in a wide array of tissues and cells, both under physiological and pathological conditions [20][21][22][23][24][25] (see [5] for a review).
Specifically, detailed functional comparisons between primary mouse embryonic fibroblasts (MEFs) extracted from SOS1-KO and/or SOS2-KO mice have documented a dominant role of SOS1 over SOS2 in the control of a series of critical cellular physiological processes, including cellular proliferation and migration [20,21], inflammation [22], and maintenance of intracellular redox homeostasis [20,26]. The functional prevalence of SOS1 is not limited to the above-mentioned physiological contexts, but has also been demonstrated under different specific pathological contexts. In particular, a specific, critical requirement of SOS1 was demonstrated for the development of BCR-ABL-driven leukemia [24,27], as well as in skin homeostasis and chemically induced carcinogenesis [21,28]. Likewise, both SHP2 and SOS1 have been shown to be essential signaling mediators in wild-type KRAS-amplified gastroesophageal cancer [5,29].
Hierarchy of Action of the SOS Family Members
As described above, most reports support the functional dominance of SOS1 over SOS2 regarding their participation in the control of several major intracellular processes, such as proliferation, migration, inflammation, or regulation of intracellular ROS levels [20,25]. Remarkably, in all those processes, the defective cellular phenotypes observed in SOS1/2-DKO samples are always much stronger than in single SOS1-KO cells, while undetectable in single SOS2-KO contexts, suggesting a specific, ancillary role of SOS2 that only becomes easily visible in the absence of SOS1 [19][20][21][30].
Regarding the participation of SOS1 and SOS2 in Ras signaling pathways, the initial analyses of constitutive SOS1-KO mouse embryo fibroblast (MEF) cell lines indicated that SOS1 (but not SOS2) is required for long-term activation of the Ras-ERK pathway, with SOS1 participating in both short-term and long-term signaling, while SOS2-dependent signals are predominantly short-term [14]. More recent studies analyzing inducible SOS1-KO biological samples in mouse keratinocytes also support that view [25] and have also confirmed that SOS1 is the dominant player regarding the process and kinetics of RAS activation (GTP loading) upon cell stimulation by various upstream signals and growth factors [20,25]. Of relevance also are other recent studies in cell lines devoid of SOS1 and/or SOS2 that have described the specific, primary involvement of SOS2 in regulation of the PI3K/AKT signaling axis, whereas SOS1 appears to be the dominant player in the MEK/ERK signaling axis [25,[30][31][32]. Furthermore, regarding SOS2 functional specificities in cellular pathological contexts, a hierarchical requirement for SOS2 to mediate RAS-driven cell transformation has also been reported recently in certain cell populations [31,32].
Distinct Functional Roles of SOS2 and SOS1 in the Skin and Epidermal Cancers
EGF-dependent RAS-RAF signaling has been shown to be essential for epidermal development and carcinogenesis [33][34][35]. In this regard, it was also shown that SOS1 upregulation resulted in development of skin papillomas with 100% penetrance, supporting a critical role of SOS in this process in epidermal cells [28].
More recently, our laboratory has also characterized/analyzed in detail the specific involvement of SOS1 and/or SOS2 in homeostasis of the skin, as well as in tumoral and nontumoral skin pathologies [21,25]. Our initial studies in adult KO animal models showed that SOS1 ablation (but not SOS2 ablation) produced significant alterations of the overall layered structure of the skin in adult mice, although, interestingly, these skin architectural defects were markedly worsened when both SOS1 and SOS2 proteins were concomitantly ablated [21]. Furthermore, the skin of adult SOS1-ablated mice and, more markedly, SOS1/2-DKO mice showed a severe impairment of its physiological ability to repair skin wounds, as well as almost complete disappearance of the neutrophil-mediated inflammatory response in the injury site. In addition, SOS1 disruption (but not SOS2 ablation) delayed the onset of tumor initiation, decreased tumor growth, and prevented malignant progression of papillomas when using the known DMBA/TPA model of chemically induced skin carcinogenesis in mice [21].
While these observations demonstrated that SOS1 is clearly predominant with regard to skin homeostasis, wound healing, and chemically induced skin carcinogenesis, it remained unclear whether the defective phenotypes observed in the skin of SOS1-deficient mice were cell-autonomous or depended on their local manifestation in specific cell compartments of the skin. We addressed these questions in a recent report involving extensive, detailed analyses of the specific subpopulation of keratinocytes present in the skin of both newborn and adult SOS1-KO and/or SOS2-KO mice [25]. While these studies confirmed the prevalent role of SOS1 over SOS2 in regulation of the proliferation of primary mouse keratinocytes, our detailed analyses of primary keratinocytes derived from newborn and adult mice of the four relevant SOS genotypes (WT, SOS1-KO, SOS2-KO, and SOS1/2-DKO) uncovered previously unrecognized functional contributions of SOS2 to skin architecture, as well as to proliferation, differentiation, and survival of primary keratinocytes [25]. In particular, our analyses uncovered a specific, significant reduction of the stem cell population located in the skin hair follicles of both newborn and adult SOS2-KO mice [21]. As this population is essential for replacing, restoring, and regenerating the mouse epidermis, these data confirm that SOS2 plays specific, cell-autonomous functions (distinct from those of SOS1) in keratinocytes, and reveal a novel, essential role of SOS2 in the control of epidermal stem cell homeostasis [21,25].
Differential Involvement of SOS2 and SOS1 in Cellular Pathological Contexts
Growing experimental evidence has accumulated in recent years that supports the functional implication of SOS GEFs in human tumors and other RAS-related pathologies. So far, a predominant occurrence of SOS1 genetic alterations has been reported in most pathological contexts involving SOS GEFs. In this regard, a significant number of gain-of-function SOS1 mutations (and, more rarely, SOS2 mutations), resulting in subsequent hyperactivation of RAS signaling, have been identified in inherited RASopathies, such as Noonan syndrome (NS) or hereditary gingival fibromatosis, as well as in various sporadic human cancers, including endometrial tumors and lung adenocarcinoma, among others [5]. However, during the last few years, a previously undetected but relevant involvement of SOS2 in some of these pathologies has also come to light in a series of studies describing specific SOS2 gene alterations identified in several forms of cancers and RASopathies, as well as the potential therapeutic effect of explicit SOS2 removal in certain tumor cell lines [36][37][38][39][40]. All in all, these observations and the above-described timeline of experimental evidence support the notion that, besides SOS1, SOS2 may also constitute a worthy therapy target for the prevention and/or treatment of some specific tumor and non-tumor pathologies of epidermal origin or with dysregulated PI3K/AKT signal transmission [25].
SOS1/2 Inhibitors in Pathological Settings
RAS oncoproteins were sometimes considered "undruggable" in the past, but that notion has been proven wrong by the development of promising inhibitors that are currently being characterized at different stages of preclinical and clinical testing [5,41]. In addition, a renewed interest has recently emerged to target SOS proteins in an effort to attenuate oncogenic signaling in tumors harboring altered RTK-RAS-ERK signaling pathways (Table 1). In this regard, new small-molecule SOS1 inhibitors have been obtained in the last few years with the ability to either (i) interfere with the functional SOS:RAS interactions, or (ii) to limit the intrinsic GEF activity of SOS1 protein [5] (Table 1).
Regarding the group of small-molecule, direct SOS inhibitors, only drugs designed to act against SOS1 are available at this moment, whereas inhibitors specifically acting on SOS2 have not yet been described [5,46] (Table 1). Within this group, BAY-293 has been shown to bind directly to the SOS partner of the RAS:SOS complex, thus preventing KRAS-SOS1 complex formation [47]. Recent reports have described the therapeutic effect of BAY-293 in EGFR-mutated tumor cell lines, and also its synergistic action with Osimertinib [30] and KRAS G12C inhibitors [47]. A weakness of this compound is that its effect has been proven in vitro but not in vivo [47]. On the other hand, BI-3406, the first-in-class, orally bioavailable, in vivo tested, direct SOS1 inhibitor, elicits activity against many KRAS variants, including all major G12 and G13 oncoproteins, and demonstrates synergistic therapeutic effects when combined with a MEK inhibitor [23]. Moreover, a combination of BI-3406 and trametinib has potent activity against secondarily acquired resistance due to new KRAS mutations [48]. Finally, a phase I clinical trial (NCT04111458; https://clinicaltrials.gov/ct2/show/NCT04111458 (accessed on 20 June 2021); Table 1) has also recently been started with BI-1701963 (a compound which exhibits high similarities in its mode of action to BI-3406) [46]; it focuses on patients with advanced KRAS-mutated cancers, in order to evaluate safety, tolerability, pharmacokinetics, and pharmacodynamic properties (Table 1). It will be interesting to determine in the future whether SOS1 inhibitors can also block SOS2 function, and vice versa.
The following sections in this review focus on different aspects of SOS2 function in various physiological processes and pathological contexts, and also pinpoint some remaining questions still requiring further clarification about the potential, specific functional role(s) of SOS2. It is apparent that further, comprehensive functional analysis of specific tissue/cell lineages will be needed to fully unveil the specific functional contributions of SOS2 in various health and disease contexts. Although SOS2 was frequently considered in the past as the "ugly duckling" of the SOS family, the more recent and complete studies of the regulatory and functional aspects of the SOS family members support the notion that SOS2 may well become a "swan".
SOS2 and SOS1: So Similar but So Functionally Different. Some Mechanistic Considerations
As mentioned above, despite their remarkable homology, it is apparent that SOS1 is critically required for more functionally relevant roles than SOS2, but very little is known about the precise mechanistic reasons explaining the noticeable functional differences observed between both SOS isoforms in different physiological cellular contexts.
An initial, simplistic consideration in the search for mechanistic explanations might dwell on the analysis of potential differences in expression levels between SOS1 and SOS2 in different biological contexts. For example, the initial detection of high expression levels of SOS1 mRNA and protein in placental labyrinth trophoblasts, whereas SOS2 levels were significantly lower [14], offered a likely explanation for the observation that the presence of SOS2 is not sufficient to rescue the mid-gestation lethality caused by the absence of SOS1 in constitutive SOS1-KO mice [14]. In contrast, the fact that SOS1 and SOS2 are almost ubiquitously expressed at significant intracellular concentrations in most postembryonal organs/tissues/cells examined [5] indicates that mechanisms other than expression level may account for the dominant role of SOS1 regarding cellular proliferation, migration, inflammation, or control of intracellular redox homeostasis [19][20][21][26][30]. Interestingly, despite the seemingly prevalent functional contributions of SOS1 in comparison to SOS2, analysis of large database sets of available microarray hybridization expression data shows the presence of higher amounts of SOS2 transcripts than of SOS1 transcripts in different cellular settings [21,25]. In any case, it is apparent that a definitive quantitation of the steady-state, real intracellular concentrations of SOS1 and SOS2 in different biological contexts can be achieved only by accurate mass-spectrometric determination and quantitation of the amounts of specific peptides unique to either SOS1 or SOS2 in each sample analyzed. In this regard, a recent proteomic study performed across different cell types has revealed that the absolute abundances of the SOS1 and SOS2 proteins are quite similar [49].
Another relevant consideration regarding the mechanistic basis of the functional specificities shown by the SOS1 and SOS2 Ras-GEFs is the existence of distinct, specific transcriptional programs specifically linked to the expression in cells of each one of these two otherwise highly homologous family members. Curiously, most SOS-related transcriptional data accessible in public databases deal with SOS1-dependent transcriptomic alterations networks observed in various native or drug-treated tumors and pathologies [5,23,50,51], and much less information is available regarding the characterization of the specific transcriptional networks driven by the presence of SOS1 or SOS2 in different cellular physiological contexts (SOS1: https://www.ncbi.nlm.nih.gov/gds/?term=sos1 (accessed on 20 June 2021); SOS2: https://www.ncbi.nlm.nih.gov/gds/?term=sos2 (accessed on 20 June 2021)). In this regard, our comparison of transcriptional networks of primary cells derived from SOS1-KO and/or SOS2-KO mice has revealed a remarkably higher impact of SOS1 ablation than SOS2 ablation on the resulting transcriptomic profiles. Interestingly, we observed that SOS2 depletion resulted in practically negligible alterations as compared to SOS1 ablation in primary MEFs (unpublished) and keratinocytes [25]. Furthermore, as with other phenotypic alterations [19][20][21][22][23]25], concomitant ablation of SOS1 and SOS2 caused significantly higher alterations of the transcriptional patterns than single SOS1 depletion, suggesting a possible adjuvant role of SOS2 in this regard when SOS1 is already absent. These observations underscore a significantly prevalent role of SOS1 over SOS2 regarding the transcriptional regulation of cellular proliferation and differentiation processes, at least in primary cells of mice [25].
A number of biochemical differences between the SOS1 and SOS2 GEF proteins that have been reported in the literature [5] are also likely to be highly significant factors contributing to the different functionalities exhibited by these two isoforms in different biological contexts. Among other functional aspects, these different biochemical properties are thought to impact the protein half-life and the intracellular stability and homeostasis of the SOS1 and SOS2 GEF proteins, as well as the various protein-protein interactions (PPIs) in which they can engage under different biological conditions. For example, it has been reported that hSOS2 has a higher affinity for the adaptor protein GRB2 than hSOS1 [52], and that SOS1 proteins are more stable than SOS2 proteins, since the latter seem to be degraded by a ubiquitin- and 26S proteasome-dependent process in mouse cells [53,54]. Separate studies have also shown that SOS2 binds less efficiently than SOS1 to EGFR and Shc in EGF-treated cells, and that SOS2-dependent signals are predominantly short-term, whereas SOS1 participates in short- and long-term signaling upon receptor stimulation [14,25]. Furthermore, a recent report has also described specific in vivo direct interactions of SOS1 with the CSN3 subunit of the COP9 signalosome and with PKD, which may contribute to homeostatic control of intracellular RAS activation [55]. In this regard, it will be of interest to determine in the future whether or not SOS2 may also bind to CSN3.
Differences in 3D structure and regulation may also contribute to the differential functionality of SOS1 and SOS2. The allosteric binding of RAS•GTP to the SOS1 REM domain was clearly shown to relieve SOS1 autoinhibition and create a positive feedback loop of RAS activation, thus increasing the overall catalytic activity of SOS1 [5,56]. However, the scope and significance of the potential allosteric activation of SOS2 via its own REM domain remain undefined at this time [57]. More extensive analyses of full-length SOS2 protein crystals are bound to provide additional valuable information in this regard in the future.
Is SOS2 a Bona Fide Rac-GEF In Vivo?
Many prior reports have documented the ability of the SOS GEFs to act as bifunctional GEF activators capable of activating not only all members of the RAS protein family, but also some members of the RAC family of proteins. In view of this, some functional disparities displayed by SOS1 and SOS2 in different cellular contexts might also be linked, at least in part, to their specific, potentially differential participation in processes of activation of RAS and/or RAC intracellular proteins upon cellular stimulation by different external signals [5,13,20,24].
Mechanistically, the SOS GEFs are known to promote signal internalization and subsequent RAS/RAC activation through a process involving their recruitment from the cytosol to the plasma membrane via complex formation with different adaptor proteins (refs). In the context of this mechanistic model, the differential activity of SOS over RAS or RAC targets in vivo appears to be mediated by mutually exclusive interactions with either GRB2 or E3B1, respectively [5]. Although the precise mechanistic details remain yet poorly understood, it is generally accepted that SOS-mediated activation of RAC requires recruitment of SOS-E3B1 complexes to actin filaments found within membrane ruffles, thus facilitating RAC activation by the DH (Dbl homology) domain. So far, only SOS1 has been formally demonstrated to act as a bona fide Rac-GEF [13,58], and the hypothetical function of SOS2 as an Rac-GEF awaits future, stronger experimental evidence. In any case, the high homology shared by SOS1 and SOS2 in their overall modular protein structure/sequence and, in particular, in their DH domains responsible for RAC activation (overall 84% similarity and 70.6% amino acid identity) [5], together with the experimental demonstration of physical interaction between hSOS2 and hE3B1 in COS cells [59], support the notion of postulating SOS2 as a potential RAC activator, at least in certain cellular contexts. Interestingly, direct analysis of primary SOS1/2 KO primary MEFs has shown that single ablation of either SOS1 or SOS2 did not impair the overall level of EGF-dependent RAC activation, whereas combined SOS1/2 depletion significantly reduced the levels of RAC activation [20], suggesting functionally redundant contributions of SOS1 and SOS2 with regards to RAC activation after EGF stimulation [20]. Additional mechanistic studies are needed to fully ascertain the potential role of SOS2 as an RAC-GEF activator in a variety of cellular contexts.
SOS2 as a Key Modulator of PI3K-AKT Signaling
After surface receptor stimulation and subsequent SOS-mediated RAS activation, the GTP-loaded RAS proteins are known to activate various downstream signaling pathways which are essential for the control of a wide variety of cellular processes. The RAF1/mitogen-activated protein kinase (MAPK)/extracellular signal-regulated protein kinase (ERK) signaling pathway is crucial for the control of many cellular events, including proliferation, transformation, or survival. Furthermore, downstream RAS signaling through phosphoinositide 3-kinase (PI3K) has also been shown to have an essential role in processes such as cell survival, cytoskeleton reorganization, cell motility, or invasiveness, among others [60]. Since a well-regulated balance between the RAS-ERK and RAS-PI3K signaling axes is essential for adequate cellular signaling homeostasis, it is relevant to elucidate the relative functional contributions of SOS1 and SOS2 to either signaling axis in different cellular contexts [5]. Interestingly, our functional analyses of RAS activation and downstream signaling in primary keratinocytes from WT and SOS1/2-ablated genotypes have recently revealed a prevalent role of SOS1 in the control of RAS activation (GTP loading), and a mechanistic overlap of SOS1 and SOS2 regarding cell proliferation and survival in response to EGF, with a dominant contribution of SOS1 to the RAS-ERK axis and of SOS2 to the RAS-PI3K/AKT axis [25]. These recent observations in keratinocytes confirm and extend previous reports in primary MEFs and in a wide array of tumor cell lines that also demonstrated a preferential role of SOS1 in the control of cell proliferation and activation of the RAS-ERK pathway [5,20,21].
The notion of specific, relevant functional cellular roles played by SOS2 is firmly supported by studies from R. Kortum's lab demonstrating that SOS2 promotes EGF-stimulated AKT phosphorylation in cells expressing mutant RAS. In particular, single SOS2 ablation or silencing in a variety of mouse and human cell lines results in a significant reduction of AKT, but not ERK, phosphorylation and of the ability for anchorage-independent growth in RAS-mutant cells [31,32] (Table 2). The same lab has also reported the potential involvement of SOS2 in SHP2-mediated signaling pathways [30]. Overall, the observation that SOS2-dependent PI3K/AKT signaling appears to be crucial for transformation in cells harboring mutant KRAS genes [31,32] suggests that SOS2 could be considered a potential therapy target in KRAS-driven oncogenic processes with dysregulated PI3K/AKT signal transmission (Figure 1B).
SOS2 Functional Role(s) in Pathological Contexts
The specific, functional involvement of SOS2 in different pathologies has also been recently reported, although with lower incidence rates than for SOS1 [5]. Pathologies linked to SOS2 alterations include different inherited proliferative/developmental disorders (RASopathies), as well as sporadic tumors and other nontumoral diseases.
SOS2 in Noonan Syndrome
The RASopathies comprise a defined group of inherited developmental syndromes with partially overlapping clinical features linked to germline mutations affecting different members of the RAS-ERK pathway [61]. The most common RASopathy, Noonan Syndrome (NS), is an autosomal dominant condition whose features may include distinctive facial appearance, short stature, broad or webbed neck, congenital heart defects, bleeding problems, skeletal malformations, as well as physical and neurodevelopmental delays and cognitive deficits [62]. SOS1 is the second most frequently mutated gene in this syndrome (~16.5% of cases; up to 70 different mutations described [5,63]). Whereas early screenings reported only SOS1 mutations [62,63], more recent studies have identified a number of SOS2 mutations, including missense activating mutations in seven specific residues located in the SOS2 DH domain (T264K, T264R, E266_M267delins, M267K, M267R, M267T and T376S; Figure 1C) that have thus defined the SOS2-specific NS9 subtype of this syndrome (OMIM #616559) [5,40]. In general, the clinical findings in NS patients harboring SOS2 mutations are similar to those with SOS1-mutated genes, although some SOS2-related variants appear as rare cases of NS with a particular predisposition for lymphatic complications [39]. The most benign lymphatic pathologies in SOS2-mutated patients involve lymphedema of the lower limbs and genitalia, as well as congenital chylothorax. More severe complications that may even cause the death of some patients include chronic, progressive lymphedema with associated chylothorax, pleural effusions, and chronic lymphopenia [39]. A recent phenotype-genotype correlation study has revealed an association between mutant SOS2 variants in NS patients and lower diastolic and systolic blood pressures, as well as a lower percentage of body fat [64]. The first prenatal case of NS with SOS2 mutations was reported in a euploid fetus with a severe increase in nuchal translucency and other relevant anomalies noticeable at ultrasound study, as well as markers of aneuploidies, caused by a de novo heterozygous missense mutation in the SOS2 gene (c.800 T > A; p.M267K) [40].
SOS2 in Sporadic Cancers
Although mutated SOS2 has not yet been recognized as a cancer driver, at least 253 mutations in the SOS2 gene (195 missense, 45 synonymous, 12 truncating, and one splice-site) have been detected so far in sporadic tumors (https://www.intogen.org/search?gene=SOS2 (accessed on 20 June 2021)). In this regard, direct exome sequencing has detected the presence of missense-activating mutations in SOS2 in a small percentage of gallbladder carcinomas [65], and also in a subtype of desmoplastic melanomas [66]. Recent analysis of gene expression profiles has also reported MuD-dependent upregulation of SOS2 expression in cohorts of TCGA glioblastomas (GBM), and a correlation between high expression of the two genes and longer survival of proneural GBM patients [67]. Finally, a whole-exome sequencing analysis carried out on non-small cell lung cancer samples demonstrated a direct correlation between SOS2 and resistance mechanisms to osimertinib [38].
These observations certainly warrant further evaluation of SOS2 as a potential therapeutic target for oncogenic processes in vivo. In this regard, single SOS2 depletion did not show any therapeutic benefit in a model of chemically induced skin carcinogenesis, but combined SOS1/2 depletion exhibited significantly stronger beneficial effects in comparison to single SOS1-KO mice [21], supporting at least a partial functional involvement of SOS2, and its consideration as a potential therapy target, in RAS-driven tumors. Consistent with this notion, genetically mediated silencing of the human SOS2 gene by means of miRNAs or CRISPR/Cas9 also produces significant therapeutic benefits in different in vitro models, including human tumor cell lines (Table 2).
Table 2 (excerpt): SOS2 participates in anchorage-independent growth; reduces cell viability [32]. CRISPR/Cas9, SW620 (colorectal cancer): SOS2 participates in anchorage-independent growth [32]. CRISPR/Cas9, NCI-H1299 NSCLC cells (lung cancer): SOS2 participates in anchorage-independent growth [32]. CRISPR/Cas9, YAPC cells (pancreatic cancer): reverts the transformed phenotype of KRAS oncogenic cells [31].
Although oncogenic RAS proteins are constitutively activated (not needing, in theory, the action of upstream GEFs to become GTP-loaded), different reports have demonstrated that cross-activation of wild-type RAS (which is SOS-dependent) by oncogenic mutated RAS is of critical importance for tumorigenic development in mutant RAS-driven tumors [71,72]. In this regard, it is highly relevant to mention recent experimental evidence indicating that SOS1 and SOS2 may play specific, nonoverlapping functions in RAS-driven oncogenic cells. In particular, it has been reported that RTK-SOS2-WT RAS signaling, but not allosteric SOS2 activation, is a critical mediator of mutant KRAS-driven transformation [31], protecting KRAS-mutated cancer cells from anoikis [32]. Consistent with the notion that SOS1 and SOS2 may exert distinct control over different aspects of wild-type RAS signaling in oncogenic RAS-driven tumors, the same group has also reported a hierarchical requirement for SOS2 to drive mutant RAS-dependent transformation, with KRAS being the most SOS2-dependent form (KRAS > NRAS > HRAS) [57].
SOS2 in Non-Tumoral Pathologies
Reports linking SOS2 alterations with other non-tumoral pathologies are very limited but specific. Thus, SOS2 has been proposed as a susceptibility locus for initiation of Alzheimer's disease [73]. In particular, two single nucleotide polymorphisms (SNPs) were characterized in SOS2 that are significantly associated with late-onset Alzheimer's disease, suggesting that SOS2 may be a male-specific risk factor for Alzheimer's disease [73]. Mutations in SOS2 have also been reported in association with metabolic cutis laxa disease [74]. GWAS analysis also supports a genetic association between SOS2 and chronic periodontitis-related pathologies, especially in adults [75], as well as with elevated intraocular pressure (lead SNP rs72681869; G > C) [76].
"Biology"
] |
The rubber hand illusion in microgravity and water immersion
Our body has evolved in terrestrial gravity and altered gravitational conditions may affect the sense of body ownership (SBO). By means of the rubber hand illusion (RHI), we investigated the SBO during water immersion and parabolic flights, where unconventional gravity is experienced. Our results show that unconventional gravity conditions remodulate the relative weights of visual, proprioceptive, and vestibular inputs favoring vision, thus inducing an increased RHI susceptibility.
A control experiment was carried out to test whether, in the 0g condition of the Parabolic flight experiment, a Jendrassik effect (i.e., a maneuver used to enhance sluggish tendon-tap reflexes during medical examination) could have broadly influenced body muscle tonus during the RHI procedure. To investigate this aspect, in this control experiment, participants underwent the RHI while holding a strap belt, fixed to the floor, in their left (not-illuded) hand in two conditions: in the No force condition, participants simply held the strap belt without applying any force; in the Force condition, they were asked to hold the strap with force, keeping their muscles tense. This latter condition was employed to elicit the Jendrassik effect.
Participants
Sixteen healthy volunteers (5 men; mean age ± SD = 24.8 ± 2.6 years; years of education = 17.8 ± 1.4) were recruited for the control experiment. All participants were right-handed, as assessed with the Edinburgh Handedness Inventory (Oldfield, 1971), naive to the experimental procedure, and before taking part in the study, they gave written informed consent. None of them had a history of neurological, major medical, or psychiatric disorders. The experimental procedure was approved by the local Ethics Committee of the University of Turin (prot. n. 133278, 07/03/19).
Experimental set-up and procedures
The very same set-up employed in the Parabolic flight experiment was used in the present experiment, which aimed to control for a possible Jendrassik effect that could have broadly influenced body muscle tonus during 0g in the Parabolic flight experiment. In the present control experiment, the same experimental apparatus and automated tactile stimulation were used. The experimental procedures mirrored those of the Parabolic flight experiment, with the exception that the control experiment was carried out in an ordinary laboratory (with normal gravity). The first trial was dedicated to baseline proprioceptive judgments, to obtain baseline data regarding participants' perceived position of their right index finger without any experimental manipulation (i.e., No force condition). Five proprioceptive judgments were collected. The second trial was dedicated to baseline proprioceptive judgments in the Force condition, the one employed to elicit a Jendrassik effect. To do so, participants were asked to hold a strap with the left hand and to pull it with their maximum force. Importantly, participants held the same strap in the No force condition, but without using any force (i.e., the strap was passively held in the hand). In the remaining 56 trials (i.e., 28 trials of the No force and 28 trials of the Force condition), the RHI was induced, and the experimental procedures were identical to those of the Parabolic flight experiment (see main text for details on how the RHI was performed). After each trial of RHI administration, we collected objective and subjective RHI measurements. In other words, we collected a total of 56 proprioceptive judgments and 56 body ownership ratings per subject: 14 after the synchronous stimulation in the No force condition, 14 after the asynchronous stimulation in the No force condition, 14 after the synchronous stimulation in the Force condition, and 14 after the asynchronous stimulation in the Force condition. Electromyographic (EMG) activity was recorded during each trial to ensure that participants passively (in the No force condition) or actively (in the Force condition) held the strap.
EMG recording
EMG activity was recorded from the flexor digitorum communis of participants' right arm by pairs of Ag-AgCl surface pregelled electrodes (24mm diameter), following standard skin preparation. The electrodes were connected to a Biopac MP-150 electromyograph (Biopac Systems Inc., Santa Barbara, CA). The EMG signal was acquired at 10 kHz sampling rate, amplified, filtered with a band-pass (10-500 Hz) and a notch (50 Hz) filter and stored on a PC for offline analysis. Each recording epoch lasted about 18 seconds: the automated system device for RHI stroking triggered the EMG at the beginning of the (12 seconds) tactile stimulation, and an experimenter manually ended the EMG epoch recording after the participant's rating about body ownership.
As for the Parabolic flight experiment, single subjects' answers (proprioceptive judgments, questionnaire ratings) were analyzed. The proprioceptive drift was calculated separately for the No force and Force conditions as the difference between the mean of the proprioceptive judgments collected in the first and in the second trials, respectively, and each of the proprioceptive judgments collected after each RHI procedure. All the observations were normalized in z-scores separately for the proprioceptive drift and the body ownership question, calculated within-subjects across conditions (i.e., synchronous No force, asynchronous No force, synchronous Force, asynchronous Force), and entered into a Linear Mixed Model (LMM) analysis. Hence, we ran separate LMM analyses (one for proprioceptive drift and one for the body ownership question) in R (version 4.0.0, https://www.r-project.org/), using the lme4 package [28]. In both LMM models, we included the proprioceptive drift and the body ownership question as dependent variables, and we parameterized them into the combined variables Strength (No force; Force) and Condition (synchronous; asynchronous), resulting in the following conditions: synchronous No force, asynchronous No force, synchronous Force, asynchronous Force. For the proprioceptive drift and for the body ownership question, we separately investigated the main effects of Strength (No force vs Force, irrespective of the condition) and Condition (synchronous vs asynchronous, irrespective of the strength), and then the specific effects within the Strength × Condition parameterization. Hence, we ran, between conditions, simultaneous tests for general linear hypotheses with multiple comparisons of the means by employing Tukey contrasts (Bonferroni corrected). Participants' age and gender were added as fixed effects and subjects' ID as a random effect. We used LMMs to mirror the Parabolic flight experiment's analyses. The EMG activity of the flexor digitorum communis was recorded in each subject and represented as a root-mean-square (RMS) value. Again, an LMM was performed including the RMS as the dependent variable. Since the aim of this analysis was to ensure that participants passively (No Force condition) or actively (Force condition) held the strap belt, we compared the EMG activity acquired in these two conditions, regardless of the RHI tactile stimulation (synchronous vs asynchronous).
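A minimal R sketch of the mixed-model analysis described above is given here for illustration; the data frame, column names, and factor codings are assumptions rather than the authors' actual code, and the multcomp package is used for the simultaneous Tukey contrasts.

```r
# Hypothetical sketch, assuming a trial-level data frame 'dat' with columns:
# drift (z-scored proprioceptive drift), strength ("NoForce"/"Force"),
# condition ("sync"/"async"), age, gender, and id (subject identifier).
library(lme4)
library(multcomp)

dat$cell <- interaction(dat$strength, dat$condition)  # Strength x Condition cells

# Mixed model: fixed effects for the four cells plus age and gender,
# random intercept per subject
m <- lmer(drift ~ cell + age + gender + (1 | id), data = dat)

# Simultaneous tests for general linear hypotheses, Tukey contrasts between
# the four cells, Bonferroni-adjusted p-values
summary(glht(m, linfct = mcp(cell = "Tukey")), test = adjusted("bonferroni"))
```

The same model structure, with the trial-wise EMG RMS as the dependent variable, would serve for the No force vs Force comparison described at the end of this section.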
Results
Both in the objective (i.e., proprioceptive drift) and subjective (i.e., embodiment question) measurements, a significant effect of Condition was found (proprioceptive drift: z = 24.28, p < 0.0000001; embodiment question: z = 39.59, p < 0.0000001), with, as expected, higher values after the synchronous than after the asynchronous stimulation. Therefore, the classical RHI effect emerged. Importantly, we found neither a Strength effect (proprioceptive drift: p = 0.717; embodiment question: p = 0.428) nor any effect within the Strength × Condition parameterization (proprioceptive drift: No force syn vs Force syn: p = 1; No force asyn vs Force asyn: p = 1; embodiment question: No force syn vs Force syn: p = 1; No force asyn vs Force asyn: p = 1), suggesting that the recruitment of body muscles in the Force condition did not affect the classical RHI experience (as in the No Force condition). Importantly, the EMG results confirmed that participants recruited body muscles in the left (not-illuded) hand only in the Force condition (i.e., when they actively held the strap belt, as to elicit a Jendrassik effect; No Force vs Force: p < 0.0001). Altogether, the results of this control experiment suggest that even if, in the 0g condition of the Parabolic flight experiment, a Jendrassik effect had been elicited, it did not broadly influence body muscle tonus during the RHI procedure. Therefore, the modulation of RHI measurements obtained in the microgravity condition of the Parabolic flight experiment cannot be ascribed to a pure "muscular" effect.
"Environmental Science",
"Physics",
"Psychology"
] |
Activity Graph Feature Selection for Activity Pattern Classification
Sensor-based activity recognition is attracting growing attention in many applications. Several studies have been performed to analyze activity patterns from an activity database gathered by activity recognition. Activity pattern classification is a technique that predicts class labels of people such as individual identification, nationalities, and jobs. For this classification problem, it is important to mine discriminative features reflecting the intrinsic patterns of each individual. In this paper, we propose a framework that can classify activity patterns effectively. We extensively analyze activity models from a classification viewpoint. Based on the analysis, we represent activities as activity graphs by combining every combination of daily activity sequences in meaningful periods. Frequent patterns over these activity graphs can be used as discriminative features, since they reflect people's intrinsic lifestyles. Experiments show that the proposed method achieves high classification accuracy compared with existing graph classification techniques.
Introduction
The advances in sensor technology make activity recognition possible. Activity recognition is a technique that automatically recognizes human activities by analyzing sensor data [1][2][3][4][5][6]. Recently, several studies [7][8][9][10] have been performed to analyze activity patterns from an activity database gathered by activity recognition. Activity pattern classification is a technique that predicts class labels of people based on activity patterns. The class labels can be not only individual identification but also meaningful groups such as nationalities, genders, jobs, and hobbies. Therefore, activity pattern classification has many applications which vary the class labels accordingly. For example, recommender systems can recommend similar items to users with the same hobbies.
Accurate classification of activity patterns requires a deep understanding of activity patterns. Activity patterns are styles in which people perform their activities and they reflect people's lifestyles. People have both intrinsic and common activity patterns. The intrinsic patterns play a key role in distinguishing each individual from the others. They can appear in individuals' specific activities in different frequencies [11,12], orders, days, and periods [13]. In order to find these intrinsic activity patterns, we need to explore activity patterns by adjusting the frequencies in all combinations of days. For example, certain people perform their hobbies on weekends, but others may perform the same hobby once a month.
However, it is hard to explore frequent activity graphs in all combinations of days. This naïve approach requires a very long running time and produces redundant frequent patterns. People's lifestyles themselves suggest a solution to this problem. People usually repeat similar activity patterns in specific periods such as daily, weekly, monthly, and yearly. Therefore, we only need to explore activity patterns from every combination of days in specific periods. By avoiding the exploration of meaningless combinations, we can reduce the search space in order to find discriminative features. From a classification viewpoint, these features are as effective as the features from all the combinations of days.
The other important point is to determine pattern types for activity pattern classification. The types are determined depending on activity data models. Various activity data modeling studies have been proposed to represent activity data collected from sensors, such as statistics-, sequence-, and graph-based modeling. Statistics-based activity models [7,8] calculate the average frequency or duration of each activity.
Sequence-based activity models [9] represent activities as daily sequences based on the occurrence time. Graph-based activity models [10] generate activity graphs by combining activity sequences in various periods such as daily, weekly, monthly, and yearly. Nodes and edges of the activity graphs represent activities and the occurrence order between two activities, respectively. Among these modeling techniques, the graph-based activity model is suitable for the activity pattern classification problem. This model can maintain sufficient information to mine the intrinsic activity patterns, such as frequencies, occurrence orders, and various meaningful periods of activities.
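To make the graph-based model concrete, the following R sketch builds a toy activity graph from two daily activity sequences; the activity labels and data are illustrative only, and the igraph package is used here as a convenient graph library rather than the representation used in the cited work.

```r
# Toy sketch: nodes are activities, directed edges are "A followed by B"
# transitions, and edge weights count how often each transition occurs.
library(igraph)

seqs <- list(c("sleeping", "toileting", "sleeping", "eating"),
             c("sleeping", "eating", "sleeping"))

# One row per consecutive pair of activities in each daily sequence
edges <- do.call(rbind, lapply(seqs, function(s)
  data.frame(from = head(s, -1), to = tail(s, -1))))

# Collapse duplicate transitions into a single weighted edge
edges <- aggregate(list(weight = rep(1, nrow(edges))),
                   by = edges[c("from", "to")], FUN = sum)

g <- graph_from_data_frame(edges, directed = TRUE)
E(g)$weight  # transition frequencies used as edge labels
```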
In this paper, we propose a novel feature selection framework for classifying activity patterns effectively. The proposed framework generates activity graphs by combining all combinations of daily activity sequences in each meaningful period. By performing frequent subgraph mining over these graphs, we can find all discriminative frequent activity patterns efficiently, which can reflect individuals' intrinsic lifestyles. In order to remove redundant frequent patterns, we select highly discriminative features by adopting topological similarity based feature selection (TSFS) [14]. Since topologically similar graphs involve similar activity patterns, we can effectively remove redundancy. Through experiments, we show that the proposed framework can achieve a high performance in classifying activity patterns.
The remainder of this paper is organized as follows. We briefly introduce the existing activity modeling and graph classification studies as related work in Section 2. We analyze the effectiveness of various types of activity patterns in Section 3. In Section 4, we define the activity pattern classification problem and propose our feature selection framework in detail. Section 5 presents the experimental results of the proposed framework, and Section 6 concludes this paper.
Related Work
Activity recognition has gained a lot of interest in recent years due to its potential and usefulness in context-aware computing such as smart homes [3][4][5][6] and aged care monitoring [7,8]. The purpose of activity recognition is to infer people's behaviors from low level data acquired through sensors in a given setting, from which other critical decisions are made. There are two approaches to acquire human activities using sensor systems. First, sensors are attached on the body and the signal readings are interpreted [1,2]. This approach can recognize low level activities such as "sitting," "running," and "walking." Second, sensors are deployed on objects and devices in the environment and the sensor readings are interpreted [3][4][5][6]. This approach can recognize high level activities such as "eating," "sleeping," "showering," and "leaving home." The low level activities are used for short-term activity monitoring such as the elderly falling down. The high level activities are used for long-term activity pattern monitoring such as healthcare.
Mining techniques generally require appropriate data models to find informative patterns to improve effectiveness or efficiency. Various studies have been proposed to represent activity data collected from sensors, such as statistics-, sequence-, and graph-based modeling. Statistics-based activity models [7,8] calculate the average frequency and duration of each activity. Large deviations from the average time or number are considered abnormal activity patterns. Sequence-based activity models [9] represent activities as daily sequences based on the occurrence time. Sequential pattern mining techniques can be applied to activity sequences to mine informative patterns. The graph-based activity model [10] generates activity graphs by combining the daily activity sequences in every monitoring period. The activity graphs can maintain various activity related information through multilabels of nodes and edges such as frequencies, time, durations, and locations of activities. The main advantage of this activity model is that we can analyze activity patterns in various frequencies and periods.
Graph classification studies [14][15][16][17][18] have been proposed to classify graph-modeled data such as chemical compounds, social networks, and XML documents. The techniques represent graphs as feature vectors with values indicating the presence or absence of graph structural features, and the discriminative power of each feature is estimated by feature evaluation criteria including G-tests and information gain (IG). The graphs are then classified by using a machine learning classifier.
The existing techniques mostly adopt frequent subgraphs as graph structural features for classification. Many efficient frequent subgraph mining algorithms have been proposed, such as FSG [11] and gSpan [13]. These algorithms enable us to extract frequent subgraph features in practical time. TSFS [14], MbT [15], and LEAP [16] use frequent subgraph features. COM [17] has shown that cofrequent patterns can have high discriminative powers. Structure feature selection [18] has shown that frequent subgraphs have different discriminative powers according to their spatial distribution in a graph database. However, they cannot achieve high classification performance for activity graphs since they only consider the frequency of the features.
Analyzing Activity Patterns in Various Activity Data Models
Individual lifestyles are hidden in one's frequently performed activity patterns. It has also been shown in the literature [19] that frequent patterns are highly discriminative in various classification problems. Therefore, we adopt frequent patterns as features for this activity pattern classification problem.
Frequent patterns involve different information depending on data models. The representative models for activity data are statistics-, sequence-, and graph-based models. In these models, frequent activity patterns include the frequency of activities, frequent activity sequences, and frequent subgraphs, respectively. We analyze the discriminative power of each type of frequent activity pattern.
The frequency of activities does not involve enough information for activity pattern classification, because it cannot express the occurrence order among activities. People can have similar frequencies but different activity orders. Therefore, the occurrence order is very valuable information. For example, Figure 2 shows parts of activity sequences in two consecutive days. The frequencies of "sleeping," "eating," and "toileting" are 4, 1, and 1 in the activity sequences. From this information, we cannot perceive that the person is suffering from insomnia. This kind of distinct pattern can be helpful in distinguishing people.
Frequent activity sequences involve the occurrence order of activities. However, the orders are valid only in their own sequences, because we have fractions of sequences. Occurrence order relationships among activity sequences, especially when a sequence shares common segments, can provide a more precise occurrence order among activities of different sequences. For example, we can interpret the activity sequences as two different meanings in Figure 1, that is, "eating or toileting" or "eating and toileting" in the middle of sleeping.
Frequent activity subgraphs represent the occurrence order of activities together in multiple activity sequences from various periods. From these graphs, we can get the occurrence order of activities at a similar time in different periods, which is useful knowledge for activity pattern classification. For example, we can accurately interpret an individual's activity patterns as "eating or toileting in the middle of sleeping" from the activity graph in Figure 2.
Classifying Activity Patterns Based on Activity Graphs
The proposed method uses an activity database accumulated from unit activities recognized by various sensors. In this paper, we assume unit activities are recognized exactly. We present the proposed feature selection framework for activity pattern classification. In Section 4.1, we define the activity pattern classification problem. We analyze discriminative activity graph features in Section 4.2 and suggest a method that mines the discriminative features efficiently in Section 4.3. In this paper, our scope is limited to the knowledge discovery process in Figure 2.
Notation.
In this section, we present notations related to the activity graph approach [8] and the formal definition of the activity classification problem.
Definition 1 (unit activity).A unit activity, , is an activity performed in a certain continuous time. represents a unit activity, , performed in time .Unit activities are recognized from sensor data and become nodes in an activity graph.Definition 2 (activity sequence).An activity sequence, = , is a sequence of unit activities, where < +1 .Any unit activity, , can appear multiple times in different .The sequence of activities in a day is regarded as a daily activity sequence, = Definition 3 (modeling period).A modeling period, ( ∈ ), is a time unit used to generate activity graphs. is generally set to a meaningful number of days such as a week ( = 7), a month ( = 30), or a year ( = 365).
Definition 4 (combination days).Given the modeling period, , the combination days of are the all possible combinations of days in , that is, (1 ≤ ≤ ).We denote them as For example, the combination days are 1 2 , 1 3 , . . ., −1 , when Definition 5 (activity graph).An activity graph, = (, , , Σ), is a graph that consists of a set of activity nodes, , and a set of edges, , where an edge, ∈ , represents the order between two activity nodes in . is a set of node and edge labels and Σ is a function assigning labels to nodes and edges.
Activity sequences are generated every day.In order to represent activities of more than one day concisely, we combine corresponding activity sequences and generate activity graphs using multiple sequence alignment (MSA) [18].The number of combined sequences is determined depending on the modeling period, , and combination days, . Figure 3 is an example of an activity graph generated using MSA, when is 3. MSA first combines the activity sequences ( 2 and 3 ) that share the greatest number of common activity nodes.The common activity nodes are represented as a single node and increase the edge label by one.In the same way, activity sequence 1 is combined with 2 and 3 .In this paper, we focus on mining discriminative features for activity pattern classification.In order to mine these features efficiently, we represent activity data as activity graphs by combining every combination of daily activity sequences in each meaningful period and find frequent subgraphs, = { 1 , 2 , . . ., }, from the activity graphs.Among = { 1 , 2 , . . ., }, we finally select highly discriminative features, * = argmax (), to remove redundancy, where (⋅) is a feature evaluation function.
Discriminative Activity Graph Features.
Based on the activity pattern analysis in Section 3, we adopt a graph-based activity data model for classifying activity patterns. In this section, we present the way in which discriminative activity graph features are mined efficiently.
People have their own intrinsic lifestyles that are expressed by activity patterns in different frequencies, orders, days, and periods. Therefore, we should generate activity graphs by combining all combinations of daily activity sequences and perform frequent subgraph mining on the activity graphs so that we can find all discriminative frequent subgraphs.
However, exploring frequent subgraphs from all of these activity graphs is very inefficient, since it requires a long running time and many redundant frequent subgraphs are mined.In order to solve this efficiency problem, we observe people's lifestyles.People repeat similar activity patterns in specific periods according to their own lifestyles.For example, office workers go to their companies on weekdays and have religious activities on the weekends.They may climb a mountain as a hobby every month.Through this observation, we claim that the frequent subgraphs of the combinations of days in a specific period have similar discriminative powers compared with the frequent subgraphs of all combinations of days.We present an example explaining Theorem 9. Figure 4 shows two sets of frequent subgraphs, 7 and 30 , mined in 7 and 30 , which denote weekly and monthly patterns, respectively.Suppose that activity patterns 3 and 4 are for two months. 3 is the activity pattern performed the first week in each of the two months.The support of 3 is 0.25 (= 2/8) in 7 but 1 (= 2/2) in 30 .As shown in this case, the supports of frequent subgraphs increase in larger modeling periods.Conversely, suppose that 4 is the activity pattern performed every week for a month and then performed for two weeks in the second month.For two months, the support of 3 is 0.75 in 7 but 0.5 in 30 . 4 appears to have two different activity patterns in 30 because the activity graph involves the frequency of activities in a modeling period.
Observation 1. People repeat similar daily activity sequences in specific periods according to their own lifestyles; that is, ≅ + .
Claim 1.Given all the activity sequences for days, = { 1 , 2 , . . ., }, and the set of modeling periods, As we observed above, people repeat similar activity patterns in very specific periods; that is, ≅ + .Therefore, Claim 1 is convincing.We find discriminative features by 1 ∪ 2 ∪ ⋅ ⋅ ⋅ rather than .
In each subset of activity sequences, we construct the sets of activity sequences for every .Activity graphs are generated by combining activity sequences in every .We then mine frequent subgraphs in the activity graphs.
Though we generate activity graphs with all possible combinations within the periods, 2 graphs are generated for the period, .Graphs generated in a smaller period are generated again in a larger period.Duplicate generation is a particularly severe problem for combinations of a small number of sequences, since these combinations are involved in most of the larger periods.In order to avoid generating duplicate graphs, we generate graphs from the largest period to the smallest period.In each period, we generate graphs beginning with those that have the largest number of sequences to the smallest number of ones in a combination.In this way, we can avoid generating graphs that were already generated in a larger period.We can efficiently remove a lot of duplicated graphs.Figure 6 shows an example in which duplicate graphs are generated with all possible combinations in a set of modeling periods, = {3, 5}.When we generate activity graphs with all possible combinations in modeling periods 3 and 5, the combinations of sequences, 1 , 2 , and 3 , are duplicated.We generate graphs from the combination days of period 5, 5 , to period 3, 3 .In this way, we can avoid generating graphs that were already generated in 5 .
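The duplicate-avoidance idea above can be sketched in R as follows; the period values, day indexing, and bookkeeping are illustrative assumptions rather than the paper's implementation.

```r
# Generate day combinations from the largest to the smallest modeling period,
# skipping any combination already produced for a larger period.
periods <- sort(c(5, 3), decreasing = TRUE)   # e.g., the P = {3, 5} example
seen <- character(0)
combos_to_model <- list()

for (p in periods) {
  days <- seq_len(p)
  for (k in rev(seq_len(p))) {                       # largest combinations first
    for (cmb in combn(days, k, simplify = FALSE)) {
      key <- paste(cmb, collapse = "-")
      if (!(key %in% seen)) {                        # skip duplicates from larger periods
        seen <- c(seen, key)
        combos_to_model[[length(combos_to_model) + 1]] <- cmb
      }
    }
  }
}
length(combos_to_model)   # unique day combinations to turn into activity graphs
```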
After generating activity graphs, frequent subgraph mining is performed on the activity graphs to extract frequent activity patterns.A number of duplicate frequent subgraphs occur, since activity graphs are generated from the same activity sequences.Many of them have similar graph structures to each other.These redundant patterns degrade the performance in both the accuracy and running time of the classification.
We select highly discriminative features by removing the redundant frequent subgraphs.A number of feature selection methods [14][15][16][17][18][19][20] have been proposed.Among them, we adopt the TSFS approach [14].This approach has proved that topologically similar graphs have similar discriminative powers and a method has been proposed that selects discriminative subgraphs by clustering frequent subgraphs based on their similarity.TSFS is very suitable for an activity pattern classification problem since similar activity graphs involve similar activity patterns.
We generate frequent subgraph clusters, 1 , 2 , . . ., , by clustering the set of frequent subgraphs, = { 1 , 2 , . . ., }.For the clustering, we can use any clustering algorithm such as -means and any graph similarity measure such as graph edit distance [21] or maximum common subgraph [22,23].The highest discriminative frequent subgraph is selected in each cluster.The discriminative power of each frequent subgraph can be estimated by feature evaluation functions such as information gain, mutual information, and 2 statistic.
Figure 7 shows a processing step that selects highly discriminative features from a set of frequent subgraphs, = { 1 , 2 , . . ., }.For example, the frequent subgraphs, 1 , 2 , 7 , and 10 , are topologically similar to each other.These activity patterns also have similar semantics, "sleeplessness." Therefore, they are clustered into the same cluster, 1 .The set of highly discriminative features, * , is selected by selecting only one highly discriminative feature in each cluster.
We then convert each activity graph, = { 1 , 2 , . . ., }, into a set of feature vectors. Equation (1) is a feature-graph matrix, , , that indicates whether each feature, , is present or absent in . We build a classification model by training machine learning classifiers with the converted activity graphs and class labels, ∈ { 1 , 2 , . . ., }. Algorithm 1 is the pseudocode for mining highly discriminative features and training an activity pattern classifier. Algorithm 1 takes the set of activity sequences, , the minimum support, min sup, and the set of modeling periods, , as input and returns a set of highly discriminative features, * , and the activity pattern classification model, . It first mines frequent subgraphs and stores them in in each period (lines 2-4). A set of activity pattern clusters, , is generated by clustering (lines 5-9). Finally, each graph set, , is represented as feature vectors by generating the feature-graph matrix, , and the activity pattern classification model, , is built (lines 10-11).
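A minimal R sketch of this final step follows; graphs, features, labels, and the helper contains_subgraph() are hypothetical placeholders for the objects produced earlier, and the e1071 package is used simply as an example SVM implementation.

```r
# Build the binary feature-graph matrix and train an SVM on it.
library(e1071)

# X[i, j] = 1 if discriminative feature j occurs in activity graph i, else 0;
# contains_subgraph() is a hypothetical subgraph-containment test.
X <- sapply(features, function(f)
  vapply(graphs, function(g) as.numeric(contains_subgraph(g, f)), numeric(1)))
y <- factor(labels)                              # class labels, e.g., user IDs

model <- svm(X, y, kernel = "linear", cross = 5) # 5-fold cross-validated SVM
summary(model)
```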
Experiments
In this section, we experimentally evaluate the effectiveness of our feature selection framework on a real dataset. Figure 8 shows the interface of the system, where each activity is inputted with properties such as activity type, duration, and time. The dataset for each student contains more than 2,000 activities and consists of 19 target activities, as shown in Tables 1 and 2.
In order to show the effectiveness of our method, we conduct the following experiments. Modeling periods, , are set to 1, 2, 5, and 7, meaning a day, a weekend, the weekdays, and a week, respectively. To classify activity patterns, we use user IDs as class labels and perform fivefold cross validation using an SVM classifier: (1) effectiveness of graph features: comparison of classification accuracy among features from various activity data modeling methods; (2) the best minimum support threshold for frequent activity subgraphs: comparison of classification accuracy under various minimum supports; (3) discriminative features in each period: comparison of classification accuracy between the proposed method and a naïve method extracting features in a single period; (4) effectiveness of TSFS: comparison of classification accuracy among feature selection algorithms, top-k, MMRFS [19], and TSFS; (5) ineffectiveness of conventional graph classification: comparison of classification accuracy between the proposed method and conventional graph classification algorithms such as the model-based search tree (MbT) [15] and maximal marginal relevance feature selection (MMRFS) [19].
Effectiveness Evaluation.
The first experiment aims to show the effectiveness of graph features.We compare classification accuracy among features mined from statistics-, they do not consider the topological similarity for feature selection.
Conclusion
Classification is an important technique for analyzing activity data. We have defined the activity pattern classification problem and proposed an effective feature selection framework for classifying activity patterns. We have shown that a graph model is effective for activity pattern classification because the activity graphs reflect individuals' specific activities in different frequencies, orders, days, and periods. By analyzing the lifestyles of people, who repeat similar activity patterns in specific periods, we have proposed an effective and efficient feature selection technique. We select frequent activity patterns in all combinations of daily sequences in meaningful periods.
Experimental results have shown that the proposed method achieves (1) a suitable model for activity pattern classification, (2) better discriminative power when extracting combinational features than when extracting features in a single period, and (3) higher accuracy than that of existing graph classification methods. In addition, we discussed the optimal
Figure 1: Sequence- and graph-based activity data models.
Figure 2: Overview of the proposed method.
Figure 6: Example of duplicate graphs generated with a set of modeling periods.
Figure 13: Comparison of the accuracy between graph modeling techniques.
).Given the set of activity sequences, = { 1 , 2 , . . ., }, for people, and a set of class labels, { , , } =1 , ∈ { 1 , 2 , . . ., }, an activity pattern classification is a problem that predicts the class label, , for the subset of an activity sequence, .Definition 7 (frequent activity pattern).A frequent activity pattern, f, is a subgraph of that occurs no less than the minimum support.A support of subgraph f, denoted by support (f), is calculated as | ⊆ |/||, where | ⊆ | is the number of graphs containing subgraph and || is the total number of activity graphs in an activity graph DB.Definition 8 (discriminative feature).Given the set of activity graphs, = { 1 , 2 , . . ., }, with and = { 1 , 2 , . . ., }; a discriminative feature is a frequent subgraph that can be mined only in a specific set of activity graphs, .
Theorem 9 .
For each , given the set of activity graphs, = { 1 , 2 , ..., }, with , can have discriminative frequent subgraphs, = { 1 , 2 , ..., }. Proof. Suppose that we generate all possible combinations within a modeling period; each combination of an activity sequence is unique. Moreover, daily activity sequences are different in a specific period. Therefore, we generate different graphs with each combination of days of , and each graph set, , with can have discriminative frequent subgraphs, = { 1 , 2 , ..., }. Given the set of activity graphs, = { 1 , 2 , ..., }, with a set of modeling periods, = { 1 , 2 , ..., }, each graph set, , with can have discriminative frequent subgraphs, = { 1 , 2 , ..., }. Proof. We prove it by contradiction. Suppose that a set of frequent subgraphs, = { 1 , 2 , ..., }, is mined in = { 1 , 2 , ..., } by varying . Assume that, for all and ( < ), two conditions, − = 0 and − = 0, always hold. We show a case in which these conditions do not hold. Suppose a certain frequent subgraph, , is in . It is possible that does not appear in , since we have < . Therefore, we have ̸ ⊂ . This also means that the support of in is no less than the support of in . Therefore, ∈ ∧ ∉ is also possible when we mine frequent subgraphs with the same minimum support in and . Since − = 0 and − = 0 do not always hold, can have frequent subgraphs not containing , and vice versa.
"Computer Science"
] |
Identification of Heparan Sulfate in Dilated Cardiomyopathy by Integrated Bioinformatics Analysis
Objectives Heparan sulfate (HS) forms heparan sulfate proteoglycans (HSPGs), such as syndecans (SDCs) and glypicans (GPCs), to perform biological processes in the mammals. This study aimed to explore the role of HS in dilated cardiomyopathy (DCM). Methods Two high throughput RNA sequencing, two microarrays, and one single-cell RNA sequencing dataset of DCM hearts were downloaded from the Gene Expression Omnibus (GEO) database and integrated for bioinformatics analyses. Differential analysis, pathway enrichment, immunocytes infiltration, subtype identification, and single-cell RNA sequencing analysis were used in this study. Results The expression level of most HSPGs was significantly upregulated in DCM and was closely associated with immune activation, cardiac fibrosis, and heart failure. Syndecan2 (SDC2) was highly associated with collagen I and collagen III in cardiac fibroblasts of DCM hearts. HS biosynthetic pathway was activated, while the only enzyme to hydrolyze HS was downregulated. Based on the expression of HSPGs, patients with DCM were classified into three molecular subtypes, i.e., C1, C2, and C3. Cardiac fibrosis and heart failure were more severe in the C1 subtype. Conclusion Heparan sulfate is closely associated with immune activation, cardiac fibrosis, and heart failure in DCM. A novel molecular classification of patients with DCM is established based on HSPGs.
INTRODUCTION
Dilated cardiomyopathy (DCM) is a kind of cardiomyopathy characterized by dilatation of ventricular cavities and impaired systolic function. This cardiovascular disorder is closely associated with progressive heart failure and even sudden cardiac death. The pathogenesis of DCM is a complicated process with high heterogeneity (1). Notably, cardiac inflammation and fibrosis participate in the pathological process of DCM (2). However, the molecular mechanism of this process is still unclear.
Heparan sulfate (HS) is a linear polysaccharide widely expressed on the cell membrane and in the extracellular matrix. It participates in a wide range of biological processes, including cell-cell adhesion and the transduction of intracellular signaling pathways (3). HS functions through binding to different core proteins and forming various kinds of heparan sulfate proteoglycans (HSPGs). HSPGs include the syndecans (SDC1-4) and glypicans (GPC1-6) on the cell membrane and HSPG2, agrin (AGRN), and COL18A1 in the extracellular matrix (3). Over the decades, certain HSPGs, such as SDC1 and SDC4, have been identified to be associated with worse cardiac function and poor prognosis of DCM (4,5). However, these studies did not provide an overview of HS and its underlying mechanism. Since HS is a component of the extracellular matrix and is widely expressed in cardiac fibroblasts (2), it is valuable to investigate the role of HS in cardiac fibrosis in DCM.
In this study, we integrated high throughput sequencing, microarray, and single-cell RNA sequencing datasets to illustrate the role of HS in DCM. We further explored a molecular classification of DCM based on HS heterogeneity among patients with DCM.
(Figure panel caption: (D) Correlation between HSPGs, cardiac fibrosis, and heart failure; red, higher correlation; green, lower correlation. *p < 0.05, ***p < 0.001.)
Data Acquisition and Preprocessing
Raw data of DCM and normal heart tissues were acquired from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/), including GSE116250 (6), GSE141910, GSE42955 (7), GSE5406 (8), and GSE121893 (9). The details of the transcriptome datasets are shown in Supplementary Table S1. The downloaded data underwent a probe-gene symbol transformation with Perl (strawberry-Perl-5.32.1.1, https://strawberryperl.com/) according to their platform files. The high throughput RNA sequencing datasets were used as the training data. These two datasets were batched and normalized to eliminate the batch effects caused by different experiments and platforms through the R package "sva" (version 3.42.0) (10). The R function "ComBat" was used to eliminate the batch effects, and the R functions "rbind" and "normalizeBetweenArrays" were used to bind and normalize these datasets.
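A hedged sketch of this merging and batch-correction step is shown below; the object names, matrix orientation, and dataset labels are illustrative assumptions rather than the authors' code (normalizeBetweenArrays is provided by the limma package).

```r
# 'expr1' and 'expr2' are assumed gene-by-sample expression matrices from the
# two RNA-seq datasets, with rows already matched by gene symbol.
library(sva)
library(limma)

expr  <- cbind(expr1, expr2)                     # merge samples from both datasets
batch <- c(rep("GSE116250", ncol(expr1)),
           rep("GSE141910", ncol(expr2)))        # batch label per sample

expr_combat <- ComBat(dat = as.matrix(expr), batch = batch)  # remove batch effects
expr_norm   <- normalizeBetweenArrays(expr_combat)           # cross-sample normalization
```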
Principal Component Analysis (PCA)
The pre-and post-batch-normalized data were tested by sample clustering analysis via PCA (11). R function "prcomp" was used to perform PCA. Dot plots were presented by R package "ggplot2" (version 3.3.5).
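A minimal sketch of the PCA check, under the assumption that expr_norm holds genes in rows and samples in columns and that batch labels the samples:

```r
library(ggplot2)

# Constant (zero-variance) genes should be filtered out before scaling.
pca <- prcomp(t(expr_norm), scale. = TRUE)       # transpose so samples are rows
df  <- data.frame(PC1 = pca$x[, 1], PC2 = pca$x[, 2], batch = batch)

# Samples should no longer separate by batch after correction
ggplot(df, aes(PC1, PC2, colour = batch)) +
  geom_point(size = 2) +
  theme_classic()
```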
Identification of Differential Expressed Genes (DEGs)
After a distribution analysis, the expression of most genes was non-normally distributed. Thus, differential analysis of the transcriptome data and immunocytes between groups was conducted based on a non-parametric test using the R function "pairwise.wilcox.test" under a Bonferroni method. The expression levels of HS genes in DCM and normal heart were shown in the heatmap using the R function "pheatmap."
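For illustration, a sketch of this test and of the HSPG heatmap follows; the vector and matrix names are assumptions.

```r
# Non-parametric group comparison for one gene across all samples:
# gene_expr is a numeric expression vector; group is a factor with levels "DCM"/"Normal".
pairwise.wilcox.test(gene_expr, group, p.adjust.method = "bonferroni")

# Heatmap of HSPG expression (hspg_expr: genes x samples)
library(pheatmap)
pheatmap(hspg_expr, scale = "row",
         annotation_col = data.frame(Group = group,
                                     row.names = colnames(hspg_expr)))
```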
Pathway Enrichment Analysis
In this study, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway gmt file was downloaded from gene set enrichment analysis (GSEA, http://www.gsea-msigdb.org/gsea/index.jsp). This file was further used for gene set variation analysis (GSVA) to calculate the enrichment score of pathways with the R packages "GSVA" (version 1.42.0) and "GSEABase" (version 1.56.0) (14). After a distribution analysis, the enrichment score of most pathways was non-normally distributed. Then the differential analysis of the enrichment score was conducted with the R function "pairwise.wilcox.test" under a Bonferroni method according to the group division. The enrichment score was shown in the bar plot and heatmap.
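A rough sketch of this enrichment step follows; the gmt file name is an example, the classic matrix interface of gsva() is shown (newer GSVA releases use parameter objects), and the quoted pathway identifier is assumed to match the MSigDB KEGG naming.

```r
library(GSVA)
library(GSEABase)

kegg <- getGmt("c2.cp.kegg.v7.4.symbols.gmt")    # KEGG gene sets downloaded from MSigDB
es   <- gsva(as.matrix(expr_norm), kegg)         # pathway x sample enrichment scores

# Differential enrichment of a single pathway between DCM and normal, as for genes
hs_path <- "KEGG_GLYCOSAMINOGLYCAN_BIOSYNTHESIS_HEPARAN_SULFATE"
pairwise.wilcox.test(es[hs_path, ], group, p.adjust.method = "bonferroni")
```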
Identification of DCM Subtypes
Heparan sulfate proteoglycans were chosen as candidate genes to identify the DCM subgroup through a non-negative matrix factorization (NMF) clustering using the R package "NMF" (version 0.23.0) (15). The value of k where the magnitude of the cophenetic correlation coefficient began to fall at a steep amplitude was taken as the optimal number of clusters. R function "consensusmap" was used to plot the heatmap of clustering with the k value from 2 to 10. R function "pairwise.wilcox.test" was applied to compare the index of cardiac fibrosis and heart failure between multiple subclusters using a Bonferroni method.
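The subtype-discovery step can be sketched as follows; the matrix name, gene list, run count, and seed are assumptions used only to illustrate the NMF rank survey and final clustering.

```r
library(NMF)

hspg_mat <- expr_norm[hspg_genes, dcm_samples]   # non-negative HSPG x DCM-sample matrix

# Survey ranks 2-10; the cophenetic coefficient per rank guides the choice of k
estim <- nmf(hspg_mat, rank = 2:10, nrun = 30, seed = 123456)
plot(estim)                                      # look for the drop in cophenetic correlation

res_k3  <- nmf(hspg_mat, rank = 3, nrun = 30, seed = 123456)
subtype <- predict(res_k3)                       # C1/C2/C3 assignment per sample
consensusmap(res_k3)
```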
Single-Cell RNA Sequencing Analysis
GSE121893 was preprocessed and analyzed using the R package "Seurat" (version 4.0.5) (18). This dataset contains heart specimens with coronary heart disease (n = 2), DCM (n = 3), and normal heart (n = 2). The cardiac fibroblasts from DCM patients were selected due to the cell annotation in the supplementary files of the original draft (9). Analysis of single-cell clustering and cell trajectory was conducted with R packages "Seurat", "monocle" (version 2.22.0), and "celldex" (version 1.4.0).
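For orientation, a standard Seurat clustering workflow for the selected fibroblasts might look like the sketch below; the parameter values are common defaults rather than necessarily those used in the original analysis, and the monocle trajectory step is omitted.

```r
library(Seurat)

sc <- CreateSeuratObject(counts = fib_counts)    # DCM fibroblast count matrix (assumed object)
sc <- NormalizeData(sc)
sc <- FindVariableFeatures(sc)
sc <- ScaleData(sc)
sc <- RunPCA(sc)
sc <- FindNeighbors(sc, dims = 1:20)
sc <- FindClusters(sc, resolution = 0.5)         # yields the fibroblast clusters
sc <- RunUMAP(sc, dims = 1:20)

FeaturePlot(sc, features = c("SDC2", "COL1A1", "COL3A1"))
```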
Correlation Analysis
The correlation analyses between genes, immunocytes, and pathways were conducted with the R function "cor.test" using Spearman's method. The results of correlation analyses were shown with the R package "corrplot" (version 0.90), "ggplot2" and "igraph" (version 1.2.7).
Protein-Protein Interaction (PPI) Network
Protein-protein interaction network of HSPGs was downloaded from the STRING database (https://www.string-db.org/) and visualized by Cytoscape (version 3.9.1).
HSPGs and Cardiac Fibrosis and Heart Failure in DCM
Multiple datasets were combined and normalized to eliminate batch effects (Supplementary Figures S1A-D). Previously reported HSPGs were extracted and shown in a PPI network (Figure 1A). Via a differential analysis, we found that the enrichment score of HSPGs in DCM was significantly upregulated in both the training and validation datasets (Supplementary Figures S1E,F). Specifically, the expression of most SDCs and GPCs was significantly upregulated in DCM when compared to normal heart (logFC > 1 and p < 0.05, Figure 1B and Table 1). GPC5 was the only downregulated HSPG in DCM, with relatively low expression abundance (Figure 1C). The extracellular-matrix HSPGs, such as HSPG2, AGRN, and COL18A1, were all significantly upregulated. The levels of cardiac fibrosis and heart failure markers in DCM were significantly increased (logFC > 1 and p < 0.05, Figure 1B), as indicated by collagen type I alpha 1 chain (COL1A1), collagen type I alpha 2 chain (COL1A2), collagen type III alpha 1 chain (COL3A1), natriuretic peptide A (NPPA), and natriuretic peptide B (NPPB). Via a correlation analysis, we found that SDC2, SDC3, GPC6, HSPG2, AGRN, and COL18A1 were statistically correlated with the above five indexes (Figure 1D).
HSPGs in Cardiac Fibroblasts of DCM
To further understand the association between HSPGs and cardiac fibrosis, a single-cell RNA sequencing analysis of cardiac fibroblasts was performed. A total of 973 cardiac fibroblasts were classified into seven clusters, indicating the heterogeneity of fibroblasts in DCM (Figure 2A). The cell trajectory plot is shown in Figure 2B and C. The expression abundance of SDC2, SDC4, and GPC1 was among the top three ( Figure 2D). By visualizing SDC2, SDC4, GPC1, COL1A1, COL1A2, and COL3A1, we found that SDC2 was highly correlated with the index of cardiac fibrosis when compared to SDC4 and GPC1 (Figures 2E-K). These results revealed that SDC2 may play a key role in the cardiac fibroblasts of DCM.
Immunocyte Infiltration in DCM and Normal Hearts
Immunocyte infiltration is an important driver of cardiac fibrosis.
To investigate whether HSPGs are associated with inflammation and the immune microenvironment in DCM, immunocyte infiltration was first calculated by two independent methods that include CIBERSORT and MCPcounter. For the CIBERSORT manner, the fractions of 22 kinds of immunocytes in DCM and normal heart are shown in Figure 3A. Naïve B cells, CD8 + T cells, M0 macrophages, M1 macrophages, and dendritic cells were significantly higher in DCM than in normal heart. Memory resting CD4 + T cells, activated NK cells, M2 macrophages, and eosinophils were downregulated in DCM. For the MCPcounter manner, the abundance of 10 kinds of immunocytes in DCM and normal heart is shown in Figure 3B. CD8 + T cells, cytotoxic lymphocytes, and fibroblasts were upregulated, while NK cells, myeloid dendritic cells, neutrophils, and endothelial cells were downregulated in DCM. Via a correlation analysis, we found that HSPGs were widely correlated with immunocytes in DCM (Figures 3C,D).
HS Metabolic Genes and Immunocytes, Cardiac Fibrosis and Heart Failure in DCM
The mechanism of HSPG overexpression was investigated. First, the process of HS polysaccharide metabolism is visualized in Figure 4A. Via GSVA, we aimed to explore whether the HS metabolic pathway was altered in DCM. Significantly altered KEGG metabolic pathways are shown in Figure 4B. The HS biosynthetic pathway was ranked third, which indicated that HS biosynthesis was remarkably activated in DCM. Available HS metabolic genes were obtained and are shown in Figure 4C. Via a correlation analysis, we found that three important enzymes involved in HS synthesis, Exostosin 1 (EXT1), Exostosin 2 (EXT2), and SULF1, were significantly correlated with the abovementioned cardiac fibrosis and heart failure indexes (p < 0.05, Figure 4D). Notably, the expression of heparanase (HPSE), the only enzyme that hydrolyzes HS, was downregulated (logFC = −0.51, p < 0.05).
(Figure panel captions: (C) Heatmap of HS metabolic genes in DCM and normal specimens; red, higher expression; blue, lower expression. (D) Correlation between HS biosynthesis genes, cardiac fibrosis, and heart failure; red, higher correlation; green, lower correlation. *p < 0.05, **p < 0.01, ***p < 0.001.)
Identification of DCM Subtypes Based on HSPGs
Considering the importance of HSPGs in DCM, this set of genes was then applied to identify molecular subtypes using an NMF manner. NMF clustering heatmaps and cophenetic correlation coefficient plots are shown in Supplementary Figures S2, S3.
According to the cophenetic correlation coefficients, k was chosen as 3 for the optimal number of clusters (Figure 5A). Then, 203 patients with DCM in the training dataset were clustered into three subtypes, i.e., C1 (n = 79), C2 (n = 40), and C3 (n = 84). Expression levels of COL1A1, COL1A2, NPPA, and NPPB were significantly higher in C1 than in C2 and C3 (Figures 5B-F). Furthermore, the clustering of DCM was validated using the microarray datasets (Supplementary Figures S4A-F). Similarly, cardiac fibrosis and heart failure were more severe in the C1 subtype of the validation dataset. Subsequently, HS-related genes, immune-related pathways, and immunocytes in the subtypes of DCM were shown in the heatmap (Figure 5G). For HSPGs, the expressions of SDC1, SDC3, GPC1, HSPG2, and AGRN were significantly higher in the C1 subtype than in the C2 and C3 subtypes (p < 0.05). GPC4 and GPC5 were mainly expressed in the C2 subtype. No significant alteration was observed in the C3 subtype. For immune pathways, the complement and coagulation cascades and antigen processing and presentation pathways were enriched in the C1 subtype, while no immune pathway was enriched in the C2 and C3 subtypes. For immunocytes, the abundances of T cells and the monocytic lineage were higher in the C1 subtype. No immunocytes were significantly infiltrated in the C2 and C3 subtypes.
(Figure 5G caption: red represents a higher expression/score and blue represents the opposite; C1* and C2* mark significantly higher expression in the C1 and C2 subtypes, respectively; immunocyte names originate from the official files and software of CIBERSORT and MCPcounter. *p < 0.05, **p < 0.01, ***p < 0.001, ns = not significant.)
Screening Hub HSPGs and Immunocytes in C1 Subtype
Since C1 subtype was identified as the most severe subtype of DCM, we screened the hub HSPGs and immunocytes in C1 subtype via LASSO regression and the SVM-RFE algorithm (Figures 6A,B,F-G). For the HSPGs, SDC1, SDC2, GPC3, GPC4, GPC5, GPC6, and AGRN were the intersection genes calculated by LASSO and SVM-RFE. The receiver operating characteristic (ROC) curves of these genes (Figures 6C-E) showed that SDC1 was the best HSPG for identifying C1 subtype [area under the curve (AUC) = 0.887]. For the immunocytes, naïve B cells, plasma cells, eosinophils (CIBERSORT), T cells, NK cells, neutrophils, and fibroblasts (MCPcounter) were the intersection immunocytes. The ROC curves of these immunocytes (Figures 6H,I) revealed that the abundance of T cells (MCPcounter) was the best immunocyte for identifying C1 subtype (AUC = 0.802). The predicted effectiveness of SDC1 and T cells was confirmed using the validation datasets (Supplementary Figures S4G,H). Further, the correlation networks between SDC1, T cells, and certain immune-associated KEGG pathways are shown in Figure 6J.
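As an illustration of the screening logic, the sketch below shows a LASSO fit and a single-gene ROC analysis in R; glmnet and pROC are used as example packages, the object names are assumptions, and the SVM-RFE step is omitted for brevity.

```r
library(glmnet)
library(pROC)

x <- t(expr_norm[hspg_genes, dcm_samples])      # samples x HSPG features (assumed objects)
y <- as.numeric(subtype == "C1")                # 1 = C1 subtype, 0 = other DCM subtypes

cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1)  # LASSO with cross-validation
coef(cvfit, s = "lambda.min")                             # non-zero coefficients = candidate hub genes

roc_sdc1 <- roc(y, x[, "SDC1"])                 # how well SDC1 alone separates C1
auc(roc_sdc1)
```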
DISCUSSION
Heparan sulfate has been identified as an important regulator of cardiac inflammation and fibrosis. In this study, an overall upregulation of HS was found to be closely associated with immune activation, cardiac fibrosis, and heart failure in DCM. Moreover, we established a novel molecular classification of DCM based on HSPGs.
Heparan sulfate regulates intracellular signaling by binding signaling molecules with its polysaccharide chains. Certain fibrogenic pathways can be controlled through HS during tissue remodeling. For example, transforming growth factor beta (TGF-β) can induce fibrosis by binding to HS on the cell membrane of fibroblasts; conversely, inhibition of HS blocks the TGF-β signaling pathway and alleviates fibrosis (19). In addition, HS is widely expressed in the extracellular matrix and serves as a critical component of fibrotic tissues. Consistently, our present study revealed overexpression of HS both on the cell membrane and in the extracellular matrix through analyses of DCM transcriptome datasets. Moreover, HS was closely associated with the collagen expression of cardiac fibroblasts at the single-cell level. These results emphasize the potential role of HS in the cardiac fibrosis of DCM.
Although upregulation of HS was found in DCM, the potential mechanism was largely unclear. A previous study hypothesized that HS was induced in pressure-overloaded hearts through transcription of the core protein SDC4 (20). However, the expression of core proteins is mainly determined by the HS polysaccharide content (21). Here, our data showed that the HS polysaccharide biosynthesis KEGG pathway was ranked third among all altered pathways in DCM. HS biosynthesis is a complex biological process, and exostosins (EXTs) and exostosin-like glycosyltransferases (EXTLs) are key enzymes in this process (3). We found that the significant upregulation of EXT1 and EXT2 was closely associated with cardiac fibrosis and heart failure in DCM, which partly explains the potential molecular mechanism of HS biosynthetic activation in DCM. Moreover, HPSE, the only enzyme that hydrolyzes HS, was downregulated in DCM. In summary, we hypothesize that HS overexpression in DCM may be related to both the transcription of core proteins and the alteration of HS metabolism.
Heparan sulfate overexpression in DCM suggests a potential treatment strategy of targeting HS. Currently, PI-88 (an HS analog) has already been used to treat hepatocellular carcinoma and melanoma (22). In addition, our previous work reported an anti-diabetic effect of targeting HS: OGT2115 protected islet β cells against streptozotocin-induced inflammation and apoptosis in mice by regulating intra-islet HS (23). Moreover, inhibition of HS has been reported to protect against fibrosis in the kidney (24), lung (25), and cornea (26). Whether anti-fibrotic therapy targeting HS is also effective in DCM deserves in-depth investigation.
In this study, we further found HS heterogeneity between patients with DCM and identified three molecular subtypes (C1, C2, and C3) based on HS. Cardiac fibrosis and heart failure were more severe in the C1 subtype, suggesting that these patients with DCM may have a poor prognosis. Thus, inhibition of HS may be a potential therapy, especially for patients with the C1 subtype of DCM. Moreover, the characteristics of the C1 subtype were further identified via LASSO regression and SVM-RFE, and the expression of SDC1 and the abundance of T cells were screened as the hub gene and hub immunocyte of the C1 subtype. However, the predictive performance in the validation dataset was not as high as in the training data, probably due to the limited sample size of the microarray datasets.
This study has some limitations. First, it only revealed, through bioinformatics analyses, that HS-related genes were statistically associated with immune activation; the causal relationship requires further in vivo and in vitro experiments. Second, it is difficult to confirm these subtypes of DCM by endocardial biopsy in clinical practice, so other approaches, such as biomarkers in the peripheral blood, are required to identify C1 patients. Last, DCM is highly heterogeneous, so the molecular classification of DCM may vary across different pathobiologies.
Despite the limitations, this study has several implications for clinical translation. First, HS has been identified as a candidate DCM biomarker based on hundreds of specimens from existing datasets; measured in peripheral blood, it may offer an alternative to cardiac biopsy for DCM. Second, the close association between HS and cardiac fibrosis supports an in-depth understanding of the pathogenesis of DCM. Third, targeting overexpressed HS in DCM may be a way to inhibit cardiac inflammation and fibrosis; since PI-88 has already been used to treat malignant tumors, further investigation might extend the clinical application of this established drug to DCM. Moreover, classifying patients with DCM based on the heterogeneity of HS could offer novel insight into personalized management of DCM in the future.
CONCLUSION
Heparan sulfate is closely associated with immune activation, cardiac fibrosis, and heart failure in DCM. The overexpression of HS may be related to alterations in HS metabolism. A novel molecular classification of DCM was established based on HSPGs.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. The original contributions presented in the study are available in the open source GEO database (https://www.ncbi.nlm.nih.gov/geo/), including GSE116250, GSE141910, GSE42955, GSE5406, and GSE121893. | 4,299.4 | 2022-05-27T00:00:00.000 | [
"Biology",
"Medicine"
] |
Indigenous Sustainable Development
This concluding chapter summarizes the findings of the book. This involves presenting somewhat of a guide to academics, communities, and practitioners that seek to support indigenous concepts of sustainable development. The chapter begins by describing what indigenous sustainable development is. Then the discussion turns towards modes of implementation. A central focus is given to items such as the United Nations' Sustainable Development Goals and their resonances, and lack thereof, with indigenous ideas of development.
In the last chapter, interacting indigenous ideas with "Western" ideas of development led to some interesting tensions and insights. The relation between indigenous thought and sustainability was not overtly addressed, however. In this chapter, we will explore this relation specifically. A preliminary description would be something like this: indigenous cosmovision does not create a separation between the natural world and a human one. Therefore, equity in material and non-material forms, insomuch as it reduces the marginality of indigenous peoples, will bend economy, politics, and culture towards ecological sustainability. Once indigenous perspectives are given their appropriate weight, related ideas such as deep ecology, biocentrism, or de-growth take a less radical, more pragmatic, aura.
To understand the implications of indigenous ideas for sustainable development, it is important to avoid two countervailing tendencies. One of these ebbs towards discounting indigenous thought as being "not real" or an "invented tradition." The other tendency reifies, freezes, or essentializes indigenous ideas-framing them as immutable, time-transcendent, and unrelated to other modes of thought, such as modernity, that may corrupt them. In truth, indigenous tradition, just like any tradition, is at once rooted and mutable. This becomes most apparent when we consider the origins of indigenous sustainable development, address issues of essentialism, and then explore indigenous sustainable development and its possible implementation.
Where Does Indigenous Sustainable Development Come From?
A central concern of this book was to locate the cultural, political, and economic origins of indigenous sustainable development. Discerning this required the interpretation of "grassroots," "bottom-up," largely inductively devised indigenous theories of development. Although indigenous development has become popularized in recent decades, it is obviously not a new idea. Rather, ideas of indigenous development have materialized through deep histories of thought and experience. The ideas of progress, development, and human rights that are so often associated with "Western" or "enlightenment" thought are present in this history. So too are the remnants of historical experience of Spanish conquest and association with an "indigenous" identity that was created as an "other" to the Spanish.
More recent histories of civil war, dispossession, marginalization, genocide, racism, and persecution are also included in this history. Contact with Marxist and postcolonial thought, as well as the anticolonial pedagogies and assertions of Freire and Fanon, is traceable in the discourse and practice of Maya and Andean development programmes especially. Current discursive structures such as International Human Rights and Indigenous Rights law, as well as interaction with indigenous and environmental movements, were cited as key factors that assisted the historical emergence of Maya, Garifuna, and Andean ideas of development. These ideas were equally born of involvement with local, community-level interactions with and oppositions to the impositions of externally devised development projects, whether they be from international companies, development organizations, or national governments. Indigenous sustainable development, then, is best described as an iteration of existing ideas from varying sources as they interact with the historical and current every-day experience of indigenous communities.
Indigenous sustainable development, this implies, is not a new idea in the sense that it has emerged in a pristine form that is unrelated to other modes of thought. It has an expansive history of engagement with ideas that stem from modernity and coloniality. It is a transmodern idea of development-that is, an idea that is not modern or traditional, but both. A look at the origins of the idea makes obvious the fictitiousness of an assumed binary that separates the modern from the traditional. Indigenous sustainable development represents an interpretation of development that is informed by indigenous culture and historical experience. It is not a pure idea of development that has emerged from some pristine "other" of modernity, however. It has been forged by a continual relationship between coloniality and modernity, as the colonized mind is articulated by the modern, which is, in turn, articulated by the colonized. They are not separate, but constituent parts of a transmodern cultural, political, and economic reality. Indigenous ideas of sustainable development emerge from this transmodern reality, but unlike the majority of such ideas, they favour the perspectives and knowledges of the historically marginalized.
Substantial critiques may be raised against Maya, Andean, Garifuna, and indigenous movements in general regarding a tendency towards the essentialism and romanticization of indigenous culture. If Maya culture is as egalitarian and as rooted in a deep integration with nature as members of El Centro tend to argue, it might be asked, then why do indigenous people so regularly throw plastic, paper, and metal refuse out of the windows of cars and public buses? Why do tourists learn upon visiting the ancient Maya ruins at Copan that that civilization disappeared due to the severe environmental degradation it wrought on its local natural environment? Why does culture in Maya villages seem so patriarchal, and why are women severely underrepresented in communal mayorships? Maya culture does not reveal itself to the outside observer to revolve around the types of ideals that are encapsulated in the so-called Maya cosmovision. The same could be said of Garifuna and Andean traditions.
Certainly, it seems fair to say that all indigenous people do not display nature-centric, communal, and egalitarian tendencies. A responsible development practitioner or policymaker would be wise to think about this before funding or designing any development intervention. The carte blanche support of an indigenous group that makes such claims may not lead to intended consequences. If this cultural characterization is dishonest, an indigenous-run grassroots project that is intended to promote gender equality and environmental sustainability may fail to achieve its goals. Blind trust in the egalitarian and naturalistic tendencies of indigenous culture could lead to bad programmes and misplaced development assistance funds.
When confronting indigenous leaders regarding this issue, the answer one receives is as simple as it is uncontestable: indigenous people who diverge from ideals of environmentalism and egalitarianism, it is explained, have/had lost their way. They have/had lost touch with their culture and the wisdom of their ancestors. Patriarchy and disrespect for the natural environment, it is argued, are largely Spanish colonial imports, and pre-Columbian divergences from nature and equality were temporary departures from ingrained cultural traits. Such answers are inscrutable due to their tautology, but asking these questions might just miss the point of movements for indigenous sustainable development in general.
It should be clear from the account given in this book that claims that the people have "lost their way" are a way of saying that indigenous culture has been transformed in a climate of severe discursive inequality-that cosmovision has been dominated, and that this is not democratic, nor does it represent development. This cultural domination (along with its economic and political counterparts) is contested by indigenous movements as they attempt to move towards an ideal of pluralistic democracy. Such a democracy, in its ideal form, would ensure that all cultural knowledges, understandings, and identities carry the same persuasive force. The valorization of indigenous cosmovisions in Guatemala, Honduras, and Ecuador is a political project designed to counter cultural domination that maintains discursive and material inequities. It is not intended to maintain an "indigenous" culture in a pre-assumed frozen state.
These indigenous democratizing projects were born at an intersection. One of the metaphorical roads in this meeting brings a deep historical past which includes an egalitarian, nature-centric ethic, as well as divergences from this. It also includes a history of colonial marginalization. This road carries the past. The other road brings current thinking on topics such as the environment, indigeneity, human rights, and development. It also carries the material and cultural realities of conflict with international mining, extraction, or tourism interests, and various laws and/or development projects. This road carries the present.
Many of these concepts of the past and present have resonance at the crossroads at which indigenous development thought works, and the items which resonate exude the pitch of a local-centric, nature-centric, egalitarian, indigenous, and endogenous political project. It is the strength and depth of this resonance which is the item to be judged for its "authenticity"-not its relation to historical "fact." The essentialist question oversimplifies the claims and the project connected with indigenous politics in Latin America and elsewhere. It may be functional as a limited academic trope, but is of limited functional interest where policy choices, development projects, and political assertions are concerned.
The sustainable development projects described in this book are not so much concerned with the revival of indigenous culture. They are concerned with building new manifestations of indigenous culture as remnants of the past mix with ideas of the present. These new manifestations place emphasis on equity and respect for the natural environment. Practitioners and policymakers that wish to engage similar culture-based, indigenous-run organizations in the interest of collaborative development would benefit from considering such things. An indigenous organization that simply references the assertion of culture as its policy goal is different from one that also has programmes enacted that seek to promote environmental sustainability and gender equality. These types of markers are important when considering development assistance or political solidarity.
What Is Indigenous Sustainable Development?
But what, in the end, is the relation between indigenous culture and sustainable development? To answer this, it must first be understood that indigenous ideas of development resonate strongly with the group of ideas that were labelled cultural political economy (CPE) in this book-this includes more recent and less relativistic variants of post-development theory. At the very root of indigenous development theory is the depiction of humans as social subjects that are formed through discourse, which also serves to make the social and material world intelligible. Such discourses also shape the materiality of the physical world as humans enact their cultural relation with their environment. The material environment simultaneously formats the range of possibility available to society and culture. Fundamental to indigenous sustainable development, then, is the inseparability of the material and the cultural-these are part of the same substance. Separation of them can occur for descriptive and analytical purposes, but this split should not be reified in the process. In indigenous thought, nature and culture are one. Indigenous thought posits a similar relationship between the individual subject and the larger community. Individual subjects, with their range of beliefs, tastes, and values, are produced socially and communicatively. One may be able to pull the individual apart from the social temporarily for descriptive or analytical purposes, but the fact that they belong to one substance should not be left aside. The individual and its social world are continually iterating one another. The connections between these two typological categories are so substantial that, in practice, they cease to be separable entities.
With this, an image is created of a circular, mutually constituting relationship between subject, society, culture, and nature. It would be more accurate, in fact, to describe these interrelated spheres as one substance. This substance is described in terms of the Heart of Heaven/Heart of Earth relationship in Maya cosmovision. Similar unities are implied in Andean and Garifuna insistences that the people cannot be conceived of separately from the land, and that development should be culturally defined. In the interest of generalizability, and in recognition that many indigenous cosmologies around the globe contain a similar concept, the term naturacultura will be used hereon to refer to this.
The world that is assumed at the base of indigenous sustainable development is one in which the naturacultura is continually shifting. One could easily misinterpret this as a suggestion of an unstable postmodern world, the structure of which can shift at any moment at the whimsy of social imagination. This would be a misrepresentation, however. Naturacultura is always embedded in a history that structures the possibility of its form. It is highly path-dependent. Any change in it must be iterative-built onto the past, not in denial of it. Policies that are devised to direct this iteration can only be imagined in relation to it, and they must resonate with it to have impact.
Andean, Garifuna, and Maya thinkers and practitioners are vitally aware of power in this relationship, however. Being a constructivist theory at its root, the vision of power contained in indigenous sustainable development is generative and productive. It works to create conceptions of the world and-due to the fact of naturacultura-it shapes that world itself. This idea of power, it should be noted, is not dissimilar to those concepts generally associated with the French theorist Michel Foucault. And, like Foucault, indigenous thinkers are aware that economic, political, and cultural power tends to congeal-that it is not equally distributed. The structural form of naturacultura has very much been determined by the power relations suggestive of a colonial and neocolonial history, by racism, patriarchy, and physical and symbolic variants of violence. Given this, the degree of environmental degradation and indigenous marginalization that exists in Latin America should not come as a surprise.
In practice, indigenous sustainable development involves attempts to build multiple physical, political, and cultural capacities in a culturally applicable way. This requires direct support for indigenous institutions such as the indigenous mayorship of Maya communities. In Ecuador, it involves the rewriting of the constitution to contain elements of Sumak Kawsay. In the predatory state of Honduras, this involves struggles for Garifuna autonomy and food sovereignty. Official integration of such institutions into more mainstream regional, national, and international governance and legal frameworks is vitally important to this goal. This can be achieved through articulation with both legal instruments and social movements on these various levels. In other words-the goal is to build social capital by facilitating the networking of indigenous institutions with a myriad of other entities such as national governance structures, political parties, media, international NGOs, and social movements on all levels.
Indigenous development movements do not just seek to increase awareness and visibility of indigenous institutions, however. Beyond this, they seek to increase the recognition and sense of legitimacy attached to these institutions-in both the eyes of external actors and members of their own communities. This might be framed as a project to build what Pierre Bourdieu (2005) has called cultural capital, or what Taylor (1995) called the terms of recognition. It involves a valorization, or an increase of the respect attached to indigenous culture, institutions, and practices. This recognition is vital in increasing what Appadurai (2004) has called the capacity to aspire. Recognition means that indigenous institutions can function more effectively in asserting claims. It also means that these claims can be asserted in a way that is culturally relevant to the communities involved. This, so the argument goes, would lead to a sense of empowerment where members of the local community gain the ability to credibly imagine and enact the steps that are necessary to improve their conditions. Increases in both social and cultural forms of capital, however, are not thought to be sufficient in the pursuit of culturally sustainable development. Andean, Garifuna, and Maya leaders all insist via alternative wordings that social capital and cultural capital cannot be built independently of physical capital-especially in the form of land and related natural resources. It is not reasonable to expect poor Garifuna or Kechwa who work most of their waking hours in handicrafts production, as farm labour, or in other facets of the informal economy, to also invest serious energy in local governance and in participating in community groups.
The sustainability of such groups in the face of material constraints was the largest single problem voiced by the Maya organizers of El Centro as well. Put simply, the time expended on work necessarily undertaken by community members in the interest of survival on the most basic of levels often makes the building and maintenance of social and cultural capital almost impossible. Access to land as well as proceeds from (and control over) natural resources such as gold deposits are necessary if any meaningful pursuit of culturally sustainable development is to be undertaken. Like naturacultura itself, the elements of social, cultural, and physical capital are mutually reinforcing.
As anthropologist Charles Hale (2004) has suggested, the hegemonic neoliberal global political economy does not conceptually blend physical resources with social and cultural ones, as Maya, Garifuna, or Kechwa activists do. Neoliberal multiculturalism has an ability to tolerate rights to cultural expression and political voice, and to prevent discrimination. When indigenous peoples make appeals to rights to physical resources, however, neoliberal multiculturalism reaches its limits of tolerance. For the indigenous leaders, intellectuals, activists, and rights educators, these limits of neoliberal multiculturalism can have stark implications. El Centro Pluricultural para la Democracia, the focal point of the Maya case in this book, was placed on a national list of terrorist organizations, its funding was subsequently revoked, and it was eventually dismantled. Indigenous rights education programmes of El Centro came to be seen by foreign capital and the Guatemalan business elite as a threat to profits. Garifuna rights and environmental activists continue to be targets of state suppression in Honduras. Finally, as the Yasuní-ITT initiative illustrated, indigenous rights cannot insulate lands from extractive exploitation even in a country with one of the world's most progressive constitutions.
Practicing Indigenous Sustainable Development
Building social, cultural, and physical capital is the core of indigenous sustainable development. This capital formation can be pursued through education programmes and the establishment of community working groups. Indigenous sustainable development should be utilized by development practitioners, sympathetic academics, or active citizens only under certain circumstances, however. First of all, a cultural revivalist movement must preexist in a particular community-it cannot be conjured as the romanticization of a well-meaning outsider. As in the Maya case, this movement must have evolved to solve current problems and not simply for the valorization of antiquated culture. Indigenous sustainable development is only possible where things like environmental sustainability and gender equality are held as primary components of development.
This is often the case with indigenous movements. As with the Maya, Kechwa, and Garifuna, many of these search for traces of environmentalism and gender equality in their own cultural histories. They then centralize these cultural traits in the iteration of a new indigenous cultural form. New ideas attach themselves to old cultural traits within these movements to create something newer still. But, just as with the cases in this book, these programmes are rarely simply cultural-they are material as well. Physical resources are linked to cultural realities. Access to land is especially important in most indigenous movements and should be an essential component of any indigenous sustainable development programme as well.
In assisting such movements, practitioners, citizens, or sympathetic academics could do a number of things. First, since the provision of material resources is essential in such projects, development assistance funding should be pursued. So too should access to land for nutritional, economic, and cultural subsistence. Further on this point, assistance in the prevention of degradation of communal lands by outside agents such as mining or tourism companies is in the spirit of indigenous sustainable development-since the degradation of the land is also thought to degrade the culture, not to mention the health of the community. Working with groups to ensure that cultural revivalism projects include components that centralize equality, environmental protection, and access to land is paramount.
If these things are all in place, practitioners and policymakers must then take a hands-off, supportive approach. Indigenous sustainable development projects must be self-driven. The idea is for development to be pursued via cultural systems of understanding that exist in the community. The well-intentioned assertion of a romanticized ideal of indigenous culture or attempts to impart purified modern rationality on indigenous communities will not work. The content of an indigenous sustainable development programme must be internally sourced-it must find its fuel in the transmodern reality of local culture.
This should not present itself as a foreign idea to university-trained academics, policy analysts, or development practitioners. There are many components in that culture too that could be centralized in the interest of pursuing and understanding indigenous sustainable development. It was argued in this book that the most innovative current trends of thought in development thinking already do this. These trends were typified as cultural political economy approaches to development. The goal in such approaches is building the capacities of communities and individuals to assert their will and ideas in a more equitable environment, and thus reinforcing the capacity to aspire (Appadurai 2004) of such entities through programmatic attempts to create an equality of agency (Rao & Walton 2004).
Analysts whose intellectual tradition comes more from economics than the other social sciences might find a resonance with indigenous sustainable development in newer nonlinear theories such as those of the complexity sciences (Urry 2005; Colander 2000; Bowles 1998) and cultural theory (Williams 1977; Kaufman 2004). They might also look to the "old" institutional economics of Veblen (1899/1994) and Duesenberry (1949), and the foundational work of Adam Smith (1759/1790).
The policy implications of indigenous sustainable development could be dramatic in some ways, particularly to development economists. Arguments that free markets be embraced as the most efficient means of ensuring material development would become nonsensical, since the fundamental theoretical premise on which such ideas stand-a stable and sovereign homo economicus-would have to be abandoned. The old habit of measuring development in absolute quantitative terms such as GNP per capita would cease to make sense since well-being would need to be measured relatively and qualitatively. Importantly, a shift should take place from defining development in terms of output to defining it in terms of equality. That is to say that indigenous sustainable development projects would seek to ensure that the naturacultura is used and recreated in a democratic way by the humans that both compose and are composed of it. This would ensure the most efficient allocation of resources given that equality in the composition and maintenance of naturacultura is the goal instead of crude aggregated economic output.
To this point, I have been speaking about the implications of indigenous sustainable development for issues in what is often called the developing world, the Third World, or the Global South. A suggestion was made in the introduction of this book, however, that the idea might have value to a larger society that needs to think its way out of substantial large-scale social, economic, and environmental predicaments. Certainly, conflict that is rooted in reified nationalisms and essentialized ethnicities remains commonplace throughout the globe-whether we are referring to well-publicized (and often reified) tensions between Islam and the West, or ethnic and nationalistic conflicts in places such as Sudan, Kashmir, Somalia, and Yemen. Perhaps the non-essentialized view of culture used by indigenous thinkers might contain a fragment of thought that could help inform attempts to solve such serious and persistent conflicts.
The constant threatening global economic crises that have been indicative of life in the twenty-first century may also be addressed somewhat by these ideas. If economic instability has stemmed from increased debt levels in the advanced economies, and if this increased debt is the tangible result of competitive consumer cultures that have become unmoored from the natural environment to which all consumption is ultimately tied (Schor 2005), then indigenous cosmovision may be of some help. The ideals of both discursive and material equality that are embedded in indigenous ideas, along with the rejection of overly powerful Western media, could reduce the need for competitive consumption in general, if we allow, as Veblen (1899/1994) and Frank (1985) suggest, that consumption is fuelled by drives towards relative wealth and its display as opposed to absolute measures.
Considering the Latin American indigenous development programmes discussed in this book leads to an important, and obvious, conclusion: to speak of indigenous development is to speak of sustainable development. Garifuna, Maya, and Andean groups do not conceptually separate nature from the human community. They also utilize a holistic concept of development. To improve the welfare of the community, from these perspectives, is to improve the health of the natural environment. Development that disrupts, or fails to repair, harmony between the human community and environment, cannot be defined as development from such perspectives.
This creates an imperfect fit between indigenous development and the United Nations' Sustainable Development Goals (SDGs)-the most notable mainstream development programme that considers both environment and development. Listing the 17 SDGs separately, or choosing one or two to focus on, is common practice amongst researchers, practitioners, and policymakers who prioritize mainstream sustainable development. The separation, or singling out, of any SDG, however, is in conflict with the indigenous, holistic vision of development. All must be considered at once to resonate properly with indigenous concepts.
A further issue is that two SDGs do not make sense within indigenous cosmology. SDG 8, Decent Work and Economic Growth, lists in its title two elements that are not necessary for indigenous sustainable development. Indigenous perspectives have a close resonance with de-growth or no-growth development and often see economic growth as harmful to healthy communities and environments. Work in formal labour markets sits equally uncomfortably, although work redefined to mean activity taken to benefit the family, environment, or community may resonate with indigenous thought.
Equally problematic is SDG 9, Industry and Infrastructure. To resonate with indigenous sustainable development ideas, both of these elements would have to be defined in ways that prioritize community and environmental wellness as opposed to economic growth. Oil extraction attempts in the Yasuní National Park, gold mining in Maya territory, and tourism infrastructure projects in Garifuna lands have all failed local indigenous communities and their natural environments. Unless directed by indigenous communities themselves, and subsumed in their own cosmovisions, industry and infrastructure projects are likely to diminish community and environmental well-being. Thus SDG 9 can often make little functional sense from an indigenous perspective.
Indigenous ideas could be used to guide SDG implementation and global economic policy towards more holistically sustainable outcomes, however. The concept that nature and culture are inseparable implies a reinstated cultural connection to the natural environment and its preservation. Greater discursive and material equality may encourage the easing of competitive consumption. This could go a long way in easing the impact of human behaviour on the natural world. The incorporation of such ideas

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. | 6,344.8 | 2020-01-01T00:00:00.000 | [
"Economics"
] |
Packet Flow Based Reinforcement Learning MAC Protocol for Underwater Acoustic Sensor Networks
Medium access control (MAC) is one of the key requirements in underwater acoustic sensor networks (UASNs). For a MAC protocol to provide its basic function of efficiently sharing channel access, the highly dynamic underwater environment demands that MAC protocols be adaptive as well. Q-learning is one of the promising techniques employed in intelligent MAC protocol solutions; however, due to the long propagation delay, the performance of this approach is severely limited by its reliance on an explicit reward signal. In this paper, we propose a restructured slot and a modified two-stage Q-learning process to extract an implicit reward signal for a novel MAC protocol: Packet flow ALOHA with Q-learning (ALOHA-QUPAF). Based on a simulated pipeline-monitoring chain network, results show that the protocol outperforms both ALOHA-Q and framed ALOHA by at least 13% and 148%, respectively, in all simulated scenarios.
Introduction
Medium access control (MAC) is one of the key requirements in underwater acoustic sensor networks (UASNs), garnering major interest in the research community [1][2][3]. As an analogue of terrestrial sensor networks, UASNs are envisaged to enable a multitude of civilian and military applications [4][5][6]. To advance these applications, sensor nodes are being developed to be small/compact for easy transport, given that the environment is characteristically challenging to access. There is interest in new sensor nodes being energy efficient for longer deployments, as currently there is no viable energy harvesting technology. Nodes should also be inexpensive to lower the overall cost, since UASNs are envisaged to be deployed to cover substantial marine areas and require a large number of devices. Employing acoustic waves in UASNs imposes some unique channel-centric constraints on the design of UASNs, such as limited, distance- and frequency-dependent capacity (bandwidth and data rate), long and variable propagation delay, and a high bit error rate (BER) [2,4,7]. As such, there is growing demand for efficient MAC solutions, especially adaptive MAC protocols for practical networks in the highly dynamic underwater environment.
Although preliminary studies on adopting existing MAC techniques/schemes from the vast body of work on terrestrial MAC protocols to underwater networks largely found them to be ineffective [1,8], the insight from the underlying principles remains useful. As a general guide, the network topology gives an insight into the appropriate category of MAC scheme to employ, with contention-free and contention based schemes better suited to centralised and decentralised topologies, respectively. Centralised topologies typically facilitate schedule creation and coordination from a central controlling node. Therefore, uncoordinated channel access becomes too contentious and less efficient. On the other hand, in a decentralised topology, such coordination is prohibitively challenging to implement, and the limited resources make contention-free protocols inefficient.
Code division multiple access (CDMA) and frequency division multiple access (FDMA) are promising contention-free schemes considered for UASNs [9,10]. CDMA assigns unique binary codes to users (nodes) to spread the information signal, thereby offering the complete frequency band to nodes for simultaneous transmissions. Frequency hopping and direct sequence spread spectrum (FHSS and DSSS, respectively) are the standard modulations employed in this scheme. FDMA splits the channel into distinct frequency bands and assigns them to different users. In this way, users can initiate concurrent transmissions without incurring collisions [5,10]. While the radio bandwidth (GHz) enables the implementation of these schemes with relative ease, in UASNs the available bandwidth is very limited (kHz).
Time division multiple access (TDMA) [11] creates schedules by splitting time into slots and is the most promising contention-free approach used in UASNs, because of its flexibility and potential to achieve true collision-free scheduling. Despite the challenges of synchronisation, some solutions leverage the long propagation delays for spatial reuse to improve performance. A gateway node in [12] creates a gap-free schedule and then requests packets from the transmitting nodes. Other solutions incorporate sleep cycles between activities to save energy [3]. The solution in [13] is for a central node to use an initialisation stage to gather network-wide information, which is then optimised using genetic and particle swarm algorithms to create a collision-free schedule. However, the lack of complete knowledge of the environment poses a major challenge for creating a lasting collision-free schedule.
Contention based MAC protocols such as ALOHA [14] and its variants offer low complexity and simplicity of implementation. The downside is that contention based protocols suffer low utilization and prohibitively large end-to-end delay at high loads due to the blind transmission strategy. The works in [15,16] integrated additional guard times between successive transmissions in order to reduce collisions, and reference [17] demonstrated receiver initiation (RI) to improve the performance. In RI, the receiver makes the first move of initiating the data transfer session by sending a request packet to the transmitter(s) (essentially polling). Since collisions occur at the receiver, the RI approach aims to eliminate the most common source of collision (transmit-receive collision). All these approaches add to the complexity, and the overheads incurred by the control packets limit the achievable utilisation.
A popular technique is to incorporate both contention based and contention-free components to form hybrid MAC protocols. This strategy improves performance by allowing networks/devices to switch to an optimum MAC scheme based on demand or traffic profiles. Variations in traffic were addressed in [18], where the protocol was preconfigured to assign capacity either by free assignment or on demand, and reference [19] balanced performance with two time slots in a frame, one slot for scheduled transmissions and the other for random access.
In the highly dynamic underwater environment, MAC protocols need to be adaptive to changing conditions as well. This is because previous assumptions used to create schedules may become outdated or sub-optimal due to changes in topology, traffic, node failure(s) and/or addition(s). Reinforcement learning is a promising solution used in MAC protocols to provide adaptability and robustness in wireless sensor networks, such as ad hoc emergency networks for disaster monitoring [20,21]. In such networks, intelligent MAC protocols will adapt to the changing topology or environment. Instead of switching between MAC schemes, reinforcement learning is used to continually assess the network condition through feedback and to respond appropriately with a view towards maintaining (as much as possible) a collision-free schedule.
In [21], we studied the use of ALOHA-Q [20] underwater. ALOHA-Q is a MAC protocol originally developed for terrestrial wireless sensor networks. It employs a Q-learning algorithm to incorporate intelligence into framed ALOHA. The frame is created with a predetermined number of periodic fixed time slots. Each slot is structured such that it accommodates a data packet, an ACK packet and their corresponding one hop propagation delays (Figure 2). Initially, nodes randomly select and transmit in any slot, but eventually each node settles on a collision-free slot, as the underlying Q-learning reward/punishment serves to reinforce successful slots. However, because the ACK serves as the critical signal for the reward/punish mechanism in the Q-learning algorithm, the overhead it adds to the slot size, given the long propagation delay, severely constrains the effectiveness of the Q-learning strategy in terms of achievable utilization and end-to-end delay. In Section 3.2, we demonstrate the Q-learning update mechanism and how it is applied in the ALOHA-Q protocol.
The focus of this paper is to implement a robust, simple and computationally inexpensive MAC protocol that consistently and efficiently delivers the maximum channel utilisation in a monitoring chain UASN, such as an underwater pipeline. To achieve that, we were inspired by the research in [20,[22][23][24]. For reference, Table 1 describes the terms/symbols used in this paper.
Our specific contributions are:
• To provide some background work on the feasibility of restructuring the slot size in a typical frame-based underwater MAC protocol to improve network performance.
• To propose a new slot structure with minimal overhead, based on the relationship between the packet transmission duration and the one hop propagation delay, that is capable of achieving the theoretical channel utilization.
• To propose ALOHA-QUPAF, a novel dual-control intelligent approach to medium access control based on packet(s) flow in a linear chain network.
The rest of the paper is structured as follows. Section 2 introduces the frame based approach of the MAC protocol design and the network model. Section 3 presents the proposed slot size, the analytical modelling, and discusses the simulation results as compared to the theoretical results. It is followed by Section 4, our detailed dual-control intelligent MAC scheme, and the results obtained when applied to varying lengths of chain networks. Section 5 discusses the simulation results obtained of our proposed protocol. Finally, in Section 6 we draw conclusions.
Frame Based MAC Protocol
In this section, an overview is given of the fundamental operation of a baseline frame based random access protocol. With the aid of a simple network model, we analyse and identify the limitations of frame based scheduling (in terms of achievable channel utilization) with a random access scheme.
Framed ALOHA is one of the baseline protocols we compare against our proposed intelligent scheme. In contrast to slotted ALOHA, whereby time is divided into slots and nodes can only transmit at the beginning of each slot, framed ALOHA uses a frame comprising a fixed number of contiguous slots, N_s. In the framed ALOHA random access strategy, each node independently and randomly chooses one of the transmission slots at the beginning of each frame.
Typically, a slot is structured such that it accommodates a data packet of duration τ_d, an acknowledgement packet of duration τ_A (if required), the associated propagation delay of each packet (τ_pg) and a small guard band (τ_g); the guard band is essential to correct for and guard against drifts in clock precision and synchronisation. The slot structure is shown in Figure 1, for cases with and without acknowledgements. Whereas in radio networks the overheads due to the wait period between successive data transmissions in a slot/frame can be of negligible length with respect to the packet duration, in an underwater acoustic channel the physics impose a long propagation delay, plus low capacity (bandwidth and therefore data rate), making the overheads significant and thus negatively impacting the channel utilization and end-to-end delay.
The channel utilization (U) is defined as the rate of delivering data at the designated sink node (Equation (1)); in frame/slot based protocols, the utilization is then also a function of the number of slots (N_s) in the frame. For example, if a node is allowed to transmit N packets per frame, then the maximum effective utilization at the sink is upper bounded by N/N_s. The value of N_s is determined from the topology and interference population of the network. Setting N_s inappropriately will negatively affect not just the utilisation, but potentially the stability of the MAC protocol as well. For example, in a star topology, N_s is equal to the number of transmitting nodes (N_n); as each node should have a unique transmitting slot, setting N_s > N_n adds extra un-utilised slot(s), and N_s < N_n will cause contention, as some nodes will not exclusively own a slot. Therefore, for a particular topology and interference model, there is an optimum N_s (N_opt) [20]. The Erlang [25] is a dimensionless unit that represents continuous channel usage (for example, 0E = zero channel activity, 0.5E = half channel activity and 1E = full channel usage).
The optimum utilization is then given by Equation (2), where S and τ_d denote the slot duration and the packet duration in seconds, respectively. One of the consequences of having low capacity is the long transmission duration, which presents two situations for a given transmitter and receiver pair: the transmission duration is either greater than or less than the propagation delay between the nodes. Following [26], if we introduce the parameter K_τ (Equation (3)), the ratio of the packet transmission duration to the one hop propagation delay (K_τ = τ_d/τ_pg), then the resulting slot structure can have either of two sets of transmission-reception patterns, overlapping and non-overlapping, based on the value of K_τ, as shown in Figure 1. S_a1 and S_a2 represent the slot lengths with an ACK and are typically used by slotted protocols employing an ACK signal, such as ALOHA-Q. Similarly, S_n1 and S_n2 are the slot lengths without an ACK, as used in framed ALOHA and TDMA. Equations (4) and (5) are used to calculate the slot sizes.
In this slotted concept, nodes are allowed to transmit only one packet per frame (i.e., N = 1), and the expression for the maximum utilisation (U) simplifies to the ratio of packet duration to frame size (Equation (6)). Combining Equations (2) and (6) yields the expression for the utilisation; as τ_d, τ_pg >> τ_A, τ_g, Equation (6) approximates to Equation (7). From Equation (7), it can be seen that, since τ_d and τ_pg dominate, the value of K_τ guides how the channel utilisation can be improved by restructuring the slot size. For K_τ > 1, we are constrained with respect to any change to the slot size: any reduction will create overlapping slot receptions that effectively render the slotting meaningless, as demonstrated by the downgrade of slotted ALOHA to pure ALOHA underwater [26].
In most UASN applications, the propagation delay is longer than the transmission duration because of sparse connectivity. Therefore, K_τ < 1 best describes such scenarios. We propose the slot structure in Figure 2. The slot size is now reduced to approximate the propagation delay (S ≈ τ_pg), which is possible since, with K_τ < 1, the data packet can be safely accommodated within τ_pg. This simple slot structure aims to reduce, and to fill with useful data, the otherwise wide gap in the conventional slots (compared to Figure 1). Therefore, for a given chain UASN designed with nodes separated by a transmission range of d m, we demonstrate that our slot structure offers performance advantages. A further motivation is a peculiar characteristic of the underwater communication channel, its distance-dependent capacity: the acoustic transmission bandwidth and data rate decrease with increasing transmission distance [27]. As such, instead of a few hops transmitting over longer ranges (requiring high power) with low capacity, we can potentially achieve higher-capacity transmissions by adding hops to route data over shorter ranges (low power). To investigate the achievable utilisation, the slot structure shown in Figure 2 is based on K_τ ≈ 1, a special case of K_τ < 1. This is purely to limit the overhead in the slot, as increasing the slot size beyond τ_pg negatively affects the utilisation according to Equation (6).
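For a rough sense of the gain, the short Python sketch below compares the per-frame utilisation U = τ_d/(N_s · S) (one packet per frame, frame size N_s · S) for a conventional slot and for the proposed slot S ≈ τ_pg. The slot-size expressions used here are assumptions reconstructed from the descriptions of Figures 1 and 2, not the paper's exact Equations (4)-(9), and the numerical example (1 km hop, 0.6 s packet, 50 ms ACK) is purely illustrative.

```python
# Hedged utilisation comparison for conventional vs. proposed slot structures.
# Slot-size formulas are assumptions, not the paper's exact Equations (4)-(9).
def utilisation(tau_d, tau_pg, n_slots=4, ack=None, guard=0.0, proposed=False):
    """Return U = tau_d / (N_s * S) for one packet per frame."""
    if proposed:
        slot = max(tau_pg, tau_d)                 # proposed slot: S ~ one-hop propagation delay (K_tau <= 1)
    elif ack is not None:
        slot = tau_d + ack + 2 * tau_pg + guard   # assumed ACK-based slot: data + ACK + two one-hop delays
    else:
        slot = tau_d + tau_pg + guard             # assumed slot without ACK: data + one-hop delay
    return tau_d / (n_slots * slot)

tau_pg, tau_d = 1000 / 1500, 0.6                  # 1 km hop at ~1500 m/s, 0.6 s packet (K_tau ~ 0.9)
print(utilisation(tau_d, tau_pg))                 # framed-ALOHA-style slot, no ACK
print(utilisation(tau_d, tau_pg, ack=0.05))       # ALOHA-Q-style slot with ACK
print(utilisation(tau_d, tau_pg, proposed=True))  # proposed slot structure
```

With these illustrative numbers, dropping the ACK increases the per-frame utilisation by roughly half, and shrinking the slot to roughly τ_pg roughly triples it relative to the ACK-based slot, which is the qualitative point of Equation (9) versus Equation (7).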
Scenario and Network Model
Consider a scenario comprising quasi-stationary, equally spaced nodes in an N-hop underwater network chain topology, with data delivered along the chain from one end to the other. Figure 3 depicts an example of such a network with N = 4 and hop distance d. This topology is representative of pipeline monitoring. As such, during the reporting cycle, the network can be considered loaded to capacity; accordingly, this work is primarily concerned with the achievable utilisation. To aid the analysis, the following assumptions are made:
1. All nodes are homogeneous and communicate over a single channel in half-duplex mode.
2. The collision model (non-capture) is used, i.e., if two or more packets overlap at the receiver, they are discarded.
3. Nodes are globally synchronised, an assumption commonly employed to simplify analysis and applicable to quasi-stationary nodes synchronised before deployment.
4. The interference range (Ifx) is twice the reception range (Rx); this model is typically employed for chain networks as an illustrative way to incorporate the effect of interference from nodes that are two hops away.
5. A source node has saturated traffic, i.e., it always has a packet to send, providing the maximum monitoring rate based on the transmission opportunities offered by the MAC layer. Similar research papers are concerned with the achievable utilization [23,28,29].
6. All source/relay nodes can transmit only one packet per frame, a consequence of Assumption (4) yielding a frame consisting of four slots [20], as only one of four connected nodes can transmit successfully at a given time.

We re-write Equation (7) with the proposed slot size S_n to obtain the new utilisation for the proposed slot structure (Equation (8)); in terms of K_τ, it becomes Equation (9). In summary, whereas the traditional slot structure incorporates the propagation delay and/or an ACK packet within the slot, we show that with K_τ < 1 the propagation delay is sufficient to accommodate the data packet, so the slot size can be effectively reduced and restructured (in at least 50% of the cases in the K_τ < 1 regime); as long as a protocol does not require an ACK packet, there is the potential for a dramatic improvement in performance (Equation (9) vs. Equation (7)).
Model Analysis
To analyse the network with the proposed slot structure (Figure 2), we consider a baseline scheme whereby each node initialises by randomly choosing a transmission slot. The purpose of considering this scheme is first to demonstrate the inefficiency of a random access scheme by analysing the distribution of the achievable channel utilization, second to investigate the feasibility of applying intelligent techniques to the model that could lead to a significant performance improvement and, finally, to evaluate the efficacy of the proposed slot structure coupled with the intelligent techniques relative to similar intelligent approaches and random access baseline schemes.
To build the frame, we start with the optimal number of slots per frame, N_opt. In a linear chain network (such as Figure 3 and longer), N_opt is four, as computed according to the two hop interference model [20]. This is because in a linear topology with a two hop interference model, technically only one in four nodes can successfully transmit at a given time. Similarly, for one hop and three hop interference models, one in three and one in five nodes can transmit successfully [20,23]. Therefore, for a distributed MAC protocol such as the framed ALOHA employed in this setup, each node is free to choose any of the available four slots in the frame, resulting in 4^4 = 256 ways for nodes to independently select and occupy transmission slots. Table 2 lists these slot patterns, and example traces (Appendix A: Figures A2-A7) are provided to illustrate the process. For each pattern, N_0 is the source node; it generates and transmits data in every frame to N_1, which forwards the packet (if successfully received) to N_2 in the next frame, and so on. Overall, individual packets are traced frame-by-frame as they traverse the network from source to sink (N_0 to N_4). The final utilisation is measured when an overall periodic pattern emerges at the sink node (vertical red lines in each example figure; refer to Appendix A).
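To make the frame-by-frame trace reproducible, the sketch below enumerates all 4^4 = 256 slot sequences for the five-node chain and simulates the packet flow in Python. The 1:1 abstraction of packet duration to one-hop propagation delay (a packet sent in slot t is received in slot t+1 at the next node), the two-hop interference rule, half-duplex reception and forwarding in the following frame are my reading of the assumptions above; the collision bookkeeping is a simplification, so the resulting distribution should be checked against Table 3 rather than read as the paper's exact result.

```python
# Hedged sketch: exhaustive trace of random slot assignments in the 5-node chain.
from itertools import product

N_NODES, N_SLOTS, N_FRAMES, WARMUP = 5, 4, 60, 20

def steady_state_rate(seq):
    """seq[i] = slot owned by node i (i = 0..3); returns packets per frame at the sink."""
    buffers = [0] * N_NODES
    all_tx = []        # every transmission so far: (node, absolute_slot)
    pending = []       # unresolved arrivals: (tx_node, rx_node, arrival_slot)
    delivered = 0
    for frame in range(N_FRAMES):
        buffers[0] = 1                                    # saturated source: fresh packet each frame
        for node in range(N_NODES - 1):                   # schedule this frame from frame-start buffers
            if buffers[node]:
                buffers[node] -= 1
                t = frame * N_SLOTS + seq[node]
                all_tx.append((node, t))
                pending.append((node, node + 1, t + 1))   # 1:1 abstraction: arrives one slot later
        frame_end = (frame + 1) * N_SLOTS
        unresolved = []
        for tx, rx, a in pending:
            if a >= frame_end:                            # cross-frame arrival: resolve next frame
                unresolved.append((tx, rx, a))
                continue
            # Half-duplex: receiver must not be transmitting during the arrival slot.
            busy = any(j == rx and s == a for j, s in all_tx)
            # Two-hop interference: any other signal arriving at rx in the same slot collides.
            hit = any(not (j == tx and s == a - 1) and abs(j - rx) <= 2 and s + abs(j - rx) == a
                      for j, s in all_tx)
            if not busy and not hit:
                if rx == N_NODES - 1:
                    if a // N_SLOTS >= WARMUP:
                        delivered += 1
                else:
                    buffers[rx] += 1                      # forwarded in a later frame
        pending = unresolved
    return delivered / (N_FRAMES - WARMUP)

rates = [steady_state_rate(seq) for seq in product(range(N_SLOTS), repeat=N_NODES - 1)]
print(sum(r == 1.0 for r in rates), "of", len(rates), "sequences deliver a packet every frame")
print(sum(r == 0.0 for r in rates), "of", len(rates), "deliver nothing in steady state")
```

The per-sequence rate is measured after a warm-up period so that only the steady-state periodic pattern at the sink is counted, mirroring the "vertical red lines" convention used in the Appendix A traces.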
Results
In order to empirically evaluate the performance of the above random access scheme, we ran a simulation on a network of five nodes (Figure 3) configured with the proposed slot structure analysed in Section 3. Each node is pre-configured to run a MAC protocol that randomly selects and maintains a transmission slot at the beginning of each simulation run. It should be noted that in this simulation, since K_τ ≈ 1, the transmission delay and propagation delay are abstracted to 1:1 for the best results. Figure 4 shows and compares the utilisation results from both the analytical distributions of the slot patterns and the simulations. Overall, there are three dominant utilisation levels and some spurious intermediate levels, as summarised in Table 3. The summary provides the individual proportions of the levels in each plot, and the overall column is the contribution of each sequence in the combined set of 256 slot sequences. Depending on the slot chosen by the source node, transmissions can be initiated either from the frame edge (Slots 0 or 3) or mid-frame (Slots 1 or 2), and to some degree the results show how the position of the chosen slot affects the utilisation. As shown in the result summary (Table 3), there is a subtle but clear performance advantage when the source node initiates transmissions, and slot patterns emerge, at the frame edges (i.e., SEQ_0XXX, SEQ_3XXX) relative to mid-frame (i.e., SEQ_1XXX, SEQ_2XXX): in terms of the worst-case utilisation levels, there is at least an 8% better chance of a packet being received at the sink node when the source node transmits at the edges of a frame than when it uses a mid-frame slot.
Intuitively, the distributions of the utilisation of the patterns might be assumed to be similar, since it can be demonstrated that each column sequence can be translated to a corresponding sequence in the remaining columns (Table 2). However, because the protocol schedules packet transmission at the beginning of each frame, the simple slot structure guarantees that packets transmitted in slot i are received in slot i+1. This means that sequence translations result in packet reception/interference across frames, consequently causing the distribution of the utilisation outcomes to vary (see, e.g., Appendix A: Figure A7). In contrast, [1 1 0 1] and [2 2 1 2] have no cross-frame reception and yield 0 Erlangs (Appendix A: Figure A2). Only 60 of the total 256 slot sequences yield the maximum utilisation level as a whole and remain immune to slot sequence translations because they are perfectly collision-free. For brevity, Figures A2-A7 (Appendix A) show how we computed six of the ten prominent utilisation levels.
The simulation results are in agreement with our analytical results, showing that no data are delivered 48% of the time. This corresponds to the average of the 43-53% worst-case proportions of the individual slot patterns, as expected. Most importantly, the simulation confirms that full channel utilisation is achievable in exactly 23% of cases. Finally, the simulation result shows the average performance of the random slot selection protocol and will serve as a baseline with which to demonstrate the merit of slot-based learning in the new protocol, ALOHA-QUPAF.
Q-Learning
This section demonstrates the underlying Q-learning update procedure based on stateless Q-learning [30]. Following the standard Q-learning framework, an agent learns how to behave in an unknown environment by interacting with the environment. The agent perceives and changes the state of the environment by taking an action and receives a response from the environment, which indicates the quality of the action taken in the form of a reward/punish signal. This process is Markovian, and it is modelled as an MDP [30][31][32]. To develop a MAC protocol, this is translated to a node taking the action of transmitting a data packet, with the successful/unsuccessful reception of an ACK packet representing the reward/punish signal. Each node is given a vector of Q-values (Q-table), and each Q-value is in turn assigned to one slot in the frame (Section 1). At the beginning of each frame, a node will scan the Q-table and select the slot with the highest Q-value to schedule transmission in that slot. Successful transmissions are rewarded and unsuccessful transmissions punished, based on the reception or otherwise of an ACK packet, by updating the Q-value of the transmission slot using Equation (10).
Q[S_t] ← Q[S_t] + α (ψ − Q[S_t]) (10)
where Q[S_t], α and ψ denote the Q-value of the current slot, the learning rate (0.1) and the reward/punish signal (±1), respectively. Table 4 illustrates an example of the Q-learning as implemented in ALOHA-Q. Consider an initial situation (Frame 0) whereby a node i with data to send randomly chooses Slot 2 (because all slots have equal Q-values) at the beginning of a frame to schedule a transmission, and the transmission is unsuccessful.
• The new Q-value of Slot 2 becomes 0 + 0.1 × (−1 − 0) = −0.1. In the next frame, Slot 2 has the lowest Q-value and is not considered, and the node again chooses randomly (among Slots 0, 1 and 3), this time selecting Slot 1. Following a successful ACK reception, the new Q-value of Slot 1 is updated to 0 + 0.1 × (1 − 0) = 0.1.
For Frame 2, the node chooses Slot 1 as it has the highest Q-value (0.1) and sends data; with successful ACK reception, the Q-value is updated accordingly.
The table gives the Q-values up to twenty frames, assuming Slot 1 continues to be successful. This simple yet effective recursive Q-learning update bootstraps the trial-and-error mechanism into a robust collision-free schedule, as each node will eventually and independently occupy a unique transmission slot. However, as previously stated, while the ACK signal is crucial to the Q-value update operation, it puts an additional burden on the scarce network resources underwater: reduced utilisation due to overheads and increased delay due to the ACK wait times. Our goal is to implement a novel Q-learning approach that maintains this level of intelligence without the explicit ACK signal, thereby maximising the channel utilisation and improving the end-to-end delay.
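As a minimal illustration, the sketch below replays the Table 4 scenario, assuming the standard stateless update Q[s] ← Q[s] + α(ψ − Q[s]) that reproduces the values quoted in the text (−0.1 after the punished slot, 0.1 after the first reward); the function and variable names are ours.

```python
ALPHA = 0.1  # learning rate used in the worked example

def update(q, slot, reward):
    """Stateless Q-learning update: Q[s] <- Q[s] + alpha * (reward - Q[s])."""
    q[slot] += ALPHA * (reward - q[slot])

q = [0.0, 0.0, 0.0, 0.0]   # one Q-value per slot, all initially equal

# Frame 0: Slot 2 is chosen at random and the transmission fails (no ACK).
update(q, 2, -1)           # Q[2] becomes -0.1

# Frame 1: Slot 2 now has the lowest Q-value; the node picks randomly among
# Slots 0, 1 and 3 -- in the worked example it picks Slot 1 and succeeds.
update(q, 1, +1)           # Q[1] becomes 0.1

# Frames 2..19: Slot 1 keeps the highest Q-value and keeps succeeding.
for _ in range(2, 20):
    best = max(range(len(q)), key=lambda s: q[s])   # greedy slot selection
    update(q, best, +1)

print([round(v, 3) for v in q])   # Q[1] approaches 1 as successes accumulate
```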
Underwater Packet Flow ALOHA-Q: ALOHA-QUPAF
The proposed slot structures in Figure 2 pose a critical question: how do we apply a simple reinforcement learning algorithm to ultimately achieve collision-free scheduling without an ACK packet? In this section, we present a two-stage solution using a reformulated Q-learning coupled with a simple stochastic averaging expression [33]; the harmonised stages are described in Algorithm 1. We demonstrate the efficacy of our dual-mode learning approach in improving performance in a chain network as introduced in Section 2.
Protocol Design
In order to achieve the goal of realising a collision-free schedule without an explicit ACK signal, we modified the Q-value update process (Section 3.2) while maintaining the remaining protocol settings and assumptions (Sections 2 and 3.2). Specifically, at the beginning of each frame, a relay node chooses the slot with the highest Q-value (if more than one slot has the highest Q-values, one is chosen at random) to forward a received packet on to the next hop. In the case of the source node, it initialises by randomly selecting and maintaining a constant slot for transmission. This is because we employ a Q-learning process that utilises packet receptions to update and reinforce transmission slot selection. Our solution involves a two stage approach based on the following intuitions:
1.
Nodes are half-duplex and cannot transmit and receive at the same time (slot); therefore, we employ Q-learning to isolate all reception slots by punishing them to lower their Q-values. As such, when a node scans the Q-table, reception slots will have low Q-values and are unlikely to be selected for transmission.
2.
A continuous flow of packets over the chain is expected in saturated traffic with a healthy channel. Thus, a relay/sink expects a new packet(s) in every frame after receiving the first packet, and a packet collision is inferred whenever that stream of packets gets disrupted. To exploit this realisation, every time a relay node transmits a packet, it rewards the chosen transmission slot (positively updates the slot's Q-value) if and only if a new packet is received afterwards.
We denote the two stages of the dual-mode control as slot selection and flow harmony; a detailed description of each is given below.
• Slot selection: This stage is implemented by Q-learning to eliminate the reception slot(s). When a source node generates and transmits a packet, the receiver (relay node) records the reception slot (rx_s) upon receiving it and updates the Q-value of that slot according to Equation (10). Specifically, each slot in a frame is mapped to a value in the vector of Q-values (Q[ns]), and the Q-values are initialised with uniform random numbers less than one; for each reception, the node computes rx_s and updates Q[rx_s] accordingly with ψ = −1. This continual negative reinforcement of reception slots isolates those slots, and the slot(s) with the highest Q-value(s) signify probable collision-free slots at the local level and are therefore good candidate slots for transmission. At the beginning of each frame, if a relay node has a packet(s) in its queue, it schedules a packet transmission in a slot with the maximum Q-value; if more than one slot shares the maximum Q-value, one is chosen at random from amongst them. Whilst the Q-value of the reception slot is always punished following any reception, the Q-value of the transmission slot is only updated after every transmission: if there is a subsequent packet reception, the transmission slot is rewarded (ψ = 1), otherwise it is punished (ψ = −1). However, since this scheme lacks a definitive feedback signal tied to the node's own transmissions, the success of any transmission in the chosen slot remains uncertain. Unless the packet flow is network-wide, continuous transmission and reception by a relay node does not guarantee that the node's transmissions are not interfering with other transmissions, especially on the downstream links. Therefore, to prevent nodes from getting stuck in local minima, a control mechanism has to be devised to regulate the Q-values, especially that of the transmission slot.
• Flow harmony: Although we devise a means to obtain feedback from the environment (reward/punishment), the node cannot directly link these signals to its own action(s); hence, at any given time during the network run, we only have a partial observation of the channel condition, and this type of process is best modelled as a partially observable Markov decision process (POMDP) [34,35]. Instead of certainty about the network-wide flow, the packet flow experienced by each node gives a partial, local observation of the channel. The POMDP framework enables us to model the agents' local observations to generate a probability distribution over a belief state (in our case, settled or unsettled flow). The network can be in either a stable or an unstable packet-flow state, and we therefore designate two belief states accordingly. We employ a simple heuristic strategy based on stochastic averaging [36], whereby each node independently tracks its overall local packet flow in a given window, which we then translate into the distribution of the belief state. The distribution of the belief states is computed with Equation (11): for each reception in a frame, fl_τ is updated by λ steps at the tracking rate γ. While the expression monotonically approaches one, it is continually windowed every W_n frames and compared to a fixed threshold (thresh).
Based on our simulation experiments, fl_τ ideally reaches 98% by the 20th frame; hence, we heuristically set W_n = 20 and check fl_τ against a tolerance of thresh = 95%, which should already be reached by frame 14. If we designate the belief states S1 and S2 respectively as the initial state (both Q-values and fl_τ reset; the network is assumed to have no stable flow during learning) and the flow-harmony state, a node decides on S2 when the averaging function exceeds the threshold, which indicates that flow harmony has been achieved at least in the node's local interference group; otherwise the node resets to S1. In essence, every node has a window of 20 frames to isolate incoming reception slots and settle on a transmission slot. Whenever a particular node fails to settle and join the flow, the reset will make it switch to another slot and potentially notify other nodes in the neighbourhood as well.
where fl_τ, γ and λ denote the flow averaging, the learning/tracking rate and the increment scale, respectively.
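The sketch below illustrates how the two stages could sit together in a relay node. The exact form of Equation (11) is not reproduced above, so the flow update assumes the stochastic-averaging form fl_τ ← fl_τ + γ(λ − fl_τ) with λ = 1 and γ = 0.2, which is consistent with the quoted figures (roughly 95% by frame 14 and 98% by frame 20); the class layout, the per-frame reward check and all names are our illustrative assumptions, not the authors' Algorithm 1.

```python
import random

N_SLOTS, ALPHA = 4, 0.1      # frame length and Q-learning rate from the text
GAMMA, LAM = 0.2, 1.0        # assumed tracking rate and increment scale
WINDOW, THRESH = 20, 0.95    # W_n and thresh from the text

class RelayNode:
    """Illustrative relay behaviour: slot selection plus flow harmony."""

    def __init__(self):
        self.q = [random.random() for _ in range(N_SLOTS)]  # uniform < 1
        self.flow = 0.0            # fl_tau, the local flow-averaging value
        self.frames = 0
        self.tx_slot = None
        self.received_this_frame = False

    def choose_tx_slot(self):
        """Slot selection: transmit in a slot with the maximum Q-value."""
        best = max(self.q)
        candidates = [s for s, v in enumerate(self.q) if v == best]
        self.tx_slot = random.choice(candidates)
        return self.tx_slot

    def on_reception(self, rx_slot):
        """Punish the reception slot (psi = -1) and track the local flow."""
        self.q[rx_slot] += ALPHA * (-1 - self.q[rx_slot])
        self.flow += GAMMA * (LAM - self.flow)   # assumed form of Eq. (11)
        # simplification of "a new packet is received afterwards"
        self.received_this_frame = True

    def end_of_frame(self):
        """Reward/punish the transmission slot; reset if harmony is not reached."""
        if self.tx_slot is not None:
            psi = 1 if self.received_this_frame else -1
            self.q[self.tx_slot] += ALPHA * (psi - self.q[self.tx_slot])
        self.received_this_frame = False
        self.frames += 1
        if self.frames % WINDOW == 0 and self.flow < THRESH:
            self.__init__()   # back to the initial belief state: reset Q and fl_tau
```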
By using this two-stage solution, ALOHA-QUPAF, unlike ALOHA-Q, effectively isolates the reception slots from the transmission slots and obtains an implicit feedback signal for a node's actions from the node's experience of successfully receiving a continuous stream of packets. Furthermore, it differs from framed ALOHA in that it can intelligently create and maintain a robust collision-free schedule. The complete ALOHA-QUPAF algorithm is given below.
Results
Since the focus of this work is principally to improve performance in terms of the channel utilisation measured at the sink, ALOHA-QUPAF is compared, in terms of normalised utilisation, to the state-of-the-art ALOHA-Q, which employs a similar Q-learning technique, and to a baseline framed ALOHA scheme. We simulated networks of varying hop lengths with the protocols configured according to the structures in Figure 2. For a fair comparison, as our proposed slot structure is constrained to Kτ > 1, we only compare ALOHA-QUPAF with the other protocols in the Kτ > 1 regime. The network was simulated in the Riverbed Modeler (formerly OPNET) environment, and the setup used the parameters given in Table 5, which were based on a modem developed by Newcastle University [37]. In all cases, the network was simulated for 15,000 frames, with a single saturated source at one end of the network and a sink at the other. Owing to the continuous nature of the learning in the ALOHA-QUPAF algorithm, the results were collected from the beginning of the simulation.
Discussion
Figures 5 and 6 show the results obtained when the network was simulated on four- and eight-hop networks, respectively. The figures compare the performance of ALOHA-QUPAF with ALOHA-Q and framed ALOHA. This comparison is particularly important as the protocols share similar reception conditions in the Kτ > 1 scenario; transmission and reception occur in the same slot (Figure 1). Evidently, in this setup, both ALOHA-QUPAF and ALOHA-Q are dramatically affected as the network size increases (from four hops to eight hops). The maximum utilisations of ALOHA-QUPAF (0.217 Erlangs) and ALOHA-Q (0.191 Erlangs) are sharply halved in about 40% and 58% of the simulated cases, respectively. This performance drop can be explained by the hidden node phenomenon [38,39], the situation whereby a particular communication between two nodes is interfered with by another transmission within range of the receiver. Figure 7 depicts the hidden node problem in an eight-hop chain network in which N2 and N5 share the same transmission slot; transmissions from N2 to N3 are then periodically interfered with by N5 transmitting to N6 as packets are relayed along the chain. That the hidden node problem is the reason for the performance degradation is confirmed by the agreement between the results obtained when the interference range (Ifx) is reduced from two hops to one hop in the eight-hop chain (Figure 6) and the results for the four-hop network (Figure 5): in a two-hop interference model, a four-hop chain network is of insufficient length for the issue to manifest. Mitigating the hidden node issue is a subject of further work. Another important metric worth mentioning is the end-to-end delay; however, it is not presented here, since ALOHA-QUPAF does not implement packet retransmissions. Therefore, neglecting any processing and queuing delays in the nodes, the E2E delay is fixed as a function of the number of hops in the network. The simulations show that ALOHA-QUPAF achieves 0.124 Erlangs at its worst and 0.248 Erlangs at its best, outperforming both ALOHA-Q (0.19 Erlangs at its best) and framed ALOHA (0.069 Erlangs) by at least 13% and 148%, respectively, in all simulated scenarios. Figure 8 presents the performance of ALOHA-QUPAF with our proposed slot structure (Figure 2) in the Kτ < 1 scenario. To demonstrate how the ALOHA-QUPAF protocol is affected by the network length, we extend the range to 16 hops and evaluate its performance. The results show a subtle drop in the overall performance from four to 16 hops. The decrease is attributable to the increase in hidden-node spots (bottleneck points) and the longer time needed for the protocol to find a collision-free schedule as the network size increases. Each time a node switches to a different transmission slot, this has a ripple effect across the neighbouring nodes, causing others to potentially switch slots as well, essentially resetting the process. Despite the lack of an explicit acknowledgement signal, the protocol demonstrates a significant performance improvement, with more than 90% of cases achieving 0.24 Erlangs.
Conclusions
In this work, we present a simple slot structure based on the relationship between the packet transmission duration and the propagation delay, in conjunction with a two-stage reinforcement learning technique, to develop a novel MAC protocol (ALOHA-QUPAF) that can achieve near-capacity channel utilisation in a UASN chain topology. Our solution addresses the excessive overhead required by the slot structures used in typical slotted/framed protocols. Incorporating Q-learning makes the protocol robust against network and channel changes caused by the highly dynamic underwater environment. Furthermore, one of the primary goals is for the protocol to be distributed, adaptive, simple and computationally inexpensive, so that it is suitable for use in inexpensive and low-capacity modems.
To implement our solution, firstly, we analyse the slot structure using an intuitive diagrammatic representation to map the achievable channel utilisation levels. We then reformulate a Q-learning routine that exploits an implicit feedback signal to negatively reinforce and isolate reception slots in the slot selection phase. Secondly, by averaging the packet flow rate, we are able to generate a distribution for belief states that control and consolidate the choice of transmission slot to achieve overall network wide packet flow. We finally evaluate and demonstrate that ALOHA-QUPAF significantly outperforms the comparable protocols with similar Q-learning and slotting concepts. | 8,719.2 | 2021-03-24T00:00:00.000 | [
"Computer Science"
] |
Housing Quality Assessment Model Based on the Spatial Characteristics of an Apartment
Today more than ever, people are demanding higher-quality housing, and therefore, there is an increasing need for scientifically sound methods of systematic housing assessment that are capable of addressing multiple, conflicting, and irreconcilable aspects in both qualitative and quantitative terms. Existing studies and models often use a relatively small number of indicators and consider housing quality from a single perspective. This paper presents a methodology used to develop a model for assessing the quality of multiple conflicting spatial characteristics of an apartment. Through a literature review and a survey of 12 architects, 24 spatial indicators were identified and then classified into five categories: (i) additional rooms, (ii) room size, (iii) window orientation and ventilation, (iv) circulation, and (v) spatial organization. Finally, the overall rating of the apartment is calculated as the sum of the ratings of all indicator categories, where the share of each category in the overall rating and the desirable characteristics of the apartment are determined by the user. The model was tested on the example of two apartments in the city of Osijek, Croatia.
Introduction
For most people, a house or an apartment is the most expensive item they will buy in their lifetime and is a crucial factor in subjective well-being [1]. It represents a status symbol, a part of a person's identity [2], and a major element of material living standards [3]. Since we spend a large part of our lives in a house or an apartment, it must meet various housing needs. The housing dimension has a substantial influence on the quality of life [4], and the housing unit should therefore be adapted to its users. Quality housing plays a significant role in healthy living, affects childhood development [3], leads to improved productivity [5], provides a comfortable space, and reduces psychological distress [6]. According to Natividade-Jesus et al. [7], the purchase of a home is usually a decision made on the basis of less detailed information than the purchase of a car, and this is explained by the fact that there is no multidisciplinary and specialized knowledge that could be included in the assessment of housing. Today, more than ever, people are demanding a superior quality of housing; therefore, there is an increasing need for more scientifically sound methods to conduct systematic housing assessment that are capable of addressing multiple, conflicting, and irreconcilable aspects, both qualitative and quantitative, and of addressing the concerns of various stakeholders (developers, consumers, government agencies, municipalities, etc.) [7]. Moreover, because potential users belong to different economic strata, live in different countries, climates, and cultures, and have different perceptions of housing quality, a single, unified assessment tool may not be appropriate. Finally, personal characteristics, such as age and gender [8], stage of life, income, education, family structure [9], and family needs [10], generate different housing expectations. Therefore, the characteristics, attitudes, needs, and desires of individual users should be included in the housing assessment.
Housing quality is a broad concept that encompasses many housing aspects and has both an objective and a subjective dimension [11]. It encompasses many factors, including the physical condition of the building and other facilities and services that make living in a particular area pleasant, as well as the characteristics of the occupants [12]. Tibesigwa et al. [12] assert that spatial characteristics are fundamental quality parameters, including the organization of space, hierarchy, aesthetics, relationship with spatial functions, and flexibility of space. Previous research has also confirmed that improved spatial quality contributes to the attractiveness and public image of a building, as well as the users' well-being [13].
A large amount of research defines the quality of housing through user satisfaction with housing conditions. Housing satisfaction is a dynamic process [14] as well as a multidimensional and complex construct [15] and can be defined as the perceived gap between respondents' needs and preferences and the reality of the current housing environment [3]. So far, housing satisfaction has been studied on the basis of different users (students [5], young population [16], older adults [1]), different housing types (multifamily housing [9], public housing [17], large housing estates [15], affordable housing [12], rental housing [18]), and different participants in the construction and sales process [19] in different parts of the world. Some of the conclusions of previous research that are significant for this research are: residential satisfaction in Europe is driven first by housing-specific characteristics, followed by neighborhood conditions and individual/household characteristics [20]; building characteristics are one of the important factors in tenant residential satisfaction [9]; dwelling size is shown to be a strong determinant of residential satisfaction; nice and helpful amenities in the apartment are a source of residential comfort [15]; an important component in the measure of quality in housing is the quality of the apartment unit design characteristics and features [21]. In addition, the personal satisfaction aspects of housing quality are generally associated with the personal characteristics of households, such as the occupant's age, income, level of education, preference, etc. [21].
Parallel to the studies on housing satisfaction, various methodologies and models for housing quality assessment were developed. These methodologies and models assess the quality of housing through many dimensions (for example, through the dimension of apartment, building, location, neighborhood, socio-economic dimension, etc.), where one dimension is often determined by only a few different indicators, which cannot give a realistic assessment of a certain dimension. Some of them are presented below.
The French Qualitel Association established in 1974 a set of seven indicators that are rated on a scale of 1 to 5, with 1 being the minimum standard and 5 being a comprehensive design solution. The Qualitel profile is simple, straightforward, and easy to understand, even by people who are not experts in the field of housing [22].
The Housing Quality Indicator (HQI) system, designed in the United Kingdom in 1998, is a tool for evaluating existing housing schemes on the basis of quality rather than simply cost [23]. There are 10 indicators related to: quality of the location; site ((i) visual impact; layout and landscaping; (ii) open space; (iii) routes and movement); permanent units ((i) size; (ii) layout; (iii) noise, light, services, and adaptability; (iv) accessibility within the unit; and (v) sustainability); and the external environment [24]. Each indicator includes a series of questions and receives one tenth of the total possible score. HQI users also have the option to change the weightings applied to each indicator. Failure to meet suitable levels of, for example, security or noise control may render a place so uninhabitable that other factors cannot compensate. However, this does not imply that these indicators should be more heavily weighted than other factors; merely that failure to meet a certain level is unacceptable for these indicators [25]. Scores are presented numerically and graphically to show the strengths and weaknesses of a project and how the overall score is composed. In March 2023, the HQI system was withdrawn because it was no longer current [23].
In 1999, also in the UK, the Construction Industry Council addressed the issue of poor-quality design in buildings through the development of the design quality indicator (DQI) [26]. The DQI can be used by all stakeholders involved in the production and use of buildings (public and private clients, developers, financiers, design firms, contractors, building managers, and occupants) [27]. Participants work through the DQI's structured questionnaire, which covers the three main quality principles (functionality, build quality, and impact) in 10 more focused sections. The DQI has two types of weighting; the first allows results to be distorted depending on how the respondents judge the success of various aspects of the building. Other, separate types of weighting can be applied, indicating whether aspects are fundamental relating to factors that the building must achieve in order to fulfill its purpose, added value relating to factors that will enhance the building's usefulness and pleasure value, or excellence relating to factors that make good design [27]. Results are presented graphically to highlight comparisons between different groups of respondents.
Système d'evaluation de logements (SEL) is a Swiss tool developed to help design, rate, and compare housing. It consists of 25 indicators and measures the quality of the building through three dimensions: (i) location in the settlement, (ii) building lot and building, and (iii) apartment. Each of the indicators is assigned between 0 and 4 points, and the sum of the final results determines the usable value of the apartment, which in total can reach 100 points [28].
In the mid-2000s, a housing performance evaluation model (HPEM) for multi-family residential buildings in Korea was developed. This model is intended to encourage initiatives toward achieving better housing performance and to support a homebuyer's decision-making on housing comparison and selection [29]. The model has 41 indicators divided into three dimensions (housing environment, housing function, and housing comfort). The overall score of housing performance for residential buildings depends on the aggregate of the indicators' respective performance scores, which result from multiplying the numerical values (2-5) of the evaluated performance grades by the credits allocated for the indicators [29]. For easier application, an assessment program has been developed that allows the user to define the points for the indicators to reflect their own value for housing performance for the assessment.
A model of housing quality determinants (HQD) was developed in Pakistan for evaluating affordable housing. Twenty-four quality determinants marked as HQD were grouped into seven sections (housing site and planning; architectural design; structure and construction; building services; user comfort; maintenance; and sustainability) [30].
In 2016, Le et al. [31] developed a system of indicators to measure the quality of social housing in Vietnam, which is useful not only for investors and consultants but also for ordinary citizens to make a better decision about buying a home. They proposed three major quality dimensions: location, master plan of the building, and architecture, which include 12 indicators with 55 specific component factors that cover almost all aspects of Vietnamese social housing. There are 4 levels of satisfaction: good (100%), fair (75%), pass (50%), and fail, and points would be rounded to 0.25 [31]. The total score is calculated based on the individual scores of components within each indicator.
The housing evaluation methodology for evaluating housing quality in a situation of social poverty designed in Mexico contains 51 indicators divided into four dimensions: social, physical, spatial, and urban environment. The attributes of the indicator system were mathematically weighted to quantify and evaluate the level of satisfaction, and once the users of the homes rated these and the level of satisfaction of the different dimensions was established through the Likert survey, the data obtained were treated statistically through a numerical stratification of values and satisfaction level [32].
In addition to the ones presented, there are many other models and methodologies that deal with the assessment of different aspects of housing, such as: an estimation algorithm for predicting the performance of private apartment buildings in Hong Kong [33]; a matrix of affordable housing assessment whose design variables are set in a survey tool with a Likert scale to evaluate user satisfaction levels with the designs of the respective buildings [34]; various green building assessment tools that evaluate the environmental performance of buildings, including residential ones, such as BREEAM [35] or LEED [36]; and an Assessment of Housing Quality method with 47 factors for assessing the quality of housing, which are scored from 1 (not important) to 5 (extremely important) and where the data were statistically processed in SPSS 9.0 software [37], etc.
In summary, previous studies have evaluated housing quality using different dimensions and indicators, as well as different assessment systems. Based on the indicators used, they can be mainly divided into two categories: (i) those that have developed indicators for housing quality assessment; or (ii) those that use existing quality indicators for different applications, including direct assessment of housing quality, assessment of comfort, satisfaction, safety, or health of residents, or measurement of energy efficiency of dwellings [38]. According to Wimalasena et al. [38], the three most represented indicator categories are: architectural features and characteristics of the housing unit (25%), user comfort (22%), and location and neighborhood of the dwelling (20%). Since climate, culture, urbanization level, technological progress, and socioeconomic progress influence the perception of housing quality standards, there is no universal definition of quality, and the tools developed for housing quality assessment should consider a flexible/adaptable system for indicator selection [21].
The question of where and how to live and under what physical, spatial, social, and urban conditions has become very important for millions of families around the world due to the confinement caused by the pandemic of COVID-19 [32]. Now more than ever, people are demanding a higher quality of life when buying or renting a home. Therefore, there is an increasing need for scientifically sound methods for the systematic assessment of housing that are able to take into account multiple, contradictory, and incompatible aspects [7].
The main objective of this research is to present a developed methodology for assessing measurable spatial characteristics of an apartment (SCA), which could be used in the future to develop a more comprehensive housing quality assessment model. An additional objective is to show how the model for assessing the spatial characteristics of an apartment works on the example of two apartments within the same residential building. Although the model is applicable to any micro-location, it will be tested on the example of two apartments in the same building in Osijek, Croatia. This location was chosen because it was the area of previous research [39,40] regarding housing policies and apartment characteristics in relation to those housing policy periods. In addition, Osijek was chosen as a research location because the issue of housing quality in Osijek has never been studied on this scale before and, as the fourth largest city in Croatia, the city has a representative building stock.
The term model refers to the entire system for evaluating the spatial characteristics of an apartment with all its necessary components, i.e., its graphic and mathematical representation. The term user is used to refer to a person who, in the case of this research, evaluates an apartment either with the intention of defining its level of quality, to obtain general information on a specific apartment, or to compare and purchase a specific apartment.
This section presented the problem and objective of the research, issues of housing quality and user satisfaction with housing, and an overview of existing methods and models that address housing quality. In the following section, a model for user assessment of the spatial characteristics of the apartment is presented in terms of the preliminary work required for its operation as well as its graphical and mathematical representation. The fourth section shows how the model works, using the example of a comparison of two apartments within the same residential building. The final two sections are the discussion and conclusions.
Materials and Methods
The model was developed through several stages, as shown in Figure 1. The starting point of the research was the analysis of existing literature on housing satisfaction, housing quality, housing design guidelines, and previously developed methods and models for evaluating housing quality (Step 1, in Figure 1) [41]. Based on the available literature, various indicators for housing quality were identified (Step 2, in Figure 1) [41]. The indicators were divided into four categories: (i) apartment unit quality; (ii) apartment building quality; (iii) neighborhood quality; and (iv) social and economic indicators (Step 3, in Figure 1) [41]. Due to the large number of identified indicators, only measurable spatial characteristics from the category of apartment quality indicators were selected for the first phase of the development of the housing quality assessment model presented in this paper.
In the further course of the study, the significance of the spatial indicators was verified by interviewing experts: 12 architects with an average professional experience of 18.5 years in the design of residential buildings (Step 4, in Figure 1) [42]. The results of the architects' interviews served as the basis for the development of a structured questionnaire designed to elicit user preferences regarding specific apartment characteristics. This questionnaire is part of one of the preparatory actions, as it provides input data for the functioning of the model. Each of the questions is related to one of the indicators of the quality of the apartment. The questionnaire was tested on 130 apartment users between the ages of 18 and 82 (Step 5, in Figure 1) [42]. After examining the views of architects and testing questionnaires with users, the final questionnaire that will be used in the third pre-phase was defined. Further steps in the formulation of the model (Step 6 in Figure 1) and the presentation of how the model works (Step 7 in Figure 1) are the main focus of the research presented in this paper. Table 1 shows five indicator categories: (i) existence of additional rooms, (ii) room size (square footage), (iii) window orientation (in relation to the insolation) and ventilation, (iv) circulation (communication between rooms/traffic pattern), and (v) spatial organization [41] with 24 indicators and the questions from the questionnaire corresponding to each of the indicators. The user answers the questions based on the offered answers by ranking them, either by choosing the most preferred answer or using a Likert scale. An example of the answers to each survey question can be seen in Appendix A.
Table 1. Relationship between the questions in the survey and the indicators in the model.
Indicator category | Indicator | Survey question
Additional rooms (ar) | The existence of several storage rooms | How desirable is it for the apartment to have more than one storage room (e.g., pantry, wardrobe)?
Additional rooms (ar) | The existence of outdoor space | How desirable is it for the apartment to have an outdoor space (balcony, loggia, terrace)?
Additional rooms (ar) | The existence of additional toilet | How desirable is it for the apartment to have an additional WC in addition to the bathroom?
Room size (rs) | Living room | In your opinion, what should be the minimum area of the living room?
Circulation (ci) | Communication between rooms | What is the most convenient way to connect the rooms in the apartment?
Circulation (ci) | Kitchen-living room connection | What is the preferred connection between the kitchen and the living room?
Circulation (ci) | Indoor-outdoor connection | From which room is it best to go outside (balcony, loggia, terrace)?
Circulation (ci) | Bedroom-living room connection | How desirable is it to enter the bedroom through the living room?

The impact of a certain indicator category on the final apartment rating is defined through paired comparison analysis (PCA). PCA is used when there is no objective data or when different subjective criteria need to be compared. It is particularly useful when priorities are not clear, when the options are completely different, and when trying to define the importance of each criterion. This method provides a framework for comparing each option to all the others and helps to show the relative importance of each option [43]. This study used a customized PCA method in which the surveyed user must not only select the more important indicator category in each pair but also indicate the extent to which it is more important than the other. Along with the letter of the more important category, the user is asked to write a number (from 1 to 5) indicating how much more important it is: the number 1 next to the letter means that the category is minimally more important, while the number 5 indicates that it is much more important (Figure 2).
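As a small worked illustration of turning the customised PCA responses into category shares, the sketch below assumes that each category's share is its accumulated preference points divided by the grand total of points; with the hypothetical tallies shown, it reproduces the shares used for the example user later in the paper (19.05%, 42.86%, 4.76%, 9.52% and 23.81%).

```python
# Hypothetical PCA tallies for the example user: in each pairwise comparison
# the preferred category earns the stated importance (1-5); the share of a
# category is then its accumulated points over the grand total.
points = {"ar": 4, "rs": 9, "wo": 1, "ci": 2, "so": 5}   # assumed tallies
total = sum(points.values())                              # 21

shares = {cat: round(100 * p / total, 2) for cat, p in points.items()}
print(shares)  # {'ar': 19.05, 'rs': 42.86, 'wo': 4.76, 'ci': 9.52, 'so': 23.81}
```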
When indicators for which there are multiple spatial options are evaluated within the indicator categories, the following scoring system is used. In the model, the number of points assigned to each indicator within an indicator category depends on the user's answers in the questionnaire. There are three different ways to assign points to a particular indicator, correlated with the question types:
1. For questions answered with a Likert scale (1 = undesirable; 5 = desirable), 1 point is awarded if the user rated a particular indicator as 4 or 5 and the rated apartment has that characteristic; if the apartment does not have this characteristic, it receives 0 points. If, on the other hand, the user has assigned 1 or 2 to the indicator, the apartment is awarded 0 points if it has the characteristic and 1 point if it does not. If the user marked the characteristic with 3, it is neither important nor unimportant to the user, so it does not affect the rating, and the indicator is removed from the assessment;
2. In the questions about the sizes of certain rooms, users chose the interval in which they think the minimal area of a certain room should be. If the area of the room in the observed apartment is within the interval or higher, 1 point is awarded; if it is smaller, 0 points are awarded;
3. Since each spatial feature of the apartment can be designed in several different ways, i.e., has several possible variants, the user ranks the offered variants in the questionnaire. The apartment receives 1 point if it has the top-ranked variant, and the score is reduced by an equal fraction of a point for each place lower in the ranking.
In the continuation, the model is represented graphically by identifying and recording all the steps and actions necessary for its operation. Then, a mathematical representation of the model is given, based on which the Excel spreadsheet for calculating the score was programmed.
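A compact sketch of the three scoring rules described above; the function names, data layout and the example answer options are ours, and the ranked-variant rule assumes the per-place reduction implied by the notes to Tables 4 and 5 (one third, one quarter or one half of a point, i.e., 1/(number of options − 1)).

```python
def likert_points(user_rating, apartment_has_it):
    """Rule 1: Likert questions (1 = undesirable ... 5 = desirable)."""
    if user_rating == 3:
        return None                      # indicator excluded from assessment
    wanted = user_rating >= 4
    return 1 if wanted == apartment_has_it else 0

def size_points(preferred_min_area, actual_area):
    """Rule 2: room-size questions; 1 point if the room is large enough."""
    return 1 if actual_area >= preferred_min_area else 0

def ranked_points(user_ranking, apartment_variant):
    """Rule 3: ranked variants; the score drops by an equal fraction per place."""
    n = len(user_ranking)
    place = user_ranking.index(apartment_variant)      # 0 = most preferred
    return max(0.0, 1.0 - place / (n - 1))

# Example: additional-toilet question rated 5 but the apartment lacks it -> 0;
# a 22.5 m2 room against a preferred minimum of 20 m2 -> 1; a second-ranked
# kitchen variant out of three hypothetical options -> 0.5.
print(likert_points(5, False), size_points(20, 22.5),
      ranked_points(["open kitchen", "semi-open", "separate"], "semi-open"))
```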
The procedure for user assessment of the spatial characteristics of the apartment consists of three preparatory actions and seven steps.
Preparatory actions:
1. Collecting general data on the apartments and defining the spatial characteristics of the observed apartments based on the project documentation;
2. Application of the paired comparison analysis to the five categories of indicators to determine the importance of each category;
3. Completing a questionnaire that identifies the importance of certain spatial features (indicators) of the apartment.
Steps:
1. Entering general apartment information into the model;
2. Entering data on the importance of each category of indicators into the model;
3. The model calculates the share of the indicator categories in the overall rating;
4. Data entry on the importance of each indicator within each indicator category;
5. The model awards points for each indicator;
6. The model defines the overall rating of the apartment.
The relationship between the preparatory actions and steps is visible in Figure 3. The preparatory actions refer to the collection of information about the apartment and the user's preferences regarding apartment characteristics, while the steps refer to the input of information into the model and the processes that take place in the model.
The rating of the apartment (AR) depends on the rating of each indicator category of the SCA. Each indicator category of the SCA depends on the number of points achieved by each indicator within that indicator category, the number of indicators included in the assessment, and the share of the indicator category in the total score. The relationship between these elements and the overall rating of the apartment is represented by Equation (2).
AR = Σ_(categories) [ S_(c) × ( Σ_{i=1}^{n} I_(c,i) ) / NI_(c) ] (2)
where AR is the overall apartment rating, I_(c,i) is the number of points achieved by an indicator within a certain indicator category, S_(c) is the share of the indicator category in the total score, NI_(c) is the number of indicators within the indicator category that are included in the assessment, and n is the number of criteria within each SCA indicator category. If the apartment meets all the user's conditions, the total rating is 100; if some of the conditions are not met, the apartment rating is lower. The overall rating of the apartment is the foundation on which two different apartments can be compared and by which it can be determined which is better for a particular future user/buyer.
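A minimal sketch of Equation (2) as reconstructed above (names are ours); with the example user's 19.05% share for the additional-rooms category and 2 of 3 indicator points, it reproduces the 12.70% contribution reported below for Apartment 1.

```python
def apartment_rating(categories):
    """Sum over categories of share * (points earned / number of indicators)."""
    return sum(share * sum(points) / len(points) for share, points in categories)

# Additional-rooms category only: share 19.05%, Apartment 1 earns 2 of 3 points.
print(round(apartment_rating([(19.05, [1, 1, 0])]), 2))   # 12.7
```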
Results
In the continuation of this section, the model is presented through its five segments, each of which refers to a specific category of indicators. The input data on the user's attitudes were taken from one of the surveys in the user question-testing procedure, from a person who at that moment was looking for a new apartment for his family of three; the same person also filled out the additional PCA form. The user's responses can be found in Appendices A and B. The model was tested on the example of two apartments in the city of Osijek, Croatia.
Evaluated Apartments
In order for all conditions related to the location of the apartment in the apartment building and the location of the building within the settlement to be the same, two apartments located in the same apartment building on the same floor in the residential complex Sjenjak in the city of Osijek were selected for testing the model. Considering the preferences of the survey participant, two apartments with a living room and two bedrooms were selected for testing. The apartments are located in a twelve-story building with 106 apartments built in 1974. The floor plans of the apartments being evaluated, as well as the square footage of their rooms, are shown in Figure 4. The floor plans were redrawn based on project documents from the Croatian State Archives in Osijek.
Assessment of the Presence of Additional Rooms (ar)
In the first category of indicators, three spatial characteristics of the apartment were evaluated. Every apartment, with the exception of the studio, contains the minimum required number of rooms: an entrance hall, a bathroom, a kitchen, and bedrooms. In this indicator category, additional rooms (storage areas such as a pantry or a wardrobe, outdoor space, and an additional toilet) that the apartment may contain are evaluated. According to the results of the PCA method, this category has a 19.05% share in the overall rating of the apartment for the user. For the user, it is desirable that the apartment have more than one storage room, and it is important that the apartment have an outdoor area and an additional toilet. Based on user preferences for the SCA in this category, points for each indicator for both apartments are shown in Table 2.
In this category, Apartment 1 received two points due to the lack of an additional toilet, while Apartment 2 received the maximum of three points. The share of the category in the rating depends on the result of the PCA method, the number of indicators, and the individual points obtained for each indicator. For the user, this category makes up 19.05% of the total rating. Since the category contains three indicators, Apartment 1 received a rating of 12.70% for its 2 points, while Apartment 2 reached the maximum rating of 19.05% because all indicators correspond to the user's preferences. The calculation of the rating for Apartment 1 is shown in Equation (4) and for Apartment 2 in Equation (5).
Assessment of Room Size (rs)
The second category of indicators evaluates the spatial dimensions of the apartment: the square footage and the height of the rooms. The number of indicators in this category depends on the number of bedrooms in the apartment. Since apartments with a living room and two bedrooms (the parents' bedroom and one child's bedroom) were evaluated, the total number of indicators in this category is five. User preferences regarding each indicator and the points they received for both apartments are shown in Table 3.
Table 3. Indicators, user preferences and apartment characteristics in the category of room size (rs).
In this category, both apartments received 1 point for all indicators and thereby achieved the maximum number of points within the category. For the user, this category makes up 42.86% of the total rating. The calculations of the rating for Apartment 1 are shown in Equation (6) and for Apartment 2 in Equation (7).
Assessment of Window Orientation and Ventilation (wo)
Since apartments with a living room and two bedrooms were evaluated, the total number of indicators in this category is seven: orientation of the windows in the living room, kitchen, and two bedrooms; presence of windows in the kitchen and bathroom; and two-sided orientation of the apartment. User preferences and the points for each indicator for both apartments are shown in Table 4.
Notes to Table 4: 1 If the apartment has the characteristic ranked first, it receives 1 point; for the other places, the score is reduced by one third of a point. 2 Likert scale 1-5 (1 = least desirable; 5 = most desirable). 3 The user has rated this indicator as 3 (neither important nor unimportant); therefore, the indicator has no influence on the rating and is excluded from the assessment.
The room-orientation indicators depend on the user's preferences regarding the orientation of each particular room. In Apartment 1 only the living room, and in Apartment 2 only the kitchen, is oriented according to the user's preferences, so each apartment received 1 point for the corresponding indicator. The other rooms have a less desirable orientation and therefore received fewer points. It should be noted that the indicator for the presence of a window in the bathroom was excluded from the overall assessment, as this indicator has neither a positive nor a negative impact for the user, so this feature of the apartment does not affect the quality of the apartment in the user's opinion. Therefore, 6 out of a maximum of 7 indicators were used to evaluate the apartments for this user in this category. For the user, the importance of this category is 4.76% of the total rating of the apartment. The method of calculating the rating of this category is shown in Equations (8) and (9).
Assessment of Apartments Circulation (ci)
This category contains indicators related to the way the rooms within the apartment are connected. It contains four indicators that evaluate the connection of rooms at the level of the entire apartment: the way communication is established between the living room and the kitchen, between indoor and outdoor spaces, and between the living room and the bedrooms. It should be noted that for the indicator of the connection between indoor and outdoor spaces, only one outdoor space was considered, as this is most often the case. For assessed apartments that have two or three outdoor spaces, the one with the most favourable impact on the rating would be included in the evaluation. User preferences regarding each indicator and the points for all indicators for both apartments are shown in Table 5.
Table 5. Indicators, user preferences and apartment characteristics in the category of circulation (ci). Notes: 1 If the apartment has the characteristic ranked first, it receives 1 point; for the other places, the score is reduced by one quarter of a point. 2 If the apartment has the characteristic ranked first, it receives 1 point; for the other places, the score is reduced by one half of a point.
Considering the points for each indicator and the share of this category in the total score, Equations (10) and (11) show the scores achieved by apartments 1 and 2 in this category. For the user, this category makes up 9.52% of the total rating.
Assessment of Apartments Spatial Organizations (so)
The last indicator category evaluates the spatial characteristics of the apartment that were not represented in the other categories and refers to the arrangement of rooms within the apartment. This category includes the three indicators shown in Table 6, which evaluate the location of the dining area, the presence of a hallway that groups the intimate spaces (bedrooms and bathrooms) and separates them from the entrance area, and the apartment's flexibility.
Table 6. Indicators, user preferences and apartment characteristics in the category of spatial organization (so).
Both apartments have a dining area in the living room, so both received one point for this indicator. In Apartment 1, access to the sleeping area is through the living room, for which it received 0 points, while Apartment 2 received 1 point because the sleeping area is accessed through the corridor connected to the entrance area. Due to the low flexibility caused by the location of the windows and the load-bearing structure, both apartments received 0 points for this indicator. For the user, the importance of this category is 23.81% of the total apartment rating. The overall score for this category is derived from Equation (12) for Apartment 1 and Equation (13) for Apartment 2.
Overall Apartment Ratings
Based on the ratings for each category, the apartments achieved the overall ratings shown in Table 7.
Discussion
The question of housing quality is one that researchers have been dealing with for many years and from many different angles. The quality of housing is of paramount importance not only for the professionals involved in housing construction (investors, architects, engineers of various professions, developers, real estate agents, etc.), but above all for the people who will be its buyers or, rather, its end users: the tenants. Since buying a home is the biggest investment of most people's lives [21], this also increases the responsibility of professionals to ensure the highest possible design and construction quality [2]. A high-quality living space not only plays a major role in a person's identity but is also of great importance for his or her quality of life and physical and psychological well-being [3]. With globalization and computerization leading to an increase in home-based work, and with the COVID pandemic changing how people live and work, more is expected of homes today than in the past. In addition to the usual functions and comfort of living, a well-designed space is now expected to provide the framework for many other functions.
Existing studies and models often look at housing quality from one perspective and use a relatively small number of indicators per category. FQA, for example, has only seven indicators [22]. Other methods do have more indicators, going as far as 55 in research from Vietnam [31]. The aim of this research was to develop a methodology that could enable a comprehensive assessment and comparison of quality criteria for different apartments and that would later allow us to develop a comprehensive housing quality assessment model that includes the assessment of apartments, residential buildings, settlements, and various social and economic aspects of housing. This paper therefore presents the development of a methodology and a model that evaluate the quality of the spatial characteristics of an apartment. The indicators used to evaluate the quality of apartment spatial characteristics were grouped into five categories: (i) additional rooms; (ii) room size; (iii) window orientation and ventilation; (iv) circulation; and (v) spatial organization.
Since both needs and perceptions of housing quality vary from person to person and depend on personal characteristics and the environment in which they live, it is not possible to measure housing quality with a universal tool. Therefore, it is necessary to develop a tool that takes these circumstances into account. To achieve this, the wishes and attitudes of future users were considered when evaluating the apartment. This was achieved in two different ways: (1) In most previous studies, all indicator categories often had the same proportion of the total score (only HQI [24] and HPEM [29] have the ability to change the weightings), which is not the reality from the perspective of the vast majority of users. In this model, each category has a different impact on the overall housing quality score, so each user defines the importance of each category for himself through the PCA method; (2) Each user uses a questionnaire to determine which housing characteristics are important to them and to what extent, and what characteristics an apartment must have to be good for them and meet their criteria.
An assessment defined in this way allows a broader use of the model but also provides more accurate assessment results for a larger number of different users. The goal is not to provide a universal model for apartment assessment but to define a model that allows individual users to accurately evaluate the quality of different apartments in relation to their own needs and desires at the time of purchase.
The proposed model can be used to evaluate apartments that have a living room, a bathroom, a kitchen, and at least one bedroom. For the assessment of a studio apartment, the model must be redesigned to exclude questions about the square footage of the bedroom and its orientation from the assessment. This can be achieved by specifying the number of rooms in the apartment at the first stage of the assessment, whereupon the automatically determined indicators for small apartments are excluded; i.e., for larger apartments, a larger number of bedrooms is added.
Since the model is intended to be adapted to different users, it would be good to leave the possibility to add additional indicators that are important for a particular user. This would mean that before starting to evaluate the apartments, the user must enter the indicator in the model, as well as the number of points that the user thinks this characteristic, if present in the apartment, should achieve.
In evaluating the two apartments shown, the question of the size of the rooms arose. In the questionnaire, and therefore in the model itself, the minimum dimensions of the rooms that make up a quality apartment were defined. In Apartment 2, the bedroom for the child is larger than the minimum size indicated by the users. It is necessary to either: (i) define the interval of optimal square footage of rooms within the questionnaire so that all rooms within this interval receive points and all rooms smaller or larger do not receive points; or (ii) include in the questionnaire a question about how much larger square footage is acceptable so that the rooms of the apartment are evaluated accordingly. If this question were phrased differently, the rating of Apartment 2 according to the size of the room category could be different depending on the user's view.
Using the example of the rating indicators within the window orientation and ventilation category, it can be seen that a particular feature of the apartment that is not important to the user does not factor into the overall rating of the apartment, because regardless of whether the apartment has this feature or not, it does not play a decisive role in the overall rating of the apartment.
For indicators related to room orientation and other questions that offer multiple answer choices that the users rank from desirable (1) to least desirable (5), points are determined based on the number of answers (if there are four answer choices, such as room orientation, the minimum score is 0.25, and each higher-scoring attribute is scored 0.25 points higher). The points were defined in this way in a survey of experts in order to create a simpler scoring system. Further research needs to determine, through model validation with users: (1) how much a different distribution of points would affect the overall score if the user defined the number of points for each answer, and (2) how much more complicated the questionnaire would be if it contained such questions, and how much longer the entire assessment process would take.
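To make the ranked-answer scoring concrete, the following sketch maps a user's ranking position to indicator points exactly as described above (four answer choices give steps of 0.25; two choices give steps of 0.5). The function name and interface are illustrative, not part of the published model.

```python
def rank_to_points(rank, n_choices):
    """Convert a ranking position (1 = most desirable) into indicator points.

    With n_choices answer options, the top-ranked characteristic scores 1
    point and each lower place scores 1/n_choices less, so the least
    desirable option scores 1/n_choices (e.g., 0.25 for four options).
    """
    if not 1 <= rank <= n_choices:
        raise ValueError("rank must lie between 1 and n_choices")
    return 1.0 - (rank - 1) / n_choices

# Room orientation with four answer choices: ranks 1..4 -> 1.0, 0.75, 0.5, 0.25
print([rank_to_points(r, 4) for r in range(1, 5)])
```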
In addition, the validation of the model should consider whether there is a difference in the assessment of orientation between bedrooms and whether the orientation of the master bedroom should be assessed separately from the children's bedrooms.
Based on the presented methodology, the future comprehensive model will be able to be used not only by users but also by various professionals. Depending on the data entered into the model (opinions of architects, various experts, or the public), and by adding or excluding individual indicators or categories of indicators, the model will be able to be used not only for the assessment of individual apartments but also for the assessment of housing quality in general.
What sets this model apart from previous research is the ability of the user to assign weight to both indicators and categories. The user can choose whether one category is more important to him than another and by how much. One user might value room size five times more than window orientation, while another might value it only three times more. Additionally, within each category, the user can assign ratings to those indicators he finds more suitable to his lifestyle preferences. For example, one user might prefer having the dining table in the living room, while another might prefer it in the kitchen. The model as presented allows both of them not only to choose one or the other but also to rank the options in order of importance. No previous research identified in the literature review process has had this level of adaptability to users' preferences.
Conclusions
This research presents the development of a housing quality assessment model based on the spatial characteristics of an apartment. Previous research has already tackled the issue, but too broadly by comparing many categories with only a few indicators. This research focuses specifically on the spatial aspects of the apartment, such as the number and orientation of the rooms, internal communication, the existence of certain areas, etc., while for the time being disregarding other dimensions that were often included in the previous research, such as the location of the building in its surroundings, the size of the building, available services, housing comfort, etc. Those other dimensions are quite subjective and depend greatly not only on the opinions of the user but also on each specific macro-location, or rather city, and perhaps even the culture of living in the country or area. Of course, these dimensions are important to the final decision of which apartment the user should buy, and the model is designed to be able to be expanded with these dimensions and additional indicators.
An additional contribution of this research with regards to previous assessment models is its ability to weigh both indicator categories and individual indicators. For example, one user might favor the spatial organization of the apartment more than the size of individual rooms, or perhaps the orientation more than the communication between the rooms. Therefore, the user can place more emphasis on those indicators he values more than others.
The model, as it stands, does have certain limitations. In addition to only measuring one of the aspects related to housing quality, there might be other spatial characteristics that the user would like to score but that were not included in the model. For this reason, it could be possible in later iterations of the model for the user to add their own categories and indicators. Also, and much more likely, some of the characteristics of the apartment could have the same value (for example, a south-and east-facing bedroom could be equally important) and the same number of points. For now, for simplicity's sake and to make the questionnaire manageable and user-friendly, this variability was left out.
However, future research on this topic could easily address these limitations. The inclusion of other dimensions was planned ahead, and the model is designed to be able to be expanded to include more dimensions, indicator categories, and indicators. Also, the questionnaire could be redesigned to allow for specific user preferences to be included.
The housing quality assessment model presented in this paper fills in the gap identified in the literature by specifically focusing in depth on one of the most important sets of criteria when assessing the quality of housing, the apartment itself, and further builds upon previous assessment models by including the opportunity for each user to tailor the importance of each of the categories to their own preferences.
Funding: This research received no external funding.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The author declares no conflict of interest. Table A1. Relationship between the questions in the survey and the indicators in the model and user responses in regard to preferred apartment characteristics.
Survey Question (Evaluation Method) Offered Answers User Responses
How desirable is it for the apartment to have more than one storage room (e.g., pantry, wardrobe)? Likert scale 1-5 (1 = least desirable; 5 = most desirable) 4
How desirable is it for the apartment to have an outdoor space (balcony, loggia, terrace)? 5
How desirable is it for the apartment to have an additional WC in addition to the bathroom? 4
In your opinion, what should be the minimum area (m2) for you to feel comfortable in the following rooms? 1
How important is it that the bathroom has a window? 3
How important is the two-sided orientation of the apartment? 4
What is the most convenient way to connect the rooms in the apartment?
Rank from 1 to 5 from most convenient to least convenient (circular connection, central living room, via corridor, zoning, direct room to room connection) 1 zoning; 2 circular connection; 3 via corridor; 4 central living room; 5 direct room to room connection
Figure A1. PCA results for a user who participated in the evaluation of apartments. | 11,438 | 2023-08-28T00:00:00.000 | [
"Engineering"
] |
Black-box model for the complete characterization of the spectral gain and noise in semiconductor optical amplifiers
A Black Box Model for the quick and complete characterization of the optical gain and amplified spontaneous emission noise in Semiconductor Optical Amplifiers is presented and verified experimentally. This model provides good accuracy, even neglecting third order terms in the spectral gain shift, and can provide cost reduction in SOA characterization and design as well as provide simple algorithms for hybrid integration in-package control. ©2006 Optical Society of America OCIS codes: (250.5980) Semiconductor optical amplifiers; (140.3280) Laser amplifiers References and links 1. A. Rieznik et al., “Spectral functional forms for modeling SOAs noise,” Proceedings of the SBMO/IEEE MTT-S International Microwave and Optoelectronics Conference 2005 (Brasília, DF, Brazil). 2. K. Stubkjaer, “Semiconductor optical amplifier-based all-optical gates for high-speed optical processing,” IEEE J. Sel. Opt. Quantum Electron. 6, 1428-1435 (2000). 3. E. Conforti, C. M. Gallep, and A. C. Bordonalli, “Decreasing Electro-Optic Switching Time in Semiconductor Optical Amplifiers by Using Pre-Pulse Induced Chirp Filtering,” Optical Ampl. Applications 2003, TOPS, J. Mørk and A. Srivastava, eds. (OSA Publications) 92, 111-116 (2003). 4. J. Leuthold et al., “Novel 3R regenerator based on semiconductor optical amplifier delayed-interference configuration,” IEEE Photonics Technol. Lett. 13, 860-862 (2001). 5. N. C. Frateschi et al., “Uncooled Performance of 10-Gb/s Laser Modules With InGaAlAs–InP and InGaAsP–InP MQW Electroabsorption Modulators Integrated With Semiconductor Amplifiers,” IEEE Photonics Technol. Lett. 17, 1378-1380 (2005). 6. C. Y. Tsai et al., “Theoretical modeling of the small-signal modulation response of carrier and lattice temperatures with the dynamics of nonequilibrium optical phonons in semiconductor lasers,” IEEE J. Sel. Top. Quantum Electron. 5, 596-605 (1999). 7. C. M. Gallep and E. Conforti, “Reduction of Semiconductor Optical Amplifier Switching Times by Pre-Impulse-Step Injected Current Technique,” IEEE Photon. Technol. Lett. 14, 902-904 (2002). 8. C. M. Gallep and E. Conforti, “Simulations on picosecond nonlinear electro-optic switching using an ASE-calibrated semiconductor optical amplifier model,” Opt. Commun. 236, 131-139 (2004). 9. A. A. Rieznik et al., “Black Box Model for Thulium Doped Fiber Amplifiers,” Proc. of the Optical Fibers Conference 2003 (Atlanta, Georgia, USA), 627-628. 10. E. V. Vanin, U. Person, and G. Jacobsen, “Spectral Functional forms for Gain and Noise Characterization of EDFAs,” IEEE J. Lightwave Technol. 20, 243-249 (2002).
Introduction
The Semiconductor Optical Amplifier (SOA) is a key device for implementing nonlinear sub-systems, enabling all-optical signal processing functionalities in Wavelength Division Multiplexing (WDM) networks [2]. SOA-based sub-systems offer feasible alternatives for wavelength conversion [2], switching [3], and pulse regeneration [4], among others. SOAs are also essential for the development of small form factor, low power, uncooled, high performance hybrid integrated optical systems [5]. In all these applications, simple models that reduce testing time and resources for chip or package characterization are desirable, since this step is a costly one in the optoelectronic production chain. Likewise, simple algorithms covering the spectral gain and spontaneous emission over a large range of wavelengths and pump currents can enable the fabrication of tunable components for uncooled operation relying on simple in-package correction logic circuits.
Several different approaches to SOA modeling have been presented in the literature. While sophisticated models can provide design and analysis tools for the active region of the amplifiers [6], some simplified semi-empirical models are versatile for practical analysis [7].
We have recently proposed a Black Box Model (BBM) for the characterization of the gain and noise behavior of Semiconductor Optical Amplifiers (SOA) operating under CW conditions [1], i.e., a model that does not use any intrinsic device parameter and needs few experimental data points to map an entire range of SOA's spectral properties. In this BBM three spectral gain or noise curves for different pump (bias current) condition are used to predict the SOA spectral response under any other bias condition, by knowing only two spectral points of the desired curve. In other words, the whole gain or noise spectrum of a SOA is predicted if only two spectral points of the curve are known and for this prediction we use three spectral functions that can be calculated from three gain or noise spectra. In this work, the BBM for SOAs operating under CW conditions is presented and experimentally validated to describe the spectral gain and amplified spontaneous emission (ASE) behavior. The derivation of the BBM for gain predictions is straightforward, while its application in ASE spectra modeling is not so obvious and deserves special attention. We discuss the conditions under which this BBM works for SOA gain and noise characterization and experimentally validate our results using a commercially available SOA.
Gain modeling
In many recent SOA models, the incremental material gain dG (for a discrete spatial step dz inside the amplifier active medium) as a function of the wavelength λ is related to the electronic population density N(z) through a direct linear term and indirect second- and third-order terms (spectral gain peak shift), and is expressed as in [8], where λ_sh = [λ_0 - a_4(N - N_tr)] is the shift in the central frequency λ_0, N_tr is the transparency carrier density, Γ is the confinement factor, and a_1 to a_4 are gain parameters [8]. It is then straightforward that the total amplifier gain in logarithmic scale can be approximated in the same way, but now with N being an averaged value of the electron-hole population density along the amplifier cavity. Thus, from Eq. (2), it is easy to show that G(λ) can be written in terms of N as in Eq. (3), where R, S, T and W depend only on the SOA internal and intrinsic parameters and, obviously, on the wavelength λ. Now, if the cubic term in Eq. (3) is discarded (i.e., T = 0), Eq. (3) can be rewritten for three different wavelengths (λ, λ_1 and λ_2) and combined in order to eliminate N and N^2, hence obtaining Eq. (4), where the functions F_1, F_2 and F_3 depend on R, S and W, which are evaluated at λ, λ_1, and λ_2.
Equation (4) is the BBM fundamental equation and shows that the gain at any wavelength can be expressed as a linear function of the gain at the reference wavelengths λ_1 and λ_2 if the F spectral functions are known. The main advantage of the BBM interpolation process is that these three spectral functions (Fs) can be easily determined for the amplifier as a whole, including all penalties inherent to engineering mounts (packaging, gain polarization dependence, optical interconnections, etc.). In fact, F_1, F_2 and F_3 are obtained from three complete spectral gain curves, each one measured under a different SOA bias current (say A, B and C). Writing Eq. (4) three times for these three different pump conditions, a set of equations is obtained which, in matrix form, is Eq. (5). Solving this system, F_1, F_2 and F_3 are obtained, and the gain spectrum at any other pump condition can then be determined through Eq. (4) by measuring the optical gain at just the two reference wavelengths λ_1 and λ_2. In other words, to reduce device testing complexity, one only needs to measure the gain spectra for three bias currents once and then, under any other operating condition, the optical gain at two fixed wavelengths to obtain the entire spectrum. Experimental validation is presented in Section 3. It now becomes important to evaluate an extension of the same approach to treat the ASE noise.
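As an illustration of how the BBM can be applied numerically, the sketch below assumes the affine form implied by Eq. (4), G(λ) = F_1(λ)G(λ_1) + F_2(λ)G(λ_2) + F_3(λ), solves the 3×3 system of Eq. (5) per wavelength from three calibration spectra, and then predicts a full spectrum from two reference-wavelength measurements. The array layout, function names and synthetic data are illustrative assumptions, not part of the published model.

```python
import numpy as np

def fit_bbm(spectra_dB, ref_idx):
    """Solve Eq. (5) per wavelength for F1, F2, F3 (all in the dB domain).

    spectra_dB: shape (3, n_wavelengths), gain (or ASE power) spectra measured
    at three calibration bias currents. ref_idx: indices of the two reference
    wavelengths lambda_1 and lambda_2.
    Assumes G(l) = F1(l)*G(l1) + F2(l)*G(l2) + F3(l).
    """
    g_ref = spectra_dB[:, ref_idx]                 # (3, 2) gains at lambda_1,2
    A = np.hstack([g_ref, np.ones((3, 1))])        # 3x3 system matrix
    return np.linalg.solve(A, spectra_dB)          # rows: F1, F2, F3 vs wavelength

def predict_bbm(F, g_at_refs):
    """Predict a full spectrum from the gain measured at the two references."""
    g1, g2 = g_at_refs
    return F[0] * g1 + F[1] * g2 + F[2]

# Hypothetical usage with synthetic 400-point calibration spectra:
calib = np.random.rand(3, 400)                     # placeholder calibration data
F = fit_bbm(calib, ref_idx=[50, 300])
new_spectrum = predict_bbm(F, calib[0, [50, 300]])  # reproduces the first curve
```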
Noise modeling
It is well known that the ASE output power of an SOA, in a bandwidth B, is given by Eq. (6), where the superscript L indicates that linear scale is used and N_sp is the so-called noise factor; the rightmost term in Eq. (6) is valid for G^L >> 1, as is usually the case in SOAs. One can write Eq. (6) in logarithmic scale and use Eq. (3) with T = 0 to obtain Eq. (7), where Seq^dBm is equal to N_sp(λ)B in logarithmic scale (dBm), called the 'equivalent input noise term' to stress that the ASE output power can be expressed as the amplification of this equivalent input noise. Now, Seq^dBm depends on N, the carrier population density, and so it can be expanded as a power series in N. Assuming this expansion up to the quadratic term and rearranging the terms proportional to each power of N in Eq. (7), one can proceed as in the derivation of Eq. (4): writing Eq. (7) for three different wavelengths and combining them to eliminate N and N^2 yields Eq. (8). Therefore, a similar approach to that described in Section 2.1 can, in principle, be employed for ASE characterization in amplifiers. One should observe that the main approximation in the construction of Eq. (8) is discarding third and higher order terms in both Seq^dBm(λ) and G(λ) in Eq. (7). Thus, Eq. (8) is an even more limited solution for the ASE output power than Eq. (4) is for the gain of the SOA. Nevertheless, as shown in the next Section, this equation provides excellent theoretical predictions for the ASE output power of a commercially available SOA.
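A compact way to see why the gain machinery carries over to the ASE is the logarithmic decomposition described above. The lines below are a hedged reconstruction of the structure of Eqs. (6)-(8) from the surrounding definitions; the exact symbols of the original equations are not preserved in the extracted text, and the F' functions are my notation for the ASE counterparts of F_1, F_2, F_3.

```latex
% ASE output power in linear scale (Eq. (6)); constant factors are folded into
% N_sp B per the definition of Seq above. The approximation holds for G^L >> 1.
P_{ASE}^{L}(\lambda) = N_{sp}(\lambda)\,\big[G^{L}(\lambda)-1\big]\,B
                    \approx N_{sp}(\lambda)\,G^{L}(\lambda)\,B
% In logarithmic scale (Eq. (7)):
P_{ASE}^{dBm}(\lambda) \approx G^{dB}(\lambda) + Seq^{dBm}(\lambda),
\qquad Seq^{dBm}(\lambda) \equiv 10\log_{10}\!\big(N_{sp}(\lambda)\,B\big)
% Eliminating N and N^2 at three wavelengths gives the affine BBM relation (Eq. (8)):
P_{ASE}^{dBm}(\lambda) = F'_1(\lambda)\,P_{ASE}^{dBm}(\lambda_1)
                       + F'_2(\lambda)\,P_{ASE}^{dBm}(\lambda_2) + F'_3(\lambda)
```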
Experimental validation
ASE and optical gain measurements were used to validate the BBM. The gain and ASE spectra of a commercially available SOA (Corning Inc.) were measured with an Optical Spectrum Analyzer (Anritsu, MS96A), using 400-point discretization for the acquired spectral span. The optical signals were collected directly from the SOA module with single-mode fiber cables with FC-APC (angled) connectors, avoiding spurious reflections. The SOA bias current was varied, in 50-mA steps, from 100 mA to 450 mA, and typical ASE spectra are presented in Fig. 1(a). With the experimental data collected, the BBM proceeds as follows: first, three ASE spectra are used in Eq. (5) to calculate the Fs spectral functions. Then, at different pump conditions (bias currents), the ASE power is measured at the two chosen reference wavelengths, λ_1 and λ_2, to predict the whole spectrum through Eq. (8). In this example the curves corresponding to bias currents of 100 mA, 250 mA and 400 mA were chosen to calculate the Fs functions, with λ_1 = 1450 nm and λ_2 = 1550 nm as the reference wavelengths. The calculated curves are shown in Fig. 1(b). To better visualize the BBM accuracy, the relative error ((P_exp - P_BBM)/P_exp) was calculated and is presented in Fig. 2, showing agreement between the BBM reconstruction and the experimental data within 2% in all cases.
The same procedure applied to the ASE data was then applied to the SOA optical gain. In this case, however, due to the limited bandwidth of our CW tunable laser, the model was tested in a much narrower band, from 1520 to 1570 nm. The optical power injected into the SOA by the tunable laser was -10 dBm. The SOA bias current was varied in 50 mA steps and a computer-controlled Optical Spectrum Analyzer measured the output optical power. The net SOA optical gain, accounting for the back-to-back link losses, is presented in Fig. 3(a). In this case, the gain curves corresponding to bias currents of 100 mA, 200 mA and 400 mA were chosen to calculate the Fs functions, with 1530 nm and 1555 nm as the reference wavelengths. The BBM prediction for the SOA optical gain is presented in Fig. 3(b). Good accuracy was obtained, as shown by the relative error presented in Fig. 4 (computed as for the ASE case), which stays within 4%. The 10% relative error point (outside the figure span) on the 150 mA curve was due to an experimental fluctuation in the laser optical power at 1520 nm. Therefore, the accuracy is similar for both ASE and gain.
The second observation is that noise in the experimental data propagates through the BBM interpolation mechanism, as can be seen in Fig. 1 around the region of 1.40 μm. The noisy character of this region in Fig. 1(b) is a consequence of the fluctuations also observed in the spectral curves used to calculate the Fs functions (Fig. 1(a)).
To guarantee that the probe signal (-10 dBm) used to measure the SOA gain was not saturating the amplifier, the optical gain versus optical input power response was measured for four bias currents (50 mA, 100 mA, 250 mA and 500 mA, not shown) and optical input powers from -30 dBm up to 3.8 dBm, with less than 2 dB of gain depletion. This guarantees that the linear gain regime was used.
Discussion and conclusion
The validity of a simple BBM has been experimentally demonstrated for ASE and optical gain data. Since we have neglected all cubic and higher order dependences on average carrier density for the model, we can conclude that these terms do not significantly affect SOA gain and ASE behavior in the C-band.
Interestingly, the result presented here concerning the linear relation between the gain at three different wavelengths was first shown to correctly predict Thulium-Doped Fiber Amplifiers (TDFAs) with good experimental accuracy [9]. However, while in the case of SOAs the linear relation between the gain at three different wavelengths arises from the fact that the cubic terms in the electronic population density can be neglected in the modeling, in the TDFA case it arises from the fact that three energy levels are involved in the amplification process. The BBM presented in [9] is an extension to a three-level system of a model originally presented for erbium-doped amplifiers, i.e., for a two-level system [10]. | 3,190.6 | 2006-02-20T00:00:00.000 | [
"Engineering",
"Physics"
] |
Photoacoustic imaging velocimetry for flow-field measurement
We present the photoacoustic imaging velocimetry (PAIV) method for flow-field measurement based on a linear transducer array. The PAIV method is realized by using a Q-switched pulsed laser, a linear transducer array, a parallel data-acquisition equipment and dynamic focusing reconstruction. Tracers used to track liquid flow field were realtimely detected, two-dimensional (2-D) flow visualization was successfully reached, and flow parameters were acquired by measuring the movement of the tracer. Experimental results revealed that the PAIV method would be developed into 3-D imaging velocimetry for flow-field measurement, and potentially applied to research the security and targeting efficiency of optical nano-material probes. ©2010 Optical Society of America OCIS codes: (170.5120) Photoacoustic imaging; (170.0110) Imaging systems; (170.3010) Image reconstruction techniques; (120.7250) Velocimetry. References and links 1. R. I. Siphanto, K. K. Thumma, R. G. Kolkman, T. G. van Leeuwen, F. F. de Mul, J. W. van Neck, L. N. van Adrichem, and W. Steenbergen, “Serial noninvasive photoacoustic imaging of neovascularization in tumor angiogenesis,” Opt. Express 13(1), 89–95 (2005). 2. Q. Zhang, Z. Liu, P. R. Carney, Z. Yuan, H. Chen, S. N. Roper, and H. Jiang, “Non-invasive imaging of epileptic seizures in vivo using photoacoustic tomography,” Phys. Med. Biol. 53(7), 1921–1931 (2008). 3. C. K. Liao, S. W. Huang, C. W. Wei, and P. C. Li, “Nanorod-based flow estimation using a high-frame-rate photoacoustic imaging system,” J. Biomed. Opt. 12(6), 064006–064009 (2007). 4. G. F. Lungu, M. L. Li, X. Xie, L. V. Wang, and G. Stoica, “In vivo imaging and characterization of hypoxiainduced neovascularization and tumor invasion,” Int. J. Oncol. 30(1), 45–54 (2007). 5. Z. Yuan, C. Wu, H. Zhao, and H. Jiang, “Imaging of small nanoparticle-containing objects by finite-elementbased photoacoustic tomography,” Opt. Lett. 30(22), 3054–3056 (2005). 6. Z. Yuan, Q. Wang, and H. Jiang, “Reconstruction of optical absorption coefficient maps of heterogeneous media by photoacoustic tomography coupled with diffusion equation based regularized Newton method,” Opt. Express 15(26), 18076–18081 (2007). 7. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). 8. J. J. Niederhauser, M. Jaeger, R. Lemor, P. Weber, and M. Frenz, “Combined ultrasound and optoacoustic system for real-time high-contrast vascular imaging in vivo,” IEEE Trans. Med. Imaging 24(4), 436–440 (2005). 9. R. A. Kruger, K. D. Miller, H. E. Reynolds, W. L. Kiser, Jr., D. R. Reinecke, and G. A. Kruger, “Breast cancer in vivo: contrast enhancement with thermoacoustic CT at 434 MHz-feasibility study,” Radiology 216(1), 279–283 (2000). 10. S. Manohar, S. E. Vaartjes, J. C. van Hespen, J. M. Klaase, F. M. van den Engh, W. Steenbergen, and T. G. van Leeuwen, “Initial results of in vivo non-invasive cancer imaging in the human breast using near-infrared photoacoustics,” Opt. Express 15(19), 12277–12285 (2007). 11. M. Pramanik, G. Ku, C. H. Li, and L. V. Wang, “Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography,” Med. Phys. 35(6), 2218–2223 (2008). 12. S. A. Ermilov, T. Khamapirad, A. Conjusteau, M. H. Leonard, R. Lacewell, K. Mehta, T. Miller, and A. A. Oraevsky, “Laser optoacoustic imaging system for detection of breast cancer,” J. 
Biomed. Opt. 14(2), 024007 (2009). 13. L. Li, R. J. Zemp, G. Lungu, G. Stoica, and L. V. Wang, “Photoacoustic imaging of lacZ gene expression in vivo,” J. Biomed. Opt. 12(2), 020504 (2007). 14. R. O. Esenaliev, I. V. larina, K. V. Larin, D. J. Deyo, M. Motamedi, and D. S. Prough, “Optoacoustic technique for noninvasive monitoring of blood oxygenation: A feasibility study,” Appl. Opt. 41, 4722-4731 (2002). (C) 2010 OSA 10 May 2010 / Vol. 18, No. 10 / OPTICS EXPRESS 9991 #124865 $15.00 USD Received 1 Mar 2010; revised 12 Apr 2010; accepted 13 Apr 2010; published 28 Apr 2010 15. H. F. Zhang, K. Maslov, G. Stoica, and L. V. Wang, “Functional photoacoustic microscopy for high-resolution and noninvasive in vivo imaging,” Nat. Biotechnol. 24(7), 848–851 (2006). 16. J. Laufer, C. Elwell, D. Delpy, and P. Beard, “In vitro measurements of absolute blood oxygen saturation using pulsed near-infrared photoacoustic spectroscopy: accuracy and resolution,” Phys. Med. Biol. 50(18), 4409–4428 (2005). 17. M. L. Li, J. T. Oh, X. Y. Xie, G. Ku, W. Wang, C. Li, G. Lungu, G. Stoica, and L. V. Wang, “Simultaneous molecular and hypoxia imaging of brain tumors in vivo using spectroscopic photoacoustic tomography,” Proc. IEEE 96(3), 481–489 (2008). 18. S. Hu, B. Rao, K. Maslov, and L. V. Wang, “Label-free photoacoustic ophthalmic angiography,” Opt. Lett. 35(1), 1 (2010). 19. E. I. Galanzha, E. V. Shashkov, T. Kelly, J. W. Kim, L. Yang, and V. P. Zharov, “In vivo magnetic enrichment and multiplex photoacoustic detection of circulating tumour cells,” Nat. Nanotechnol. 4(12), 855–860 (2009). 20. P. Ephrat, M. Roumeliotis, F. S. Prato, and J. J. L. Carson, “3D photoacoustic imaging of a moving target,” Proc. SPIE 7177, 71770W–1-9 (2009). 21. P. Ephrat, M. Roumeliotis, F. S. Prato, and J. J. L. Carson, “Four-dimensional photoacoustic imaging of moving targets,” Opt. Express 16(26), 21570–21581 (2008). 22. D. W. Yang, D. Xing, S. H. Yang, and L. Z. Xiang, “Fast full-view photoacoustic imaging by combined scanning with a linear transducer array,” Opt. Express 15(23), 15566–15575 (2007). 23. L. M. Nie, D. Xing, and S. H. Yang, “In vivo detection and imaging of low-density foreign body with microwave-induced thermoacoustic tomography,” Med. Phys. 36(8), 3429–3437 (2009). 24. C. K. Liao, M. L. Li, and P. C. Li, “Optoacoustic imaging with synthetic aperture focusing and coherence weighting,” Opt. Lett. 29(21), 2506–2508 (2004). 25. W. J. Welch, X. Deng, H. Snellen, and C. S. Wilcox, “Validation of miniature ultrasonic transit-time flow probes for measurement of renal blood flow in rats,” Am. J. Physiol. Renal Physiol. 268, F175–F178 (1995). 26. X. Jin, and L. V. Wang, “Thermoacoustic tomography with correction for acoustic speed variations,” Phys. Med. Biol. 51(24), 6437–6448 (2006). 27. R. J. Zemp, L. Song, R. Bitton, K. K. Shung, and L. V. Wang, “Realtime photoacoustic microscopy in vivo with a 30-MHz ultrasound array transducer,” Opt. Express 16(11), 7915–7928 (2008). 28. H. Golster, M. Lindén, S. Bertuglia, A. Colantuoni, G. Nilsson, and F. Sjöberg, “Red blood cell velocity and volumetric flow assessment by enhanced high-resolution laser Doppler imaging in separate vessels of the hamster cheek pouch microcirculation,” Microvasc. Res. 58(1), 62–73 (1999). 29. D. E. Goertz, J. L. Yu, R. S. Kerbel, P. N. Burns, and F. S. Foster, “High-frequency 3-D color-flow imaging of the microcirculation,” Ultrasound Med. Biol. 29(1), 39–51 (2003). 30. L. Sandrin, S. Manneville, and M. 
Fink, “Ultrafast two-dimensional ultrasonic speckle velocimetry: A tool in flow imaging,” Appl. Phys. Lett. 78(8), 1155–1157 (2001). 31. H. B. Kim, J. Hertzberg, C. Lanning, and R. Shandas, “Noninvasive measurement of steady and pulsating velocity profiles and shear rates in arteries using echo PIV: in vitro validation studies,” Ann. Biomed. Eng. 32(8), 1067–1076 (2004). 32. H. Fang, and L. V. Wang, “M-mode photoacoustic particle flow imaging,” Opt. Lett. 34(5), 671–673 (2009). 33. A. De La Zerda, C. Zavaleta, S. Keren, S. Vaithilingam, S. Bodapati, Z. Liu, J. Levi, B. R. Smith, T. J. Ma, O. Oralkan, Z. Cheng, X. Y. Chen, H. J. Dai, B. T. Khuri-Yakub, and S. S. Gambhir, “Carbon nanotubes as photoacoustic molecular imaging agents in living mice,” Nat. Nanotechnol. 3(9), 557–562 (2008). 34. J. W. Kim, E. I. Galanzha, E. V. Shashkov, H. M. Moon, and V. P. Zharov, “Golden carbon nanotubes as multimodal photoacoustic and photothermal high-contrast molecular agents,” Nat. Nanotechnol. 4(10), 688–694 (2009).
Ephrat et al. developed a 3-D photoacoustic imaging system to image a moving target based on a sparse array and iterative image reconstruction [20,21]. Liao et al. measured nanorod-based flow speed with photoacoustic wash-in technology [3]. In this paper, real-time photoacoustic imaging technology was used to measure a flow field. Moving tracers used to track the flow field were detected in real time, and the speed of the flow field was acquired by measuring the movement of the tracer across continuous PA images. This speed-measuring method is referred to as photoacoustic imaging velocimetry.
The method is realized by using a Q-switched pulsed laser, a linear transducer array, parallel data-acquisition equipment and dynamic focusing reconstruction. In this imaging system, the linear transducer array is used as a staring array. For each laser pulse, PA signals are collected by the 64-channel staring array and temporarily stored in a buffer memory module of the data-acquisition equipment. After a complete data set is acquired, the data are transferred to a PC, and PA images are reconstructed off-line with the dynamic focusing algorithm. The frame rate of the PAIV system is 15 Hz, which is limited by the laser pulse repetition frequency. With the PAIV system, PA signals of a tracer moving in the flow field can be captured in real time, 2-D flow visualization can be recorded with continuous PA images, and flow parameters can be acquired by measuring the movement of the tracer. To verify the flow visualization and flow-field measurement capability of the imaging system, phantoms made of a wood block and Indian ink were used to track the flow field. Experimental results validated that the PAIV system is capable of two-dimensional flow visualization for flow-field measurement. To our knowledge, this is the first report of flow-field measurement with the photoacoustic imaging velocimetry method.
Methods
The schematic of the PAIV system is shown in Fig. 1. The limited acceptance angle induces received heterogeneity over the field of view (FOV) in the data acquisition process. In particular, the effective received accumulation number of points at the edge of the FOV is much smaller than that of points in the centre of the FOV (when D ≥ 1/2, we deem it an effective reception and add 1 to Ω_ij, which is defined as the effective received accumulation number of point (i, j) in the imaging region). The effective received accumulation number is an idealized parameter, and in this paper it is calculated during image reconstruction: if the point (i, j) is located within the effective receiving angle of the m-th transducer, i.e., the element response there is greater than the minimum response value of the transducer, Ω_ij is increased by 1. For the same absorber at different locations in the FOV, the intensity of the reconstructed images should be uniform. Thus, after focusing reconstruction, in order to correct the received heterogeneity, the pixel value of each point is normalized by the received weighting factor. For the point (i, j) in the imaging region, the image intensity S is expressed as a sum over the array elements of the received signals evaluated at the corresponding delays (dynamic focusing), where RF_m is the photoacoustic signal received by the m-th transducer element, t is the time after the trigger, τ_ij corresponds to the acoustic propagation time from the point (i, j) to the m-th transducer element, M is the number of elements in the linear transducer array, and w is the received weighting factor, w = Ω_ij / Ω_max, where Ω_max is the maximum of Ω in each individual image.
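A minimal numerical sketch of the reconstruction just described is given below. It assumes a plain delay-and-sum implementation of the dynamic focusing rule with the weighting factor w = Ω_ij/Ω_max; the exact discretized equation is not reproduced in the extracted text, so treat the indexing, interpolation and geometry choices as illustrative.

```python
import numpy as np

def reconstruct_pixel(rf, elem_x, elem_z, px, pz, fs, c, omega, omega_max):
    """Delay-and-sum intensity of one pixel (px, pz).

    rf: (M, n_samples) array of received PA signals, one row per element.
    elem_x, elem_z: element coordinates (m). fs: sampling rate (Hz).
    c: speed of sound (m/s). omega: effective received accumulation number
    of this pixel; omega_max: its maximum over the image (so w = omega/omega_max).
    """
    tau = np.hypot(px - elem_x, pz - elem_z) / c      # propagation delays (s)
    idx = np.round(tau * fs).astype(int)              # nearest-sample delay indices
    idx = np.clip(idx, 0, rf.shape[1] - 1)
    s = rf[np.arange(rf.shape[0]), idx].sum()         # delay-and-sum
    w = omega / omega_max                              # received weighting factor
    return s / w if w > 0 else 0.0

# Hypothetical usage: 64 elements, 0.72 mm pitch, 40 MS/s sampling, 1500 m/s.
rf = np.random.randn(64, 2048)
x = (np.arange(64) - 31.5) * 0.72e-3
print(reconstruct_pixel(rf, x, np.zeros(64), 0.0, 0.02, 40e6, 1500.0, 48, 64))
```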
After correcting the received heterogeneity, the image uniformity in contrast changes. For points in the center area, the PA signal can simply be regarded as being received completely by the linear transducer array. Correspondingly, for points at the edge of the FOV, the PA signal is received incompletely. Thus the image intensity in the center of the FOV was used as a standard to assess the improvement of image uniformity in contrast.
After PA reconstruction, a time sequence of images can be acquired, and each image shows the distribution of the absorbed optical energy at a different time. Thus the dynamic information of a moving light-absorbing target can be displayed with the continuous PA images. With the real-time PA imaging method, dynamic information of the flow field can be acquired by imaging the moving tracers in the investigated field. To realize imaging velocimetry, the position of the tracer in each reconstructed image must be quantified. In this paper, the centre of the tracer is used to describe its position in a simple way: the centre and the size are approximated by the midpoint and the length of the tracer in the moving direction. For more accurate measurement, a better method of assessing the movement should be developed. The flow velocities were acquired by measuring the movement of the midpoint of the tracers. For the liquid tracer in the flow field, the variation of its length in the reconstructed images is used to evaluate the diffusion velocity of the liquid tracer.
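The velocimetry step itself reduces to tracking the tracer midpoint between consecutive frames and dividing the displacement by the frame interval (1/15 s here), with the length change giving the diffusion velocity. The sketch below illustrates this on thresholded 1-D image profiles; the segmentation threshold, projection along the moving direction and synthetic data are illustrative assumptions.

```python
import numpy as np

def tracer_midpoint_and_length(profile, axis_coords, threshold=0.5):
    """Midpoint and length (m) of the tracer along the moving direction.

    profile: 1-D PA image profile along the moving direction;
    axis_coords: physical coordinate of each pixel (m).
    """
    mask = profile > threshold * profile.max()
    pts = axis_coords[mask]
    return 0.5 * (pts.min() + pts.max()), pts.max() - pts.min()

def estimate_velocities(frames, axis_coords, frame_rate=15.0):
    """Mean flow velocity from midpoint displacement and mean diffusion
    velocity from length change between successive frames (both in m/s)."""
    mids, lengths = zip(*(tracer_midpoint_and_length(f, axis_coords) for f in frames))
    dt = 1.0 / frame_rate
    return np.diff(mids).mean() / dt, np.diff(lengths).mean() / dt

# Hypothetical usage: a tracer moving at ~1.2 cm/s sampled at 15 Hz over 3 frames.
x = np.linspace(0, 0.04, 400)
frames = [np.exp(-((x - (0.01 + 0.012 * k / 15)) / 0.002) ** 2) for k in range(3)]
print(estimate_velocities(frames, x))
```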
In this study, a small rectangular wood block served as the solid absorber. A plastic tube filled with water served as the transport apparatus; it has an inner diameter of 3 mm and an outside diameter of 4 mm, with a sound velocity of 2340 m/s. An infusion pump (CZ-74901-15, Cole Parmer, Vernon Hills, IL, USA) and a 1 ml standard syringe connected with the plastic tube were used to produce a constant flow. The wood block floated on the water and could be driven by the flow. A piece of black tape glued on the plastic tube was used to mark position. Experiment 1 (Fig. 2) was performed to demonstrate that the improved dynamic focusing algorithm (in which the received heterogeneity is corrected) is able to reduce the received heterogeneity of the FOV. In experiment 1, the wood block served as the absorber target and was located at the edge and at the center of the FOV, respectively, and the transducer array was placed parallel to the plastic tube at a distance of 2 cm. Experiment 2 (Fig. 3) was designed to demonstrate that the PAIV system can image a solid tracer moving in a flow field. In experiment 2, the wood block served as the solid tracer, and a velocity of ~1.20 cm/s inside the plastic tube was created by the infusion pump. The schematic of the transport apparatus used in experiments 1 and 2 is shown in Fig. 1(b). Experiment 3 (Fig. 4) was performed to further demonstrate that the system can image a solid tracer with varying flow velocity. In experiment 3, a tapered glass tube was used to produce a varying velocity, and the corner of the tapered glass tube was located in the middle of the imaging area. The configuration schematic of the tapered glass tube is shown in Fig. 1(c).
In order to verify that the PAIV system is capable of whole-field flow visualization, Indian ink served as a liquid tracer to track the flow field. In experiments 4 (Fig. 5) and 5 (Fig. 6), a tee pipe was used to produce a mixed flow; the configuration schematic is shown in Fig. 1(d). Port 1 of the tee pipe was connected to the infusion pump through the standard syringe to obtain a constant flow. Port 2 was connected to another syringe and used to inject a small amount of Indian ink. The injected ink diffuses and flows toward port 3 together with the flow from port 1. Port 3 was connected to the plastic tube and located in the FOV. In experiment 4, the transducer array was placed parallel to and facing the flow direction, respectively. In experiment 5, the transducer array was placed obliquely, forming an acute angle with the flow direction; the transducer array and the investigated flow direction were also in the same plane. The flow velocity in experiment 5 was 0.20 cm/s.
Results
The result of experiment 1 is shown in Fig. 2. Figures 2(a) and 2(b) are reconstructed PA images obtained with the conventional dynamic focusing algorithm [23,24]. Clearly, the image intensity of the wood block in Fig. 2(a) is weaker than that in Fig. 2(b). Figures 2(c) and 2(d) are reconstructed PA images from the improved dynamic focusing algorithm (the received heterogeneity is corrected by the received weighting factor). Obviously, the image intensity of the wood block in Fig. 2(c) is higher than that in Fig. 2(a), while the intensities in Figs. 2(b) and 2(d) are almost the same. After correcting the received heterogeneity, the image intensity of the wood block at the edge of the FOV is closer to that in the center of the FOV. Thus we believe that Fig. 2(c) is more uniform in contrast over the whole image than Fig. 2(a). The intensity of the wood block at the edge of the FOV still does not fully reach that in the center, partly because the PA signal from the edge is not received completely by the array, and another reason should be the inhomogeneous light due to the heterogeneity of the light energy distribution. In conclusion, after correcting the received heterogeneity, the intensity of an absorber at the edge of the FOV is closer to that in the center of the FOV, and with the improved dynamic focusing algorithm a relatively homogeneous imaging area can be obtained. In experiment 2, the movement of the tracer with constant velocity was captured over 2.06 seconds. The reconstructed result is shown in Fig. 3 with an interval of 0.33 seconds. There are two absorbers in the reconstructed images: the shorter one is the tracer moving with the flow field, and the longer one at the right of the reconstructed images is the marker. The distances traveled between successive images in Fig. 3 are 3.7 mm, 3.8 mm, 3.8 mm, 4.1 mm, 3.9 mm and 4.3 mm, respectively, and the average velocity of the tracer is 1.18 cm/s. The experimental result demonstrates that the PAIV system is capable of flow visualization for a moving target. In this experiment, the velocity of the flow field is evaluated by measuring the movement of the solid tracer. Since the solid tracer is driven by the investigated flow in this measuring mode, a small and low-density tracer should be selected. Reconstructed PA images of experiment 4 are shown in Fig. 5. In Fig. 5A, images (a)-(e) are given according to time with an interval of 0.2 seconds; the liquid tracer is flowing from left to right. In Fig. 5B, images (a)-(e) are given in time sequence with an interval of 0.4 seconds; the liquid tracer is flowing toward the transducer array. In Fig. 5, the moving direction of the liquid tracer is accurately acquired, and the diffusion phenomenon is also clearly displayed. In the diffusion process, the ink is affected by the adhesion force of the flow and cannot maintain a fixed shape, so the reconstructed PA images of the liquid tracer change with time. In this experiment, the ever-changing liquid tracer is imaged in real time, which demonstrates that the PAIV system is capable of whole-field flow visualization.
Figure 6 shows the reconstructed PA images of the liquid tracer flowing in an oblique direction toward the transducer array in experiment 5. Images (a)-(f) are given according to time with an interval of 0.2 seconds. Furthermore, the flow velocity and diffusion velocity of the liquid tracer are measured from successive PA images. The mean flow velocity is 0.17 cm/s and the mean diffusion velocity is 0.07 cm/s in this experiment. It is demonstrated that the PAIV system can acquire flow direction information. In this experiment, the otherwise arbitrary flow direction was restricted to a plane, because the linear transducer array selects information only within its imaging plane.
Discussion and conclusion
We have presented the photoacoustic imaging velocimetry method for flow-field measurement. The present work has demonstrated that the system is capable of 2-D flow visualization. Furthermore, both the velocity and the direction of the investigated flow can be obtained from reconstructed PA images. The current system can only be used for 2-D measurement, which is limited by the plane-selecting transducer array. For 3-D measurement, a planar array with a large 3-D acceptance angle and an appropriate structure is needed, and this is our future work. The frame rate of the PAIV system is limited by the low pulse repetition frequency (PRF) of the laser, so the time resolution of the system is restricted. In addition, in the PAIV measurement the sound velocity v_0 of the PA wave is affected by the flow field [25]. Assuming that the flow is moving away from the transducer with a velocity v (in this case the deviation of the speed of sound is maximal), the sound velocity becomes v_0 - v, and the relative error of the target location can be described as v/(v_0 - v). If 1% is selected as the limit of the relative error, the maximum measurable velocity of the PAIV method is about 14.85 m/s (assuming that the sound speed is equal to 1500 m/s). In practical medical applications, the sound velocity varies throughout the medium. In this case, the measured relative velocity can be used to evaluate the flow field qualitatively, and the dynamic information is also useful for clinical diagnosis. For further development, the investigated velocity can be acquired by correcting the acoustic speed variations with ultrasonic transmission tomography [26]. For the current imaging system, the actual limiting velocity is well below the maximum measurable velocity due to the low PRF of the laser. Previously, a 1 kHz repetition-rate laser has been used in PA imaging by Wang et al. [27]. With a high pulse repetition frequency laser in the future, the PAIV system can overcome the current limiting velocity.
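As a quick check of the number quoted above, the sketch below evaluates the relative-error expression v/(v_0 - v) and inverts it for the 1% limit; the arrangement of the formula is the one reconstructed from the text and should be read as an assumption.

```python
def max_measurable_velocity(v0=1500.0, max_rel_error=0.01):
    """Largest flow velocity v (m/s) such that v / (v0 - v) <= max_rel_error."""
    # v / (v0 - v) = e  =>  v = e * v0 / (1 + e)
    return max_rel_error * v0 / (1.0 + max_rel_error)

print(max_measurable_velocity())  # ~14.85 m/s for v0 = 1500 m/s and a 1% limit
```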
Light scattering has two effects: (1) the light energy distribution becomes more homogeneous, which induces a more homogeneous imaging area; and (2) the light energy is decreased, so the intensity of the reconstructed image is reduced. However, in this paper the imaging area was located in a plane nearly perpendicular to the light, so the effect of light scattering on the image intensity can be neglected.
The PAIV method has some advantages for flow-field measurement. 1) Compared to Doppler flow imaging (laser Doppler imaging and ultrasound Doppler imaging) [28,29], the PAIV method can provide the direction of the investigated velocity vector, while the velocity measured by Doppler flow imaging is just the projection of the investigated velocity vector onto the probe direction. 2) Compared to speckle contrast flow imaging (SCFI) and particle imaging velocimetry (PIV) [30,31], the PAIV method possesses better sensitivity. With a single probe (light or sound), SCFI and PIV are based on the scattering properties of endogenous red blood cells or exogenous particles. However, light is strongly scattered by tissue and ultrasound waves are strongly reflected at tissue boundaries [32]. In contrast, PAI employs both light and sound as probes and relies on light absorption contrast.
3) The PAIV method can provide more information about the investigated flow field than the mean flow velocity alone. The measuring region of the imaging system is a large area, not a single point, so the method can be used to display spatially varying flow.
The method can be regarded as a new scientific research tool, potentially applicable to studying the safety and targeting efficiency of optical nano-material probes. In recent years, nanotechnology research has attracted increasing attention, and optical nano-material probes, such as carbon nanotube probes [33] and golden carbon nanotube probes [34], have been used to target and identify tumors. However, the penetration and the targeting efficiency of the nanoparticles should be assessed experimentally. With the photoacoustic imaging velocimetry method, the movement of an optical nano-material probe could be monitored in real time. By mapping the dynamic distribution of the optical nano-material probe, the penetration and flow tendency could be observed clearly, and the targeting efficiency could be obtained by monitoring the PA intensity of the particles around the tumor.
In conclusion, we have developed the photoacoustic imaging velocimetry method for flow-field measurement. The method can be developed into 3-D imaging velocimetry. The photoacoustic imaging velocimetry system was presented, and its ability to visualize and measure flow has been demonstrated experimentally.
Fig. 1. (a) Schematic of the photoacoustic imaging velocimetry system. PDA: parallel data-acquisition. The light-absorbing target moving along the velocity vector is illuminated by laser pulses. The velocity vector is located in the image plane of the linear transducer array. The black circle represents an optical absorber. (b) Schematic of the transport apparatus (plastic tube) used in experiments 1 and 2. The black rectangle on the left represents the solid tracer (wood block). The black rectangle on the right represents the marker (a piece of black tape). The arrow shows the direction of the flow field. (c) Schematic of the tapered glass tube used in experiment 3. (d) Configuration schematic of the tee pipe used in experiments 4 and 5. Port 1 was connected to the infusion pump through the standard syringe to obtain a constant flow. Port 2 was connected to another syringe and used to inject a small amount of Indian ink. Port 3 was connected to the plastic tube and located in the imaging region.
Fig. 2. Reconstructed photoacoustic images with the wood block located at different positions in the FOV. (a) and (b) Reconstructed PA images from the conventional dynamic focusing algorithm with the absorber target located at the edge and in the centre of the imaging area, respectively. (c) and (d) Reconstructed PA images from the improved dynamic focusing algorithm corrected with the received weighting factor. (e) Reconstructed profiles at y = 11.5 mm of images (a)-(d); the reconstruction profiles of the wood block from (a)-(d) are shown in black, blue, red and green, respectively. The red line in each panel represents the position of the linear transducer array.
Fig. 3. PA images of the solid tracer with constant velocity at different times. The movement of the tracer was captured over 2.06 seconds. The arrow shows the direction of the flow field. The red line represents the position of the linear transducer array.
Fig. 4. PA images of the solid tracer with varying velocity at different times. The movement of the tracer was captured over 1.06 seconds. The arrow shows the direction of the flow field. The red line represents the position of the linear transducer array. The result of experiment 3 is shown in Fig. 4 according to time with an interval of 0.2 seconds. From Figs. 4(a) to 4(c), it is clear that the velocity of the tracer is essentially constant, while from Figs. 4(d) to 4(f) the distance traveled between successive images gradually increases, which means that the velocity of the tracer varies. Furthermore, it can also be observed that the moving direction of the tracer changes from Figs. 4(d) to 4(f), which is attributable to the corner of the tapered glass tube located in the middle of the imaging area.
Fig. 5. PA images of the liquid tracer at different times. The dashed lines represent the positions of the wall of the plastic tube. Arrows show the flow direction. A: The transducer array was placed parallel to the flow direction; the movement of the liquid tracer was captured over 1.06 seconds. B: The transducer array was placed facing the flow direction; the movement of the liquid tracer was captured over 1.66 seconds. The red line represents the position of the linear transducer array.
Fig. 6. PA images of the liquid tracer in a flow with an oblique direction toward the transducer array. The movement of the tracer was captured over 2 seconds. The red line represents the position of the linear transducer array.
(a). A Q-switched Nd:YAG laser (LOTIS TII Ltd, Minsk, Belarus) is used as the excitation source; it operates at 1064 nm with a pulse duration of 10 ns and a pulse repetition rate of 15 Hz. The laser beam is expanded by a concave lens and then homogenized by a piece of ground glass, and the incident energy density is kept below 20 mJ/cm^2. The incident laser beam has a diameter of 2.5 cm. PA signals are detected by the 64-element linear transducer array. Each element of the array has a width of 0.3 mm, a height of 4 mm, and a center frequency of 7.5 MHz with 70% bandwidth. The array has an effective length of 46 mm and the pitch between adjacent elements is 0.72 mm.
The parallel data-acquisition (PDA) equipment is employed to acquire, pre-process and transmit the 64-channel PA signals simultaneously. It contains electronic boards for the receiving module, the A/D conversion module, and the buffer memory module. In the receiving module, the detected signals are first amplified by two-stage amplifiers and then filtered by anti-aliasing filters. The A/D conversion module is equipped with analog-to-digital converters with 12-bit precision, and the data sampling rate is 40 MSamples/s. After A/D conversion, the data from each channel are stored in the buffer memory and finally transferred to the PC through a USB interface. With the PDA equipment, the 64-channel array data can be acquired and transmitted every 1/100 second, so the frame rate of the PDA equipment is up to 100 Hz, which enables the system to image a flowing target. After data acquisition, dynamic focusing reconstruction is applied to reconstruct PA images off-line. Additionally, the PDA equipment is triggered synchronously with the laser pulses. Image intensities of the wood block in Figs. 2(b) and 2(d) are almost identical, and the reason is that in the center of the FOV the received weighting factor w is almost equal to 1. Additionally, it is hard to eliminate the heterogeneity in the image reconstruction process at the edge of the FOV, which explains the remaining difference between Figs. 2(a) and 2(c). For the same point in different images, the parameter Ω would be different, so the range in each image over which the parameter w is almost equal to 1 differs from image to image. Only where the parameter w is almost equal to 1 can the profiles before and after correcting the received heterogeneity coincide with each other. This should be the reason why the coinciding range of the green line and the blue line is different from that of the black line and the red line. | 6,717.2 | 2010-05-10T00:00:00.000 | [
"Engineering",
"Physics"
] |
High Capacity Downlink Transmission with MIMO Interference Subspace Rejection in Multicellular CDMA Networks
We proposed recently a new technique for multiuser detection in CDMA networks, denoted by interference subspace rejection (ISR), and evaluated its performance on the uplink. This paper extends its application to the downlink (DL). On the DL, the information about the interference is sparse, for example, spreading factor (SF) and modulation of interferers may not be known, which makes the task much more challenging. We present three new ISR variants which require no prior knowledge of interfering users. The new solutions are applicable to MIMO systems and can accommodate any modulation, coding, SF, and connection type. We propose a new code allocation scheme denoted by DACCA which significantly reduces the complexity of our solution at the receiving mobile. We present estimates of user capacities and data rates attainable under practically reasonable conditions regarding interferences identified and suppressed in a multicellular interference-limited system. We show that the system capacity increases linearly with the number of antennas despite the existence of interference. Our new DL multiuser receiver consistently provides an Erlang capacity gain of at least 3dB over the single-user detector.
INTRODUCTION
Third generation wireless systems will deploy wideband CDMA (W-CDMA) [1,2] access technology to achieve data transmission at variable rates. Standards [1] call for transmission rates up to 384 Kbps for mobile users and 2 Mbps for portable terminals. On the downlink (DL), high-speed DL packet access (HSDPA) [3,4] allows for transmission rates up to about 10 Mbps in the conventional single-input single-output (SISO) channel and about 20 Mbps in the multiple-input multiple-output (MIMO) channel. It is expected that most of the traffic will be DL due to asymmetrical services like FTP and web browsing. The DL will therefore become the limiting link, and only high DL performance can give the network operator maximal revenue from advanced radio-network technologies.
MIMO [5] and multiuser detection (MUD) [6,7,8] are both very promising techniques for high capacity on the DL in wireless systems. In a noise-limited MIMO system, Shannon capacities increase linearly in SNR with the number of antennas [5] instead of logarithmically as in the SISO system. Recent studies, however, have shown that in an interference-limited MIMO system, this linear relationship is not achieved due to the multiple-access interference (MAI) [8]. In [9,10], it was shown that the gain in such systems is basically limited to the antenna beamforming gain at the receiver. In terms of system capacity, this means that the Erlang capacity increases linearly with the number of antennas. MUD can significantly increase the capacity further, especially when interference is pronounced [11]. It is therefore of prime concern to establish a cost-effective solution that combines MIMO and MUD for optimal DL performance.
MUD is a challenging problem, not only for the uplink (UL), but even more so for the DL.On the UL, the receiving base station knows the connection characteristics of all in-cell users.The DL MUD problem is more difficult because the terminal has no knowledge of active interference, its spreading codes, SF, modulation, coding, and the connection type (packet switched or circuit switched).Furthermore, complexity considerations are more important because terminals are limited by size and price and are restricted in available power.
Most previous work was aimed at the UL (e.g., [11,12,13,14,15,16,17,18,19,20,21]). For the DL, blind adaptive MMSE solutions based on generalizations of single-user detectors (SUDs) have previously been proposed for the STAR [22] receiver in [23], denoted STAR-GSC, and for the RAKE [24] receiver in [25], denoted the generalized RAKE (G-RAKE). These solutions are characterized by low complexity and low risk because they impose the least change to an established technology. But they require the use of short codes, and the capacity gain in a practical DL environment is limited to about 1.5-2.5 dB for the G-RAKE [26,27] (and expectedly in the same range for STAR-GSC). In [28], a solution which offers potentially higher capacity gains is presented. Relying on the use of orthogonal variable spreading factor (OVSF) [29] codes, it probes for interference on the OVSF code tree at a high SF level in order to identify and reject codes with significant energy. This solution is complex because it rejects interference at a high SF level, and it is defined for rejection of in-cell interference only.
We propose a new class of MUD solutions for DL multicellular interference-limited CDMA-based MIMO systems.These new solutions are all DL variants of the previously presented interference subspace rejection (ISR) technique [30] and are therefore referred to as DLISR.The DLISR variants do not rely on prior knowledge of the interference and its properties (e.g., modulation, coding scheme, and connection type).Nor do they attempt to estimate the SF and modulation of the interference.DLISR takes advantage of a concept we denote by virtual interference rejection (VIR) combined with a new OVSF code allocation scheme denoted dynamic power-assisted channelization code allocation (DACCA).VIR reduces complexity in the receiver by attacking interference at a low SF.DACCA provides information to the terminal about the location of interference in the OVSF code-space.DLISR does not necessarily require VIR and DACCA.However, when combined with these new concepts, DLISR provides very high performance at very low complexity.As a benchmark, we consider the PIC [16,17] with soft decision (PIC-SD), which can also exploit the VIR and DACCA techniques.
Performance of MUD detectors relies heavily on the distribution of interference. For instance, MUD typically offers very significant performance gains if the interference arrives from one strong source. However, if interference arrives from numerous weaker sources, MUD performance approaches SUD performance. In order to provide convincing results with regard to real-world applications, it follows that interference must be modelled realistically. We have therefore implemented a precise model as shown in Figure 1. First we establish a realistic realization of the interference using a radio-network simulator (RNS); then this information is used for the link-level simulations to assess the BER for DLISR, PIC-SD, and the SUD. Repeating the cycle many times and combining the results, we arrive at system-level capacity estimates. Our link-level simulator makes assumptions very similar to those in W-CDMA standards. We do not rely on any a priori knowledge of the channel; instead we employ the STAR receiver [22] to estimate the channel. Simulations show that our new MUD consistently offers a gain of at least 3 dB over SUD based on maximal ratio combining (MRC) for QPSK and as much as 6.5-8.1 dB for 16 QAM. Our solution demonstrates a linear growth in Erlang capacities with the number of receiving antennas.
The main contributions of this paper are as follows.Most importantly, we propose a new solution for DL MIMO MUD in CDMA-based systems.We present the concepts of VIR and DACCA to allow for effective operation of DLISR and to reduce the complexity at the receiver significantly.Finally, we propose an RNS to generate realistic realizations of the interference in the DL MIMO system.
The paper is organized as follows.We present our linklevel signal model in Section 2. In Section 3, we derive DLISR and introduce DACCA and VIR.The RNS is presented in Section 4. Then our system-level simulation results are presented in Section 5. Finally, our conclusions are given in Section 6.
LINK-LEVEL SIGNAL MODEL
In this section, we discuss the link-level signal model and discuss briefly basic estimation issues.The radio-network model, which is important for the quality of our simulation results, is presented later in Section 4. Section 2.1 presents an overview of the MIMO model, Section 2.2 provides the mathematical model of the signals, and finally, Section 2.3 considers estimation of the basic parameters.
Overview of the MIMO model
We consider a DL MIMO CDMA system as illustrated in Figure 2. Let (u, v) denote the user with index u = 1, . . ., U v connected to the cell with index v = 1, . . ., N CELLS .We define a cell as one site sector, that is, a three-sector site has three cells.U v is the number of users connected to the cell with index v and N CELLS is the number of cells considered.
(Radio-network layer: topology, site design, traffic, blocking, dynamic range.)
Let b_enc^(u,v)(t) represent a BPSK stream of encoded information bits. The encoded data bits are modulated according to the modulation scheme (we consider QPSK and 16-QAM in this paper) and scaled by the desired transmit amplitude ψ^(u,v)(t). The stream of modulated channel symbols is switched to one of N_G groups such that the user (u, v) is assigned to the group g^(u,v). The modulated symbols are then spread by a user-specific channelization code, increasing the rate by the SF, L = T/T_c, where T is the time duration of one modulated symbol and T_c is the chip duration. The channelization code is defined as c_ch^(u,v)(t) = c^(g(u,v), i(u,v))(t), where i^(u,v) is the index to one of the codes of the group. The assignment of groups and channelization codes is discussed below. We add a pilot unique to each group, scaled by the desired pilot amplitude, where (ψ_π^v(t))^2 is the desired pilot power and c_π^(v,g)(t) is a PN code unique to the group. Finally, the cell-specific scrambling code c_sc^v(t) is applied to yield the group-specific signal G_g(t), g = 1, . . ., N_G. The N_G groups of signals, organized in a vector, are transmitted over the channel H^v(t) and received by the mobile unit with M_R antennas. If M has full rank, the groups are mapped orthogonally in space onto the transmitting antennas. Orthogonal spatial mapping is possible as long as the condition M_T ≥ N_G is satisfied. In this paper, we assume that M_T = N_G, and therefore the Hadamard matrix is useful. The Hadamard matrix ensures both orthogonal transmission in space and equal distribution of power between the transmitting antennas. If a different delay D_m is employed at each transmitting antenna, we obtain time diversity. This may be attractive in low-diversity situations, but in a typical multipath channel, possibly with multiple receive antennas, sufficient diversity is available, and extra time diversity may degrade performance because channel identification is made more difficult [31] (see also footnote 16). In our simulations, we consider multipath mostly with antenna-diversity reception, and therefore we have used D_m = 0. Simulations (not shown herein) have demonstrated that using different antenna delays generally results in the same or slightly worse performance when multipath propagation is considered.
We now return to the concepts of grouping and channelization-code design. Channelization codes are grouped into N_G groups with L codes in each group. The purpose of grouping is to allow for user capacities beyond the SF. Each group contains channelization codes unique to the group. Codes are correlated between groups but mutually uncorrelated within groups. The spatial mapping M serves to separate the groups further by assigning orthogonal spatial signatures at transmission. Users are assigned a group and channelization-code pair (g^(u,v), i^(u,v)) on a first-come first-served basis in the following order: (g, i) = (1, 1), (1, 2), . . ., (1, L), (2, 1), . . ., (N_G, L). Let G_g denote the set of channelization codes in group g. By judicious definition of the code groups, intragroup (preferably orthogonal) as well as intergroup correlations are controlled. It is noteworthy that since the same scrambling code is used across groups, cross-correlation properties, once set by proper choice of the channelization code sets, are preserved after scrambling. As an example, we consider two groups of SF = 4 channelization codes whose intragroup correlations are zero for both groups and whose intergroup correlations are always −6 dB (relative). Using these code groups as a baseline, we can easily derive an OVSF tree for both groups (see [29]). It is easy to show that intergroup correlations decrease with higher SFs. For SFs lower than four, some code pairs have nonzero correlation; lower SFs must therefore be employed in practice with extra coordination between groups. In this example, the two code groups have been rotated by 45° with respect to each other.
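The code listing itself is not reproduced above, but the stated properties (orthogonal codes within each group, a constant −6 dB correlation between groups) can be checked with a small sketch. The per-chip rotation d used below is our own choice of a second group that satisfies those properties; it is not necessarily the construction used in the paper.

```python
import numpy as np

# Group 1: the four SF=4 Walsh codes (rows of the 4x4 Hadamard matrix).
H2 = np.array([[1, 1], [1, -1]])
G1 = np.kron(H2, H2).astype(complex)           # shape (4, 4), chips are +/-1

# Group 2 (assumed construction): the same Walsh codes with a fixed per-chip
# phase rotation.  This choice reproduces the stated properties: zero
# intragroup correlation and a constant -6 dB intergroup correlation.
d = np.array([1, 1, 1j, -1j])
G2 = G1 * d                                    # row-wise chip rotation

def xcorr_db(a, b):
    """Normalized cross-correlation power between two codes, in dB."""
    rho = np.vdot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 10 * np.log10(np.abs(rho) ** 2 + 1e-30)

# Codes within each group are orthogonal; every intergroup pair is -6.02 dB.
for a in G1:
    print([round(float(xcorr_db(a, b)), 2) for b in G2])
```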
Multiuser multicell downlink signal model
We now present a mathematical formulation of the received signal. A useful diagram is shown in Figure 3. We consider the DL of a cellular CDMA system, where the mobile is equipped with an antenna array of M_R sensors. At time t, the observation vector received at the antenna array of the mobile terminal can be written as X(t) = Σ_{v=1}^{N_CELLS} X^v(t) + N(t), where X^v(t) = Σ_{u=1}^{U_v} X^(u,v)(t) + Σ_{g=1}^{N_G} X_π^(g,v)(t) is the signal arriving from the vth cell, X^(u,v)(t) is the contribution from the (u, v)th user, X_π^(g,v)(t) is the pilot signal of the gth group of the vth cell, and N(t) is the thermal noise, assumed to be uncorrelated additive white Gaussian noise (AWGN).
The contribution of the (u, v)th user, X^(u,v)(t), to the received signal X(t) is obtained by passing the user's transmitted signal through the channel, where H_m^v(t), m = 1, . . ., M_T, is the M_R-dimensional channel vector from the mth transmitting antenna to the receiving antenna array with M_R sensors, and A_m^(u,v)(t), m = 1, . . ., M_T, is the contribution of the (u, v)th user to the signal transmitted at the mth antenna. Each dimension corresponds to one transmit antenna. In the total transmitted signal arriving from the (u, v)th user, (ψ^(u,v)(t))^2 is the power, c^(u,v)(t) = c_ch^(u,v)(t) c_sc^(u,v)(t) is the spreading code (channelization code plus scrambling code), and b^(u,v)(t) denotes the modulated symbols. For lack of space, we do not detail the contribution of the pilots to the received signal, but it follows the pattern of (4), (5), and (6) by replacing X^(u,v) with the corresponding pilot quantities. We adopt the common assumption that the channel response can be modeled as a tapped delay line with Rayleigh-faded tap gains [32]. The M_R-dimensional channel response vector H_m^v(t), m = 1, . . ., M_T, from the transmitting cell to the mobile unit with M_R antenna elements is therefore a sum of P delayed paths (Eq. (7)), where δ(t) is the Dirac delta function and τ_p^v(t) ∈ [0, T) are the multipath time delays for p = 1, . . ., P. Note that the physical path delays are the same for all receiving antennas, but delay differences may optionally be imposed at transmission.
In this channel model, the unit-norm propagation vector collects the per-antenna gains of each path, ε_p^v(t)^2, p = 1, . . ., P, are the power fractions along each path such that Σ_{p=1}^P ε_p^v(t)^2 = 1, D_m is an additional transmit delay associated with each transmit antenna, and L_LOSS is the path loss. In practice, L_LOSS is largely compensated by power control, and we therefore fix it to unity in what follows. Note that this implies that the expected gain of H_m^v(t) is one (by definition). At reception, the M_R-dimensional received signal is first filtered by the pulse-matched filter, then sampled and framed into observation vectors containing Q consecutive symbols of the desired user (in reality, the signal is first down-converted). We define the preprocessing step through a framing function that maps the received signal into an M_R(QL + L_Δ) × 1 observation vector (see [30] for more details), where L_Δ is an extra margin to account for the delay spread, φ(t) is the square-root raised-cosine shaping pulse, and a is an offset that guarantees that the targeted symbols nQ + k, k = 0, . . ., Q − 1, occur within the duration of the observation frame. Without loss of generality, we set a = 0 in what follows. With this definition, we can define the preprocessed observation Y_n, in which Y_n^(u,v) is understood as the contribution of the (u, v)th user to the nth observation. It is useful to decompose this contribution over the symbols of the frame, where Y_{k,n}^(u,v) is understood as the signature of the (nQ + k)th symbol. We next define the user (d, v_d) as the desired user (v_d denotes the best server of user d) and let g_d denote the group to which the user is assigned. We then isolate the desired signal and pilot in (10), with contributions of the form b_{nQ+k}^(d,v_d) ψ_n^(d,v_d) Y_{k,n}^(d,v_d), from intersymbol interference (ISI) and in-cell/out-cell MAI, as in (12) and (13) with reference to (11).
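As a concrete illustration of the channel assumptions above (P Rayleigh-faded paths with random delays, power fractions summing to one, and a unit expected gain), a minimal simulation sketch might look as follows; equal power fractions and chip-spaced delays are simplifying assumptions of ours.

```python
import numpy as np

def rayleigh_multipath_channel(M_R, P, n_taps, rng):
    """One realization of an assumed P-path Rayleigh channel for M_R antennas.

    Returns an (M_R, n_taps) discrete impulse response: each path gets a
    random chip-spaced delay, an i.i.d. complex Gaussian (Rayleigh-envelope)
    gain per receive antenna, and equal average power fractions eps_p^2 = 1/P
    so that the expected total channel gain is one.
    """
    h = np.zeros((M_R, n_taps), dtype=complex)
    delays = rng.choice(n_taps, size=P, replace=False)   # path delays (in chips)
    eps2 = np.full(P, 1.0 / P)                           # power fractions, sum to 1
    for p, d in enumerate(delays):
        # unit-variance complex Gaussian gains -> Rayleigh-distributed envelopes
        g = (rng.standard_normal(M_R) + 1j * rng.standard_normal(M_R)) / np.sqrt(2)
        h[:, d] += np.sqrt(eps2[p]) * g
    return h

rng = np.random.default_rng(0)
h = rayleigh_multipath_channel(M_R=2, P=3, n_taps=10, rng=rng)
print(np.round(np.abs(h), 3))   # average total power is ~1 over many realizations
```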
Basic parameter estimation principles
In our simulations, we estimate every parameter as needed with no prior information assumed known to the receiver.
To estimate the multipath delays and the multipath gains, we employ a variant of the STAR receiver [22], as discussed in Section 2.3.1. MRC data detection (used by the SUD considered herein), power estimation, and signal-to-interference-plus-noise ratio (SINR) estimation for PC are then discussed in Sections 2.3.2, 2.3.3, and 2.3.4, respectively.
STAR: the spatio-temporal array-receiver
We employ a variant of the STAR receiver [22] which mainly differs in the despreading operation.Instead of using the code of the desired user for despreading, we employ a more generalized code for despreading.We consider multicodes to represent one cooperative code for despreading, which is a combination of concatenating codes in time (i.e., consecutive symbols by data remodulation) and combining over channels.For the channel of the desired user, we combine the pilot code with the data remodulated spreading code over Q consecutive symbols.For other channels, we employ only the pilot for channel identification with STAR.
MRC beamforming and data detection
The signal component of the (nQ + k)th symbol contains sufficient statistics for the estimation of both data and power. The signal component can be estimated by MRC, which is optimal in white noise. With reference to (12), the MRC combiner for the kth symbol of user (u, v) is formed from the estimated signature, and the signal component is then estimated by combining the observation with it. A beamformer for the pilots can be defined accordingly. Note that we use the term beamformer because W_{MRC,k,n} works in both space and time. The transmitted symbol is estimated as the symbol in the signal constellation that is closest to the combined output normalized by the estimate ψ̂_n^(u,v) (Section 2.3.3).
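Since the combiner equations are not reproduced above, the following sketch shows one standard way the MRC detection step can be realized; the normalization and the division by the amplitude estimate are assumed forms consistent with the description, and the names are ours.

```python
import numpy as np

def mrc_detect(Y, Y_hat, constellation, psi_hat):
    """Minimal MRC detection sketch for one symbol.

    Y            : received space-time observation vector
    Y_hat        : estimated signature of the desired symbol
    constellation: candidate symbols, e.g. QPSK points
    psi_hat      : estimated amplitude of the user
    """
    # MRC combiner: match to the estimated signature and normalize its energy.
    w = Y_hat / np.vdot(Y_hat, Y_hat).real
    s_hat = np.vdot(w, Y)                        # estimated signal component
    # Decide the constellation symbol closest to s_hat / psi_hat.
    b_hat = constellation[np.argmin(np.abs(constellation - s_hat / psi_hat))]
    return s_hat, b_hat

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
```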
Power estimation
We consider two different power estimators. The first estimator smooths the amplitude recursively with a forgetting factor α; the power estimate is then found by squaring the amplitude estimate. The second estimator estimates the power directly by smoothing the squared signal component. The latter is biased because it effectively estimates the combined signal and interference-plus-noise power. The estimator in (16) has less bias and is more accurate because the filtering appears before the squaring, but it requires that the decision feedback (DF) be decent. The estimator of (17) is useful to estimate the power of the interference (where decision feedback is difficult), whereas the estimator of (16) is used for the desired pilot and data signal.
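A minimal sketch of the two smoothing recursions, under the assumption that both are first-order filters with forgetting factor α (the exact recursions of (16) and (17) are not reproduced above):

```python
def power_estimates(s_hat, alpha=0.99):
    """Sketch of the two power estimators (assumed first-order recursions).

    s_hat : sequence of estimated (decision-feedback) signal components
    alpha : forgetting factor
    Returns (power from the smoothed amplitude, directly smoothed power).
    """
    amp, pwr = 0.0, 0.0
    for s in s_hat:
        # Estimator 1: smooth the amplitude first, square afterwards
        # (less biased, but relies on decent decision feedback).
        amp = alpha * amp + (1 - alpha) * abs(s)
        # Estimator 2: smooth |s|^2 directly (biased upwards, because the
        # interference-plus-noise power is averaged in as well).
        pwr = alpha * pwr + (1 - alpha) * abs(s) ** 2
    return amp ** 2, pwr
```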
SINR estimation
The PC command is determined by comparing the SINR estimate at the receiver with the target SINR. We estimate the SINR as the ratio of the desired-user power estimate to the post-combined noise estimate, where the power estimate (ψ̂_n^(d,v_d))^2 results from (16) and (σ̂_n^(d,v_d))^2 is an estimator of the post-combined noise, obtained by estimating the total received power (of all users) after combining and then subtracting the estimated power of the desired user.
DOWNLINK INTERFERENCE SUBSPACE REJECTION
Our main contribution is a new efficient and cost-effective MUD solution for DL MIMO, DLISR.DLISR is based on ISR previously presented for UL systems [30].It incorporates new variants of ISR modes which are specially suited for the more problematic DL case.In particular, DLISR employs VIR, which involves rejection of virtual users instead of physical users.VIR has many benefits especially when it is combined with DACCA.Neither VIR nor DACCA are indispensable for DLISR; however, capacity gains and especially complexity reductions are achieved when combined.We next review ISR in Section 3.1.Then we define DACCA and VIR and introduce DLISR.Finally, we discuss the attractive complexity features of our new solutions.
Review of ISR
In this section, we provide an overview of ISR. For a more complete picture, see [30]. The basic ISR recipe is to form a constraint matrix Ĉ whose columns span the estimated interference subspace. In a second step, the observation is mapped away from the interference subspace spanned by Ĉ by constrained spatio-temporal projection; thereby, MAI and ISI are reduced significantly. The desired signal can then be estimated by conventional beamforming, for example, MRC. (We use the term beamforming because our solution works in space and time; the term filter-combiner could equally well be used.)
The projection and combining steps can also be carried out in a single beamforming step. The ISR beamformer W_{k,n}^(d,v_d), k = 0, . . ., Q − 1, is built from the projector Π_n = I_{N_T} − Ĉ_n(Ĉ_n^H Ĉ_n)^{-1} Ĉ_n^H, where I_{N_T} denotes an N_T × N_T identity matrix and N_T = M_R(QL + L_Δ) is the total space-time dimension. First, we form the projector Π_n orthogonal to the constraint matrix Ĉ_n. Second, we project the estimated response vector and normalize it to yield the ISR beamformer W_{k,n}^(d,v_d).
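The following sketch illustrates the projection-plus-combining step with the standard orthogonal projector; the function and variable names are ours, and the normalization is an assumed form consistent with the description above.

```python
import numpy as np

def isr_beamform(Y, C_hat, Y_hat_d):
    """Sketch of one ISR combining step (standard orthogonal projection).

    Y       : (N_T,) preprocessed observation vector
    C_hat   : (N_T, N_c) constraint matrix spanning the estimated
              interference subspace
    Y_hat_d : (N_T,) estimated response (signature) of the desired symbol
    """
    N_T = Y.shape[0]
    # Projector orthogonal to the span of C_hat.
    Q = np.linalg.pinv(C_hat.conj().T @ C_hat)
    Pi = np.eye(N_T) - C_hat @ Q @ C_hat.conj().T
    # Project the desired signature and normalize to form the ISR beamformer.
    w = Pi @ Y_hat_d
    w = w / np.vdot(Y_hat_d, w)          # unit response to the desired signature
    # Interference-rejected observation and combined output.
    Y_pi = Pi @ Y
    return np.vdot(w, Y), Y_pi
```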
ISR modes
The ISR modes differ in the construction of the constraint matrix.Table 1 defines the constraint matrix of each mode when considering only MAI rejection and a pedagogical illustration is provided in Figure 4 which links the modes to the composition of the constraint matrix.In the table, NI denotes the number of interfering signals to be rejected, and i is the index to a subset of MAI signals which we strive to reject.Note that for simplicity, Table 1 defines the composition of the constraint matrix when only MAI is rejected, but it is easily generalized to also incorporate ISI rejection by adding columns of the estimated ISI.Of the modes previously presented, three merit discussion here.
In the ISR-hypothesis mode (ISR-H), every symbol signature of the selected interfering users is rejected individually. This mode does not require DF. If the channel is known, the selected interfering users can be rejected perfectly, but the white noise is enhanced. ISR-H was found to perform poorly on the UL because of the large noise enhancement associated with the many constraints [30]. Its application to the DL, however, is more appealing due to the adverse near-far situations there, as we will see later.
In the ISR-realizations mode (ISR-R), we do not form a null constraint for each symbol signature of each interfering user. Instead, we reconstruct the sequence of symbols over the duration of the observation frame. The R mode therefore requires DF. These decisions are obtained from MRC-based decisions (Section 2.3.2). The number of constraints is reduced with ISR-R, giving less white-noise enhancement at the cost of reduced near-far resistance.
In the ISR-total realization (ISR-TR) mode, we reconstruct interference using DF as in the R mode, then we add the reconstructed interfering users, scaled by their estimated amplitudes, to form one constraint only. ISR-TR, in addition to DF, also requires power estimates (Section 2.3.3). The TR mode has negligible white-noise enhancement but also the worst near-far resistance.
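To make the difference between the modes concrete, the sketch below assembles the constraint matrix for MAI rejection only, along the lines of Table 1; the array layout and names are ours, and ISI constraints are omitted.

```python
import numpy as np

def build_constraints(sig, b_df, psi_hat, mode):
    """Illustrative constraint-matrix construction for the three ISR modes.

    sig     : (N_T, NI, Q) estimated symbol signatures, one per interferer
              and per symbol of the frame
    b_df    : (NI, Q) decision-feedback symbol estimates (R and TR modes)
    psi_hat : (NI,) estimated amplitudes (TR mode)
    """
    N_T, NI, Q = sig.shape
    if mode == "H":       # one column per symbol signature -> NI*Q constraints
        C = sig.reshape(N_T, NI * Q)
    elif mode == "R":     # one reconstructed column per interferer -> NI constraints
        C = np.stack([sig[:, i, :] @ b_df[i] for i in range(NI)], axis=1)
    elif mode == "TR":    # everything summed into a single constraint
        C = sum(psi_hat[i] * sig[:, i, :] @ b_df[i] for i in range(NI))
        C = C.reshape(N_T, 1)
    return C / np.linalg.norm(C, axis=0, keepdims=True)   # normalize columns
```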
Before we introduce the proposed application of ISR to the DL (DLISR) in Section 3.4, we will present DACCA and VIR in Sections 3.2 and 3.3, respectively.
DACCA
We propose a strategy for channelization-code allocation of user data channels at the base station, which we denote DACCA. With DACCA, the base station dynamically reassigns channelization codes to the users at a low rate, with the aim of concentrating energy in the left-hand side of the OVSF tree. We propose a simple metric for code assignment: the product of each user's output power and SF, denoted the power-SF product (PSFP) in the following. DACCA is illustrated in Figure 5a. The aim is to fill the OVSF tree from left to right subject to the PSFP of the users. The desired outcome is a concentration of power at the left-hand side of the OVSF tree. Figure 6 shows the probabilistic origin of the interference for a random mobile in a network. The distributions were obtained with the aid of the RNS to be presented in Section 4 and correspond to a soft-blocking rate (SBR) (see Section 4.2) of 20%, a processing gain (PG) of 16, and an offered traffic of T_OFF = 4 Erl. In this paper, the PG is defined as the SF, L, multiplied by the number of receive antennas, that is, PG = M_R L. Otherwise, the assumptions specified in Section 5.2.1 apply. We observe that most of the interference is generated by just a few users. For example, 30% of the total interference arrives from the strongest in-cell interferer, and the sum of only two interferers accounts for almost half the interference. With DACCA, therefore, most of the interference power can be concentrated in a relatively small portion of the OVSF code space. It is the pronounced near-far situations on the DL that make DACCA especially interesting.
Dynamic code assignment and reassignment strategies have previously been considered in [33,34]. The goal in previous works was to reduce code blocking and limit the code reassignment rate. Instead, the purpose of DACCA is to provide the mobile with a priori knowledge of where to look for interference while at the same time concentrating the interference energy in a small portion of the OVSF tree. DACCA shares some similarities with the strategy denoted "leftmost" in [34], namely, users are assigned to the leftmost available code in the OVSF tree. DACCA imposes additional restrictions because it strives both to assign the leftmost codes and to achieve the best possible concentration of power at the left-hand side of the OVSF tree. Therefore, DACCA increases the probability of code blocking, and more frequent code reassignments must be performed by UTRAN (UMTS terrestrial radio access network). The need for frequent reassignment is satisfied by reassigning codes at a low rate of 75 Hz in our simulations. Regarding code blocking, previous results [34] indicate that a load (i.e., the number of OVSF codes in use divided by the SF) of 50% yields a code-blocking rate of less than 1%. Comparing this blocking with the loads we can achieve (see Section 5) and the SBR on the air interface, it is reasonable to deem code blocking a minor drawback of DACCA. Note that DACCA does not conflict with 3G standards because channelization codes can be allocated almost freely by UTRAN. Only the primary CPICH and the primary CCPCH have predefined channelization codes [29].
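A toy sketch of the PSFP ranking behind DACCA is given below. It deliberately ignores code blocking, the reassignment rate, and the fact that users with different SFs occupy different levels of the OVSF tree; the "slots" simply stand for left-to-right positions in the tree, and all names and numbers are illustrative.

```python
def dacca_assign(users, slots_left_to_right):
    """Order users by their power-SF product (PSFP) and fill the OVSF code
    space from the left, so that most of the interference power ends up
    under the leftmost low-SF virtual codes.

    users               : list of (user_id, tx_power, SF) tuples
    slots_left_to_right : tree positions ordered left to right
    """
    ranked = sorted(users, key=lambda u: u[1] * u[2], reverse=True)   # PSFP
    return {uid: slot for (uid, _, _), slot in zip(ranked, slots_left_to_right)}

# Example: the two highest-PSFP users land on the leftmost slots.
alloc = dacca_assign(
    [("u1", 0.5, 8), ("u2", 2.0, 8), ("u3", 0.2, 16), ("u4", 1.5, 16)],
    ["slot 1 (leftmost)", "slot 2", "slot 3", "slot 4"],
)
print(alloc)   # u4 (PSFP 24) and u2 (PSFP 16) take the leftmost slots
```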
Virtual interference rejection
VIR involves rejection of interference by targeting a channelization code with a low SF (the rejection SF (RSF)), even though no physical users may be assigned this code. VIR is particularly interesting in the context of OVSF trees [29]. The idea is to target one or more virtual channelization codes with a low RSF L_R and reject these codes as if they were physical users. The advantage is that any offspring (in the OVSF tree) of a rejected virtual code is also rejected; therefore, multiple interfering users are rejected by targeting only a few virtual channelization codes.
It is noteworthy that VIR targets the channelization codes. In practice, the channelization codes repeat with period L_R T_c and are scrambled by the scrambling code and filtered by the channel response. A mathematical formulation of VIR is provided in [35]; here we provide an example. Consider the segment of an OVSF tree starting at an SF of 8, shown in Figure 5b. Codes that are circled are in active use. Consider the virtual channelization code c_ch(8,1), marked with an "x." We reconstruct all required segments of c_ch(8,1), apply the appropriate scrambling code, and filter them by the estimated channel response. Then we reject all reconstructed segments. It follows that all descendants are rejected irrespective of their SF and modulation; that is, the interferer with SF = 16 assigned to code c_ch(16,1), the code with SF = 32 assigned to c_ch(32,3), and the one with SF = 64 assigned to c_ch(64,7) are all rejected. The code c_ch(64,8) is rejected although it is not active, and the code c_ch(16,3) is active but not rejected. Preferably, codes that are not active should not be rejected.
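The key OVSF property exploited by VIR, namely that every L_R-chip segment of a descendant code equals the targeted ancestor code up to a sign, can be verified with a short sketch (0-based row indices below correspond to the 1-based code indices used in the example):

```python
import numpy as np

def ovsf(sf):
    """Generate the OVSF code set of a given spreading factor (standard
    recursion: children of code c are [c, c] and [c, -c])."""
    codes = np.array([[1]])
    while codes.shape[1] < sf:
        nxt = []
        for c in codes:
            nxt.append(np.concatenate([c, c]))
            nxt.append(np.concatenate([c, -c]))
        codes = np.array(nxt)
    return codes

# Virtual code to reject: c_ch(8,1); one of its descendants: c_ch(32,3).
anc = ovsf(8)[0]           # 0-based row 0  <->  c_ch(8,1)
desc = ovsf(32)[2]         # 0-based row 2  <->  c_ch(32,3)
segments = desc.reshape(4, 8)

# Every 8-chip segment of the descendant is +/- the ancestor code, so nulling
# the (scrambled, channel-filtered) ancestor segments also nulls the
# descendant, whatever its SF and modulation.
print([int(seg @ anc) / 8 for seg in segments])   # prints +/-1 for each segment
```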
When VIR is combined with DACCA, cancelling the leftmost code at any RSF ideally causes the highest possible fraction of the interference to be rejected.The efficiency of VIR is, therefore, enhanced when DACCA is used.If DACCA is not employed, the RSF must be higher to minimize the number of rejected inactive codes.This will increase complexity significantly (see Section 3.5) and possibly degrade performance.
An idea similar to VIR was considered in [28]; however, the targeted SFs were very high SFs instead of very low SFs like in VIR.The idea there is that one interferer at a low SF is equivalent to numerous high SF virtual users.With VIR, the idea is opposite: one low SF code constitutes many interferers assigned to physical OVSF codes of higher SFs.
DLISR
Compared to the UL, DL MUD is characterized by a lack of information regarding the interference. A mobile generally has no knowledge of the interfering users' codes, modulation, connection type, or coding. This information is only available for the pilots and the desired signal. Therefore, the interference rejection is conveniently split into two steps: in the first step we remove the MAI, and in the second step we remove the ISI and the pilots, as shown in Figure 7. The TR mode has shown excellent performance in [30] with the lowest possible complexity. Therefore, the TR mode is well suited for application in the second step regardless of the solution applied in the first step. For lack of space, we disregard further details and focus on the more important first step in the following. Improved near-far resistant channel estimation [36] may be achieved by using the near-far resistant observation Y_{Π,n} = Π_n Y_n (see (20)), offered as an intermediate step according to Figure 7. It is therefore natural to use Y_{Π,n} for the purpose of channel identification because it is available without additional complexity. In the following, we present three variants of DLISR. Two variants are based on ISR-H and are denoted DLISR-H with fixed constraints (DLISR-H-FC) and DLISR-H with best constraints (DLISR-H-BC), respectively. The final variant is based on the R mode with soft decision and is denoted DLISR-R-SD. For the purpose of comparison, we also consider the PIC-SD. Important properties of the DLISR variants, PIC-SD, and MRC are summarized in Table 2.
DLISR-H-FC
DLISR-H-FC is the simplest of all variants. The idea is to blindly reject the same OVSF code subspace according to a fixed strategy. Obviously, this mode is relevant only when DACCA is employed. Whenever a virtual-user code is rejected, white noise is enhanced. It can be shown that if the spreading is real, the noise enhancement is approximately N_T/(N_T − N_const), where N_const is the total number of constraints associated with the N_c interfering signals to be rejected. (If we strive to reject a subspace of dimension N_const contained within the total dimension N_T, a fraction of the desired signal energy is rejected as well. It is reasonable to assume that this fraction is approximately (N_T − N_const)/N_T; therefore, the noise relative to the desired signal is enhanced by N_T/(N_T − N_const). A more accurate development of (22) will be shown in a later contribution.) The observation frame with dimension N_T = M_R(QL + L_Δ) (see (10)) spans (QL + L_Δ)/L_R segments of the targeted code with SF L_R. Due to asynchronism and multipath propagation, additional symbols will contribute at the edges. Assuming that the delay spread is insignificant, it follows that the number of constraints in (22) is N_c((QL + L_Δ)/L_R + 1). Using (22) and the probabilistic distribution of interference (see Figure 6), we can identify a solution that optimizes the trade-off between noise enhancement and interference reduction. Table 3 lists the relative reduction of interference and the noise enhancement for different strategies. The first row identifies the interferers rejected; for example, 2/1/0 means the two strongest in-cell virtual users plus the strongest out-of-cell user of the neighbor cell with the strongest pilot channel. In the second row, the noise enhancement is computed according to (22). The net gain peaks at 3.41 dB, suggesting that the best strategy is to reject 4 in-cell virtual users, 2 virtual users from the strongest neighbor, and one virtual user from the second strongest neighbor. In practice, the (fixed) strategy should be selected according to the highest load during the busy hour. This ensures optimal performance at peak load and satisfactory performance at lower loads.
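A small calculation sketch of this trade-off is given below; the reading of (22) as N_T/(N_T − constraints), the per-code constraint count, and all numerical values (including Q and L_Δ) are illustrative assumptions rather than figures from Table 3.

```python
import numpy as np

def noise_enhancement_db(N_c, Q=3, L=8, L_delta=8, L_R=8, M_R=1):
    """Approximate white-noise enhancement when N_c virtual codes are nulled
    (assumed reading of Eq. (22): NE = N_T / (N_T - constraints), with roughly
    (Q*L + L_delta)/L_R + 1 constraints per rejected virtual code)."""
    N_T = M_R * (Q * L + L_delta)
    n_constraints = N_c * ((Q * L + L_delta) / L_R + 1)
    return 10 * np.log10(N_T / (N_T - n_constraints))

def net_gain_db(interference_rejected_fraction, N_c, **kw):
    """Net gain = interference reduction minus noise enhancement (in dB)."""
    red_db = -10 * np.log10(1 - interference_rejected_fraction)
    return red_db - noise_enhancement_db(N_c, **kw)

# Hypothetical example: rejecting 3 virtual codes that carry 60% of the MAI.
print(round(net_gain_db(0.60, N_c=3), 2))
```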
DLISR-H-BC
In the DLISR-H-BC variant, we estimate the power in the virtual subspace of the serving cell and all cells in the neighbor list.The power is estimated subject to the RSF which may represent many virtual users.The best constraints are computed along the same lines as in Table 3, but the interference reduction is based on the estimated power and not the statistical mean.This version hence adapts easily to fast fading and will attempt to reject interference most efficiently.This strategy therefore ensures that we always follow an optimal rejection strategy, provided that the powers are estimated properly and the update is done frequently.DLISR-H-BC is more complex than DLISR-H-FC because it needs to probe the interference subspace and has to decide which constraints to reject for best performance.It can, however, work in the absence of DACCA although DACCA simplifies probing.In the absence of DACCA, interference is not generally concentrated at a low SF virtual code; it may therefore be necessary to probe the OVSF tree at higher RSF levels.This increases complexity and reduces the accuracy of probing because a few strong sources can be estimated more reliably than many weak sources.
DLISR-R-SD
In this variant, we reconstruct the virtual users using soft decision.Working at a low RSF, the N v OVSF virtual codes which contain most power are selected.These codes are reconstructed as virtual users' signals, and soft decision estimates based on MRC estimation are used.Note that hard decision FB is not usually an option on the DL and the fact that one virtual code is the contribution of many physical interfering users makes hard decision even more complicated.
PIC-SD
As a benchmark, we consider the PIC [16,17] with SD FB, and denote it by PIC-SD.We follow the same steps as for DLISR-R-SD; but the reconstructed interference is subtracted instead of nulled.Obviously, PIC-SD, like DLISR, takes advantage of both VIR and DACCA to improve performance and lower complexity.
Computational complexity of DLISR
We provide complexity estimates in Table 4 assuming VIR with an RSF of 8 for all DLISR variants, PIC-SD, and MRC.We have also listed results for an RSF of 16 and 32, respectively.We have detailed the most demanding tasks and a margin of 40% has been added to account for all other operations not listed.We assume that RSF/2 virtual codes are rejected and that three cells are actively monitored.Complexity is specified in Mops, where one operation is defined as a complex multiply-add.The numbers are appropriate for M R = 1.Roughly speaking, complexity is invariant to the SF of the desired user, and grows linearly with the number of receiving antennas.The results for RSF = 8 relate to the situation where DACCA is employed (as in our later simulations).When DACCA is not employed, an RSF of 8 is too low.We simulated the leftmost and random code-allocation schemes, for which details are omitted for lack of space, and found that an RSF of about 32 must be employed if the leftmost strategy is used instead of DACCA, and even higher RSF must be employed if random code allocation is employed.
The complexity of the matrix inversion is very modest. For the R-variant, it is negligible because the dimension is only 4 (with RSF = 8). The H-variants have higher complexities associated with the inversion, but, although not evident, there are huge savings because the matrix to be inverted is band diagonal as a result of VIR (the low-RSF approach). PIC-SD does not require matrix inversion and therefore has a complexity advantage over DLISR, which, however, vanishes for low RSFs.
When VIR and DACCA are employed, the complexity of our solution is moderate.Our MUD solutions require from about 1.1 to 1.7 Gops.Today's high-end signal processors offer speeds of more than 10 Gops.A requirement of 1.1-1.7 Gops is therefore reasonable for a mobile terminal application where cost and power consumption must be kept low.The feasibility becomes even more evident when compared with SUD (STAR-MRC); our solution requires only about 2.5-4 times the complexity of SUD.Note that our SUD candidate, STAR [22] with MRC, is comparable in complexity to the RAKE [37], which is used in current implementations.DLISR-R-SD is less complex than the H-variants but the difference is only about 50% which is considered unimportant.
If DACCA is not employed, VIR is still applicable (and should be used!), but it must target a higher RSF. This exacts a significant complexity increase of about 4-8 times when comparing at RSF = 32, which, as argued, is a good choice when DACCA is not employed. The complexity of the R-variant is then four times less than that of the H-variants. Higher RSFs therefore strongly favor DLISR-R-SD.
RADIO-NETWORK SIMULATOR
The purpose of the RNS is to provide a realistic picture of the distribution of the users and how they interfere with each other.This information is then used for the link-level simulations.
The RNS starts by uniformly populating users in a homogeneous cell grid which we name the test network.Using propagation estimates, it iteratively blocks users either due to coverage or interference limitations.Once the network arrives at a stable condition, the RNS outputs the realized interference.A stable condition is characterized as one where all users can achieve the required SINR without being blocked (i.e., without exceeding the maximum power offered by the base station cell).First, we provide a mathematical formulation in Section 4.1.Then we outline the algorithm in Section 4.2.
Network-level signal model
The mobile unit always strives to achieve a certain SINR which is sufficient to provide a certain QoS.If the serving cell is not able to supply the power required by a mobile, the mobile is blocked.Below we define the link budget which is useful to assess the SINR at the target mobile subject to transmitted power, propagation loss, interference, and so forth.First, we briefly discuss the propagation model which is essential to the later considerations.
Propagation model
We consider a simplified form of the Okumura-Hata propagation model [38,39], in which the path loss L_PATH(u, v) grows with 10 K_P log10(dist(u, v)) plus an offset L_0 and a log-normal fading term, where K_P is the propagation exponent (typically 3.5-4 for urban environments), L_0 is an offset which relates to the morphology, u = 1, . . ., N_U is the user index (N_U being the total number of users in the network attempting a call), v = 1, . . ., N_CELLS is the cell index (N_CELLS being the total number of cells), and dist(u, v) is the distance between the mobile and the cell. Finally, Γ_LNF models the log-normal fading (LNF) and is assumed to be a normally distributed random variable, that is, Γ_LNF ∈ N{0, σ²_LNF}. Note that the variables u, v, and N_CELLS are by definition different from the u, v, and N_CELLS first introduced in Section 2.1. Considering that signals arriving from the same spatial direction will experience similar LNF, we introduce a location-dependent model of the LNF built from Θ, the angle between the mobile and the cell, and from X_LNF and Y_LNF, independent zero-mean Gaussian random variables with variance σ²_LNF.
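A sketch of this propagation model follows. The angular combination of X_LNF and Y_LNF is an assumed form chosen only because it preserves the variance σ²_LNF and correlates the shadowing of co-directional signals, as required above; the numerical constants are illustrative.

```python
import numpy as np

def path_loss_db(dist_m, theta, x_lnf, y_lnf, K_P=3.8, L_0=15.0):
    """Sketch of the simplified Okumura-Hata loss with location-dependent
    log-normal fading.  X*cos(theta) + Y*sin(theta) is an assumed combination:
    it keeps the variance at sigma_LNF^2 and makes signals arriving from
    similar directions experience similar shadowing.  K_P and L_0 are
    illustrative values only."""
    lnf = x_lnf * np.cos(theta) + y_lnf * np.sin(theta)
    return L_0 + 10 * K_P * np.log10(dist_m) + lnf

rng = np.random.default_rng(1)
sigma_lnf = 8.0                                   # dB, a typical urban value
x, y = rng.normal(0, sigma_lnf, size=2)           # drawn once per mobile
print(round(path_loss_db(400.0, np.deg2rad(30), x, y), 1))
```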
Generic multicell multiuser link budgets
We define the set G_B, which contains the indices of all blocked mobiles. If the mobile (u, v) is not blocked (i.e., (u, v) ∉ G_B), the link budget relates S, the signal strength at the input of the receiving antenna, to P_OUT, the power fed to the transmitting antenna, G_T and G_R, the gains of the transmitting and receiving antennas, respectively, Δ_MARG, which accounts for additional engineering margins (e.g., the PC margin), and L_PATH, the path loss between the serving cell and the user equipment defined in (23). We assume that the mobile antenna gain is independent of its location, and G_R is therefore location independent. Let γ_REQ specify the required SINR in dB for a specified QoS. We assume that the required target value is the same for all mobiles. Assume that v_u is the serving cell of the uth mobile; then the required signal power at the input to the receiver, say S_REQ(u, v_u), is given in dB by adding γ_REQ to the total noise-plus-interference power, where N is the user-independent thermal noise power and I(u) is the total MAI received at mobile u, defined in (29). We use the # sign to differentiate a physical value from its dB equivalent. We next combine (26) and (27) to find the required transmitted power, and can then define the received interference as in (29), where K_ORTH is the orthogonality factor, a measure of the orthogonality loss due to multipath propagation (a typical value is 2 dB), and PG = L M_R is the PG. We define the best server v_u of the mobile with index u as the serving cell v which requires the lowest output power to satisfy the SINR target (30). This definition suggests that we consider the best server as the cell with the strongest signal, since the interference is assumed zero. In handover situations, this may not be true because of handover hysteresis, congestion, or load balancing. We also note that the best server, according to the definition used, may not be the best choice because of orthogonal transmission, which implies that a weaker server may occasionally have a better effective SINR. If the best server cannot supply the required power, the user is blocked. Blocking occurs either due to coverage blocking (excessive path loss) or interference blocking (excessive interference). The size of the test network can be limited by using wraparound to mitigate the edge effect. To implement wraparound, we place nine virtual images of the test network in all directions (south, south-east, east, and so forth). In the computations, the image of a cell which gives the strongest signal is chosen. For instance, to compute the required power in (28), we compute it for both the target cell and all nine of its replicas, and then select the replica which gives the highest signal strength. Simulations not documented here support the efficiency of network wrapping, which allows for the use of test networks smaller than 25 sites (a 5-by-5 grid).
RNS algorithm
The object of the algorithm is to locate mobiles in the network so that all nonblocked mobiles experience satisfactory SINR.The algorithm estimates this by uniformly distributing N CELLS T OFF users, where T OFF is the offered traffic.Then it blocks users until a stable solution is found.The RNS algorithm is illustrated by the flowchart in Figure 8.We identify a cell near the center of the grid as the target cell.Assume that the noise floor N and the maximum output power P MAX have been defined.Initially, the sets of blocked mobiles are empty, that is, S CB = ∅ (coverage) and S IB = ∅ (interference).
(1) Distribute N CELLS cells on a map in a hexagonal grid.
(2) Randomly populate the test network with N_CELLS T_OFF users.
(3) Compute, for every mobile-cell pair, the power required for service (see (28)).
(4) Identify the best server for every user (see (30)).
(5) Users with P_REQ > P_MAX, where P_MAX is the maximum output power which can be assigned to any individual user, are deemed coverage blocked and are added to the set S_CB. The fraction of users blocked estimates the coverage blocking probability, that is, Pr_CB = size{S_CB}/(N_CELLS T_OFF).
(6) Compute the total received interference for all remaining users (see (29)).
(7) Compute the required output power for all remaining users (see (28)).
(8) Block users which have P_OUT > P_MAX and add them to the set of interference-blocked users S_IB.
(9) If all users that are not blocked can achieve the required SINR, stop; otherwise go to step 7.
We note that the noise floor and the maximum output powers are chosen arbitrarily. By appropriate choices, we can target any desired coverage blocking. Pr_IB = size{S_IB}/(N_CELLS T_OFF) estimates the probability of interference blocking. An estimate of the total SBR is then Pr_B = Pr_CB + Pr_IB.
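The following sketch shows the structure of this iterative blocking loop (steps 6-9). It is a simplification: margins, the orthogonality factor, and the exclusion of a user's own signal from its interference are omitted, and the variable names are ours.

```python
import numpy as np

def rns_blocking_loop(L_path_db, gamma_req_db, noise_dbm, p_max_dbm, pg_db):
    """Structural sketch of the RNS loop, not the exact link budget.

    L_path_db : (N_users, N_cells) path-loss matrix in dB
    Returns the boolean blocking vector and the final required output powers.
    """
    n_users, _ = L_path_db.shape
    best = L_path_db.argmin(axis=1)                 # best server per user
    blocked = np.zeros(n_users, dtype=bool)
    interf_mw = np.zeros(n_users)                   # received MAI in mW
    while True:
        s_req_dbm = gamma_req_db + 10 * np.log10(10 ** (noise_dbm / 10) + interf_mw)
        p_out_dbm = s_req_dbm + L_path_db[np.arange(n_users), best]
        newly = ~blocked & (p_out_dbm > p_max_dbm)
        if not newly.any():
            return blocked, p_out_dbm
        blocked |= newly
        # Total power transmitted by each cell for its remaining users ...
        cell_mw = np.zeros(L_path_db.shape[1])
        np.add.at(cell_mw, best[~blocked], 10 ** (p_out_dbm[~blocked] / 10))
        # ... received by every user and reduced by the processing gain.
        rx_mw = cell_mw[None, :] / 10 ** (L_path_db / 10)
        interf_mw = rx_mw.sum(axis=1) / 10 ** (pg_db / 10)
```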
From link-level to system-level results
The simulation model consists of the RNS (Section 4) and the link-level simulator, as shown in Figure 1. The RNS provides realistic realizations of the radio network, and the link-level simulator uses this information for BER assessments. We note that a network realization from the RNS results in the same target SINR for all mobiles; however, it is the distribution of the interference which is particularly important on the DL. For a given average offered Erlang traffic, the actual carried traffic is determined by the radio-network SBR (coverage + interference blocking). The SBR is related to the average SINR of the mobiles. This is illustrated in Figure 9, which depicts the SBR as a function of the required SINR. The coverage blocking was fixed at 10% by adjusting the maximal output power P_MAX. Each SBR estimate is based on 37500 observations, with the conditions otherwise as stated in Section 5.2.1. Each curve corresponds to the offered traffic level specified in the legend. For a given carried traffic and SBR, the SINR from these curves dictates the PC target SINR which must be used by the link-level simulator. For instance, if we target an SBR of 20% and 4 Erlangs of traffic, the SINR target is 4.5 dB.
RNS simulations setup
We have considered a homogeneous hexagonal grid of 5 by 5 sites. Wrapping has been used to mitigate the edge effect. The sites have 3 sectors with pointing directions of 0°, 120°, and 240° azimuth, respectively. The antennas are 20 m high and the site-to-site distance is 250√5 m. We use the vertical/horizontal antenna patterns of Kathrein Werke KG, type number 742212, 1950 MHz antennas with 6° electrical tilt (we tested antenna tilts in the range of 0° to 8° and selected 6° because it provided the highest coverage degree for the chosen site-to-site distance and antenna heights). The orthogonality factor is assumed to be 2.2 dB. We dedicate 10% of the average output power to the CPICH. Coverage blocking has been fixed at 10%. We consider high data rates herein; therefore, the coverage blocking is chosen to be moderately high. Table 5 summarizes the settings otherwise used.
Link-level simulations setup
In the link-level simulations, we attempted to approximate the specifications for WCDMA [2]. We have considered low-SF operation and high-order modulation schemes (e.g., HSDPA [3,4]). We use the interference realizations and pilot powers given by the RNS as inputs to the link-level simulator. We explicitly generate signals from the serving cell and the three strongest neighbors, whereas the interference from the remaining cells is modeled as AWGN. In all our simulations, we consider an SF of L = 8, which corresponds to a coded information rate of 480 kbps with QPSK or 960 kbps with 16 QAM when rate-1/2 coding is assumed. The channel is Rayleigh fading [32] with chip-rate-normalized Doppler f_D/R_c, where R_c = 3.86 Mcps is the chip rate, and we consider frequency-selective fading with P = 3 equal-strength propagation paths with random delays and interpath delays limited to 10 chips. We consider both SISO and MIMO systems. For the desired user, we implement PC with a PC correction factor ΔP_PC updated at a rate of 1500 Hz. The PC message is determined by comparing the estimated SINR (see Section 2.3.4) to the target SINR (coordinated with the RNS). We further impose a transmission delay of D_PC = 1/(1600 Hz) = 0.625 millisecond and a simulated error rate on the PC bit of BER_PC = 10%. Modeling closed-loop PC for all users is costly, and we have therefore used a simplified model for the interfering users, as illustrated in Figure 10. The signal from the unit power source is first scaled by the PC feedback to yield the transmitted power P_TX^(u,v). The transmitted power is attenuated by the channel, and then a Gaussian random variable with variance 0.25 is added to model practical estimation errors in the receiver. This signal is squared and used by the PC decision device to adjust the transmitted power (δ_bias compensates for the bias imposed by the simulated noise), and fed back with a delay of D_PC. To find the power as experienced by the target mobile, we attenuate the transmitted power by the difference in propagation losses L_LOSS^(u,v) − L_LOSS^(d,v_d) between the serving cell of the interferer and the desired user, to eventually yield (ψ^(u,v)(t))^2. The values of the propagation losses are obtained as a by-product of the RNS.
We use STAR [22] to estimate the channels, with the modifications formulated in Section 2.3.1 and Figure 7. DACCA is used with code reallocation at 75 Hz. It is further assumed that DLISR-H-BC updates its constraints at a rate of 300 Hz. Working at an RSF of 8, we found that N_v = 2M_R is a good rule for good performance of DLISR-R-SD in the operating region of interest (about 5% BER). (In the absence of PC, the received power ψ² has a χ² distribution with a standard deviation that shrinks with M_T × M_R, so that it asymptotically approaches the AWGN channel at a very high diversity order M_T × M_R → ∞. With PC, however, ψ² has a log-normal distribution with a much weaker standard deviation that quickly approaches the AWGN channel with only a few antenna elements, as shown in [31]. Hence, PC significantly increases capacity and reduces the required MIMO array size. Indeed, as noted in [31], if we apply the asymptotic expression for the BER in the absence of PC to the case of active PC (as an approximation), we may expect to obtain, from standard-deviation measurements, the same capacity with PC and 3×2 antennas as would be obtained without PC and 30×2 antennas.) The PIC-SD interestingly shows strong sensitivity to this parameter, and the best choice proves to be N_v = M_R. The parameters most commonly used in the simulations, unless otherwise specified, are summarized in Table 6. All BER estimates reported are derived from at least 150 RNS realizations, and each realization was run for at least 19000 symbols.
SISO with QPSK modulation
We first consider a SISO system with QPSK modulation. The SBR is 20% and the SF is L = 8. We employ one channelization-code group composed of L = 8 orthogonal Walsh codes. Note that the high soft-blocking rate considered reflects the high-data-rate services that we are considering. The carried traffic is hence hard-limited to a maximum of 8 users. Code blocking occurs rarely with the traffic loads we consider, and its influence is negligible compared to the SBR of 20%. This claim holds for all simulations reported herein.
Figure 11 shows the uncoded BER as a function of the carried Erlang traffic in the network. Our proposed DLISR variants significantly outperform MRC. They provide Erlang capacity gains over MRC-based SUD at 5% BER of 3.5 dB (DLISR-H-BC), 3.2 dB (DLISR-R-SD), and 1.6 dB (DLISR-H-FC), respectively. Although PIC-SD is similar to DLISR-R-SD, it can only offer a gain of 2.6 dB. This illustrates the advantage of linearly constrained beamforming compared to subtraction. With DLISR-H-BC, we achieve the highest spectral efficiency of 0.78 bps/Hz, where the spectral efficiency is defined as η_S = log_2(M_Mod) T_Erl/L, M_Mod denotes the number of symbols in the signal constellation, and T_Erl is the carried Erlang traffic (at 5% BER). It is noteworthy that the H mode on the UL did not demonstrate as good performance; the pronounced near-far situations on the DL make its application attractive. Note that if we instead compare capacities at BER levels 3 dB below and above 5% (i.e., 2.5% and 10%), the DLISR-H-BC capacity gains over MRC are 4.5 dB and 2.3 dB, respectively. It is therefore advantageous for DLISR (and MUD solutions in general) to compare along lower BER levels. Our internal studies have shown that 5% is an appropriate target if a rate-1/2 convolutional code with constraint length 9 is assumed. We therefore continue to aim at 5%.
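Reading the reported spectral efficiency back into carried traffic is a useful consistency check (the ≈3.1 Erlang figure below is derived from the quoted values, not quoted directly in the text):

```latex
% Carried traffic implied by the DLISR-H-BC spectral efficiency (SISO, QPSK):
\eta_S = \frac{\log_2(M_{\mathrm{Mod}})\, T_{\mathrm{Erl}}}{L}
\;\Rightarrow\;
T_{\mathrm{Erl}} = \frac{\eta_S\, L}{\log_2(M_{\mathrm{Mod}})}
= \frac{0.78 \times 8}{2} \approx 3.1\ \text{Erlangs at 5\% BER.}
```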
2 × 2 MIMO with QPSK modulation
We now consider a 2 × 2 MIMO system. The SF is still 8, but the PG is 16 because of the extra antenna. Since we have two transmitting antennas, we have defined two groups of channelization codes. One group consists of L = 8 orthogonal Walsh codes; the second group likewise consists of 8 orthogonal codes obtained from the first group by a 45° rotation (see the example in Section 2.1). Results are shown in Figure 12.
DLISR-H-BC, DLISR-R-SD, PIC-SD, and MRC achieve the same relative capacity gain of about 3.9 dB compared to SISO. The advantage of linearly constrained beamforming (DLISR-R-SD) compared to subtraction (PIC-SD) is confirmed in this situation as well. The best spectral efficiency of 1.95 bps/Hz is again achieved by DLISR-H-BC. About 3 dB of these gains are due to the antenna gain; the rest is a combination of diversity and statistical multiplexing gain on the air interface. DLISR-H-FC improves in MIMO compared to SISO, achieving a relative gain of 5.1 dB over SISO. DLISR-H-FC experiences a statistical gain because more users are active in the MIMO system and randomness hence plays a less dominant role. Since this variant uses completely fixed constraints, interference energy is more likely to be concentrated where expected.
4 × 4 MIMO with QPSK modulation
We increase the number of receive and transmit antennas to four. Four code groups were determined by computer search with the objective of minimizing the intergroup cross-correlation. Results are shown in Figure 13 (for MRC, DLISR-H-BC, and PIC-SD). The spectral efficiency of both DLISR and MRC doubles compared to the 2 × 2 MIMO system. We are hence able to retain our MUD advantage of at least 3 dB over MRC-based SUD. PIC-SD, as usual, performs worse than DLISR and can only provide a gain of 2.1 dB over MRC. With DLISR-H-BC, we can now support 17 Erlangs of 480 kbps traffic per sector, corresponding to a spectral efficiency of 4 bps/Hz per sector. Comparing SISO, 2 × 2 MIMO, and 4 × 4 MIMO, we notice that capacity increases linearly with the number of antennas. This linear relationship was also found in [9] for the MMSE MUD in an interference-limited cellular system. In cellular interference-limited systems, the gain is limited to the antenna gain and is therefore dictated by the number of receive antennas. Note that multiple transmit antennas still serve to alleviate the shortage of OVSF codes and can provide additional time diversity.
2 × 2 MIMO with 16-QAM modulation
We use the same settings as in Section 5.4 but consider now 16-QAM modulation corresponding to a bit rate of 960 kbps after rate-1/2 coding.Figure 14a shows the uncoded BER as a function of the carried traffic.We have used the 16-QAM symbol constellation suggested in [4].
The capacity gain of DLISR compared to MRC becomes dominant, offering an 8.1 dB capacity increase with DLISR-H-BC. DLISR-R-SD performs slightly worse, with a 7.7 dB gain over MRC, but as usual it outperforms PIC-SD, which only provides a gain of 6.7 dB. The remarkable gains over MRC are a result of the increased data rate, which effectively exacerbates the near-far situations because the interference is limited to fewer sources. Compared to the QPSK results, the carried Erlang traffic is reduced by about 5.4 dB for the DLISR variants. The spectral efficiency, which decreases less due to the doubled symbol rate, is 1.1 bps/Hz for DLISR-H-BC, corresponding to a reduction of 2.6 dB compared to MIMO QPSK.
Higher capacities can always be achieved at the expense of an increased SBR, because it implies a higher SINR operating point even though the carried traffic is constant. To see the effect, Figure 14b shows performance with SBR = 60%. The spectral efficiency is increased for all modes. For instance, the DLISR-H-BC spectral efficiency is increased by 1.8 dB, yielding an absolute spectral efficiency of about 1.5 bps/Hz. This illustrates the important trade-off between capacity and network SBR. A higher SBR reduces the benefit of DLISR compared to MRC slightly, but it is still a significant 6.5 dB with DLISR-H-BC. MRC benefits more from increased SBR because in-cell interference becomes dominant and therefore an orthogonality gain, which is more pronounced for MRC, is achieved.
CONCLUSION
In this paper, we have presented a new MUD for DL MIMO systems. Our solution is based on the previously presented ISR and is denoted DLISR. We have defined three variants of DLISR with different performances and complexities. The DLISR variants share one common feature: they employ VIR, and they can benefit from dynamic allocation of channelization codes at the base station using the DACCA technique. VIR significantly reduces complexity because interference is rejected at a low (virtual) SF. With DACCA, the base station assigns channelization codes with the aim of concentrating interference in a small portion of the OVSF code tree. With DACCA, VIR therefore becomes even more efficient because we can attack interference at a lower SF, reducing complexity further. We note that only one of our solutions requires DACCA; the remaining solutions benefit from DACCA in terms of complexity.
Performance of DLISR has been evaluated with the aid of a realistic simulation model consisting of an RNS and a link-level simulator.The RNS generates interference scenar-ios similar to those experienced in real life.These realizations of interference are used by the link-level simulator to produce BER performance statistics.At both levels, we have strived to use realistic assumptions.As a benchmark, we have considered the MRC-based SUD and the PIC-SD.
The Erlang capacity of the network is found to grow linearly with the number of receive antennas for both MRCbased SUD and our new DLISR MUD despite the existence of interference.Significant increases of capacity are achieved with DLISR which offers capacity gains over MRC-based SUD of at least 3 dB for QPSK (480 kbit/s) and about 6.5-8.1 dB when 16 QAM (960 kbit/s) is employed.A 4 × 4 MIMO system can support 17 Erlangs of 480 kbit/s traffic per sector corresponding to a spectral efficiency of 4 bits/s/Hz.DLISR-H-BC always achieves best performance outperforming DLISR-R-SD by about 0.3-0.5 dB.DLISR-R-SD outperforms PIC-SD by 0.5-0.9dB, hence illustrating the advantage of linearly constrained beamforming (DLISR-R-SD) compared to subtraction (PIC-SD).DLISR-H-FC generally achieves the least gain over MRC, but it also possesses the simplest structure.
Our DLISR solutions have low complexity when DACCA is employed in UTRAN.The gains cited herein are achieved at a complexity of about 1.6 Gops, which is only about 4 times that of SUD, and close to the complexity of PIC-SD.The realistic assumptions of our study suggest that our solution is low risk.The new DLISR MUD is therefore a serious candidate for DL MUD in CDMA-based MIMO and SISO systems.
Figure 1: Organization of operations for radio-network and link-level simulations.
Figure 2: Block diagram of an M_T × M_R MIMO transceiver structure with emphasis on transmitter and channel.
Figure 4: Relation between H, R, and TR modes can be illustrated from the composition of the constraint matrix.
Figure 5: DACCA and VIR illustrated. (a) In DACCA, users are assigned channelization codes according to their PSFP. (b) Interference rejection is aimed at a low SF when VIR is employed.
Figure 6: Relative power of interferers arriving from different sources. 1 In-cell is the strongest in-cell interferer, 1@1 neighbor is the strongest interference from first-tier neighbors.
Figure 8: Flowchart of the radio-network simulation operations.
Figure 9: Estimated SBR as a function of the SINR target in a homogeneous system (curves for T_OFF = 2, 4, 6, and 8). The coverage blocking is fixed at 10% and the PG is 16.
Figure 10: Simplified PC modeling used to model PC (for interfering users only).
Figure 11: Uncoded BER performance as a function of the offered traffic. The modulation is QPSK and the channel is SISO. The SF is 8, corresponding to an information rate of 480 kbps (rate-1/2 coding assumed).
Figure 12: Uncoded BER performance as a function of the offered traffic. The modulation is QPSK and the channel is 2 × 2 MIMO. The SF is 8, corresponding to an information rate of 480 kbps (rate-1/2 coding assumed).
Figure 13: Uncoded BER performance as a function of the offered traffic. The modulation is QPSK, the channel is 4 × 4 MIMO, and the SBR is 20%. The SF is 8, corresponding to an information rate of 480 kbps (rate-1/2 coding assumed).
Figure 14: Uncoded BER performance as a function of the offered traffic. The modulation is 16-QAM, the channel is 2 × 2 MIMO, and the SF is 8, corresponding to an information rate of 960 kbps (rate-1/2 coding assumed). (a) The SBR is 20%. (b) The SBR is 60%.
Table 1: Definition of the constraint matrix of each mode. (Each generic column Ĉ_{j,n} is normalized to one.) Columns: ISR mode; Ĉ_n = [..., Ĉ_{j,n}, ...]; N_c (number of constraints).
Table 2: Proposed DLISR receiver structure and important characteristics of the new ISR variants for DL MIMO. Ñ_CELLS ≤ N_CELLS is the number of (virtual) interferers selected for rejection.
Table 3: Choosing the best strategy.
Table 4: Complexity estimates of ISR variants in Mops.
Table 5: Parameters used for the RNS.
Table 6: Parameters used in link-level simulations (unless otherwise specified).
"Computer Science",
"Engineering"
] |
Effects of inter-resonator coupling in split ring resonator loaded metamaterial transmission lines
This paper investigates the effects of inter-resonator coupling in metamaterial transmission lines loaded with split ring resonators (SRRs). The study is performed from Bloch mode theory applied to the multiport equivalent circuit model of the unit cell of such artificial lines. From this analysis, it follows that the stopband bandwidth, inherent to SRR-loaded lines, is enhanced as inter-resonator coupling strengthens, and this enhancement is attributed to the presence of complex modes. The theoretical results are corroborated through calculation of the dispersion relation using a full-wave eigenmode solver, and also by measuring the frequency response of SRR-loaded lines with different inter-resonator distance and, hence, coupling. © 2014 AIP Publishing LLC. [http://dx.doi.org/10.1063/1.4876444]
I. INTRODUCTION
Transmission lines periodically loaded with split ring resonators (SRRs) 1 inhibit wave propagation in the vicinity of the SRR fundamental resonance. 2,3 As long as the SRRs and their spacing are electrically small, these SRR-loaded lines can be considered to be one-dimensional effective media exhibiting a negative effective permeability in a narrow band above SRR resonance (the effective permeability of these structures is described by the Lorentz model 4,5 ). Actually, the stopband of these lines includes not only the region where the effective permeability is negative but also a narrow band below the resonance frequency of the SRRs where the effective permeability may be interpreted to be highly positive. Additionally, according to Ref. 6, in the event that SRRs are coupled to each other, the effective permeability becomes complex (under lossless conditions) within a region that emerges in the transition from positive to negative values of its real part. Nevertheless, in the present work the interpretation of the stopband is based on the analysis of Bloch mode theory, rather than on the effective permeability.
SRR-loaded lines have been applied to the implementation of stopband filters, where bandwidth has been controlled by slightly varying the dimensions of the SRR array. 7 The resulting structures can be viewed as quasi-periodic transmission lines where the effective permeability varies along the line. With this strategy, it is clear that the resonance frequency of the different SRRs can be slightly tuned along the desired frequency range, with the result of a broadened stopband. Alternatively, bandwidth can be enhanced by using tightly coupled SRRs. This approach was recently considered in transmission lines periodically loaded with complementary split ring resonators (CSRRs), 8 formerly proposed in Ref. 9. Subsequently, this approach was applied in Ref. 10 to widen the common mode suppressed band of differential microstrip lines.
In order to achieve significant inter-resonator coupling, the CSRRs in Ref. 8 were chosen to be rectangular-shaped (the long side being oriented along the transversal direction of propagation) and separated by very small distances. As found therein, the relevant feature is that rejection bandwidth enhancement in CSRR-loaded lines with tightly coupled resonators can be related to the presence of complex modes supported by the corresponding periodic infinite structure. These modes, in spite of the absence of losses, have complex propagation constants and appear as conjugate pairs in reciprocal structures. 12-17 The existence of these modes in CSRR-loaded lines was demonstrated through Bloch mode theory, by analyzing the equivalent four-port circuit model of the unit cell, and by obtaining the modal solutions through a full-wave eigenmode solver. 8 Analogously, the theoretical analysis of SRR-loaded lines with coupled resonators carried out in Ref. 6 also leads to a pair of complex propagation constants in the region where a complex effective permeability is exhibited.
In this paper, we study the effects of coupling in transmission lines loaded with pairs of SRRs magnetically coupled to the nearest neighboring pairs of resonators. The magnetic coupling between resonators of adjacent cells is thus accounted for in the model. Therefore, the resulting lumped element equivalent circuit model of the unit cell is a four-port circuit, analogous to the four-port circuit that describes the unit cell of a CSRR-loaded line with inter-resonator coupling. The dispersion relation of SRR-loaded lines, inferred from Bloch mode analysis applied to the circuit model, was already derived in a previous publication by the authors. 18 In this paper, further details on the derivation of such relation are given. Moreover, the modal solutions obtained theoretically are validated by obtaining the dispersion relation by means of a numerical eigenmode solver. Finally, we report the characterization of two fabricated structures with different levels of coupling between SRRs, in order to experimentally confirm the effects of coupling on bandwidth enhancement.
II. TOPOLOGY AND CIRCUIT MODEL OF THE SRR-LOADED LINES
The topology of the considered SRR-loaded lines (unit cell) is depicted in Fig. 1. It consists of a coplanar waveguide (CPW) transmission line loaded with pairs of rectangular SRRs etched on the back side of the substrate. It is important to highlight that the symmetry plane of the SRRs (crossing the gaps) is orthogonally oriented to the line axis. This orientation is necessary to guarantee that the line is only capable of exciting the SRR fundamental resonance through the magnetic coupling. With different orientations, mixed coupling is required for an accurate description of the structure, as discussed and reported in Ref. 19. Obviously, mixed coupling between the line and resonators makes the analysis of SRR-loaded lines with inter-resonator coupling much more cumbersome, and, for this reason, we have considered such SRR orientation in the present study. Thus, with this SRR orientation, the lumped-element equivalent circuit model of these structures, including the magnetic coupling between SRRs of neighboring cells, is depicted in Fig. 2(a) (the nearest-neighbor interaction approximation is considered, and losses are neglected).
The validity of the model is restricted to those frequencies where the resonators are electrically small enough; this extends up to frequencies beyond the SRR fundamental resonance, the region of interest. In the model, L and C are the per-section line inductance and capacitance, respectively; the SRR is described by the capacitance C_s and the inductance L_s; M is the mutual inductance between the line and the SRRs; finally, inter-resonator coupling is accounted for through the mutual inductance M_R. Note that the magnetic coupling between coplanar SRRs of adjacent cells is negative, and the proper modeling of the magnetic coupling sign is mandatory, i.e., the sign cannot be disregarded. Otherwise, the frequency response predicted by the circuit will not be able to describe correctly the behavior of the SRR-loaded lines. It is also important to highlight that when the SRRs of the same unit cell are close together, there can be positive magnetic coupling between them. However, this coupling is neglected here because its only effect is to decrease the resonance frequency.
In order to simplify the analysis of the circuit model, it is convenient to combine the parallel connection of the SRRs belonging to the same unit cell and to transform each pair of inductances coupled by the mutual inductance M_R to its equivalent T-circuit. 20 This leads us to the circuit of Fig. 2(c). The stronger the interaction between the resonant elements, the wider is the passband of the MIWs. 26 As multiconductor theory predicts that the resulting unit cell can propagate two modes, 27 forward and backward waves are expected to coexist at some frequency band.
III. BLOCH MODE ANALYSIS AND DISPERSION RELATION
The dispersion characteristics of these SRR-loaded lines can be obtained from Bloch mode theory applied to the four-port network of Fig. 2(c). Let us denote V_Li and I_Li as the voltages and currents at the ports (i = 1, 2) of the left-hand side of the unit cell, and V_Ri and I_Ri as the variables at the right-hand side ports. The variables at both sides of the network are linked through a generalized order-4 transfer matrix, where V_L, I_L, V_R, and I_R are column vectors composed of the pairs of port variables, and A, B, C, and D are order-2 matrices.
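The transfer-matrix equation itself is not preserved in the extracted text; the generic block form that the surrounding definitions describe would read as follows (a reconstruction, not necessarily the paper's exact equation (1)):

```latex
\begin{pmatrix} \mathbf{V}_L \\ \mathbf{I}_L \end{pmatrix}
=
\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}
\begin{pmatrix} \mathbf{V}_R \\ \mathbf{I}_R \end{pmatrix},
\qquad \mathbf{A},\,\mathbf{B},\,\mathbf{C},\,\mathbf{D} \ \text{order-2 matrices}.
```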
The dispersion relation is obtained from the eigenmodes of the system (1), i.e., from the associated eigenvalue problem, where I is the identity matrix, the propagation factor e^{γl} is the eigenvalue, γ = α + jβ is the complex propagation constant, and l is the unit cell length. For reciprocal, lossless, and symmetric networks, the eigenvalues can be simplified to the solutions of a reduced determinantal equation, 28,29 where the elements of the A matrix (inferred from the network of Fig. 2(c) as detailed in Appendix A) are given in terms of the circuit parameters. Since the network of Fig. 2(c) is lossless, the elements of A (A_ij) are real numbers. Hence, if the expression under the square root in (4) is positive, the propagation constant is either purely real (α ≠ 0, β = 0) or purely imaginary (α = 0, β ≠ 0), corresponding to evanescent or propagating modes, respectively. However, if that expression is negative, the two solutions are of the form γ = α ± jβ, corresponding to complex modes. The frequency band that supports complex modes is thus obtained by forcing the expression under the square root in (4) to be negative. Since complex modes do not carry net power, the frequency band supporting such modes is a rejection band, despite being of a different nature than that associated with evanescent modes (where α ≠ 0, β = 0). Inspection of (4) and (5) reveals that a necessary condition for the presence of complex modes is that M is different from zero (this is always the case, unless the substrate of the considered CPW is extremely thick). Notice that M = 0 means that the host line and the SRR array are decoupled. In this situation, the second term of the expression under the square root in (4) is null, and hence, the square root is a real number, preventing the appearance of complex modes. Indeed, for M = 0 the two modal solutions decouple into the dispersion relation of a lossless transmission line described by the well-known LC ladder network, and the dispersion of an array of inductively (edge) coupled SRRs, where MIWs are supported in a narrow frequency band in the vicinity of SRR resonance. Note that MIWs can exist as long as the reactance of the series resonator (between the ports L2 and R2) is capacitive and, obviously, this is another condition for supporting complex modes. The dispersion relation (4) in the limit M_R → 0, corresponding to negligible inter-resonator coupling, is also interesting to obtain. Under these conditions, expression (8) arises (see Appendix B): this is the dispersion relation of an SRR-loaded line without coupling between SRRs, which can be inferred from the biport model reported in Ref. 30 [and depicted in Fig. 3(b)] by applying Bloch mode analysis (see also Appendix B).
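As a purely numerical illustration of this mode classification (not code from the paper), the mode types can be read directly from the eigenvalues of the order-4 transfer matrix; the sketch below assumes the unit-cell matrix T is already available as a 4 × 4 NumPy array, however it is obtained.

```python
import numpy as np

def classify_bloch_modes(T, l, tol=1e-9):
    """Classify Bloch modes of a unit cell of length l from its 4x4 transfer matrix T.

    The eigenvalues of T are the propagation factors exp(gamma*l); gamma = alpha + j*beta
    is purely imaginary for propagating modes, real (or with beta = pi/l) for evanescent
    modes, and fully complex for complex modes.
    """
    modes = []
    for lam in np.linalg.eigvals(T):
        gamma = np.log(complex(lam)) / l
        alpha, beta = gamma.real, gamma.imag
        if abs(alpha) < tol:
            label = "propagating (alpha = 0)"
        elif abs(beta) < tol or abs(abs(beta) - np.pi / l) < tol:
            label = "evanescent (beta = 0 or pi/l)"
        else:
            label = "complex mode (alpha != 0, beta != 0)"
        modes.append((gamma, label))
    return modes
```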
Although low-loss microwave substrates and very low-resistivity conductors are used, some losses are always present in real SRR-loaded lines. However, the dispersion relation given by (4) can still be considered a reasonable approximation to the actual dispersion of the structure. Indeed, the analysis excluding losses suffices for the purposes of this paper, i.e., the investigation of the effects of inter-resonator coupling on the dispersion characteristics and frequency response. In any case, the effects of losses may be easily evaluated by merely including resistors in the circuit model.
It is also worth mentioning that the dispersion characteristics of transmission lines loaded with SRRs (considering coupled and decoupled SRRs) and metallic rods were analyzed in Ref. 23. It was proven that interaction between MIWs propagating through an array of SRRs and incident electromagnetic waves (modeled by the equivalent circuit of a transmission line) may exist, leading to the appearance of a stopband. Subsequently, an extended analysis was reported in Ref. 6 without the rods (i.e., by considering the same structure as in this work), and that stopband was found to be due to the presence of complex modes. In comparison to Refs. 6 and 23, in this paper we obtain the dispersion relation from the multiport equivalent circuit, providing the details of the calculation of the transfer matrix, and we consider a real device based on planar transmission lines and rings, instead of assuming a theoretical generic system. Moreover, we provide numerical results inferred from an eigenmode solver and experimental evidence of stopband bandwidth enhancement caused by inter-resonator coupling.
A. Parameter extraction and equivalent circuit model validation
We have extracted the circuit parameters of the structure of Fig. 1. To this end, the electromagnetic simulation of an isolated unit cell was performed (with the Agilent Momentum commercial software). The circuit model of a decoupled unit cell is depicted in Fig. 3(a). We have extracted the circuit elements of the transformed model of Fig. 3(b) following the procedure reported in Ref. 30. Then, L_s has been estimated as the self-inductance of an isolated (without the CPW structure) single split ring with the same average radius and ring width as the considered SRRs (in a quasi-static approximation, the total current flowing on the pair of SRR rings is independent of the position on the SRR 5 ). Thus, the equivalent inductance seen between the end terminals of the single split ring has been extracted from the electromagnetic simulation. By using the estimated L_s (17.66 nH), the circuit elements of Fig. 3(a) have been obtained from the indicated transformation equations. Finally, we have inferred the mutual inductance M_R of the circuits of Fig. 2 by curve fitting the circuit simulation to the electromagnetic simulation of a 2-cell structure. It is important to realize that, since the SRRs of the input/output cells are not externally fed, the ports L2 (input cell) and R2 (output cell) have been left open. Therefore, the transmission and reflection coefficients are referred to a two-port circuit (L1 and R1) rather than to the four-port circuit of the proposed model.
The extracted parameters are listed in the caption of Fig. 4. The comparison between the electromagnetic and circuit simulations of a unit cell and of two cascaded unit cells is depicted in Fig. 4, where good agreement is observed in the transmission and reflection coefficients. Concerning the modeling of higher order structures, Fig. 5(a) shows the frequency response for nine cascaded unit cells. As can be seen, the central frequency of the stopband shifts upwards as the number of cells increases, this effect being produced by the negative inter-resonator coupling. It can also be observed that the stopband inferred from the electromagnetic simulation is even slightly more shifted than the one predicted by the circuit simulation. It has been found (by including additional couplings between non-adjacent cells) that this discrepancy is due to the assumption of the first-neighbor approximation. In this regard, as Fig. 5(b) confirms, an increase in the inter-resonator distance reduces the impact of such approximation on the frequency response shift. In any case, the first-neighbor approximation suffices for the purpose of the present work.
It is also important to point out that inter-resonator coupling splits the SRR resonance frequencies, so that the number of transmission zeros equals the number of resonators. Moreover, the stronger the coupling, the stronger the splitting. 20 As a result, the stopband bandwidth broadens with inter-resonator coupling, although this enhancement is limited since it saturates with relatively few cells. For instance, the stopband bandwidth (computed at −20 dB) obtained from the circuit simulation for an order-9 structure [Fig. 5(a)] ranges from 1.879 GHz to 1.966 GHz. This corresponds to a bandwidth similar to the maximum achievable bandwidth given by the dispersion relation for an infinite structure in subsection III B.
B. Dispersion relation validation
Once the circuit parameters have been extracted, we can obtain the pair of modal propagation constants given by expression (4). The results are depicted in Fig. 6. As in Ref. 8, in the first allowed band there is a region with a bi-valued propagation constant: one (forward) corresponding to transmission-line type propagation and the other (backward) related to magnetoinductive waves. Then, a region with a pair of conjugate complex propagation constants (complex modes) appears, where forward and backward waves interfere with each other, followed by a region of evanescent waves. Finally, a forward wave transmission band emerges again. Hence, the enhancement of the stopband due to inter-resonator coupling is explained by the appearance of complex modes in the low frequency region of that stopband (the complex modes exist from 1.843 GHz to 1.961 GHz, and the evanescent modes extend up to 1.977 GHz). However, the magnetic coupling between SRRs of adjacent cells is limited, and so is the bandwidth broadening.
We have also obtained the dispersion relation of a periodic structure composed of a cascade of the unit cell in Fig. 1 by means of the full-wave eigenmode solver of CST Microwave Studio. The results, also depicted in Fig. 6, reveal that there is good agreement with the analytical dispersion curve predicted by the circuit model (the bi-valued region is perfectly predicted by the eigenmode solver). However, since there is no electromagnetic field pattern with net current transfer in the stopband, the tool is not able to provide the dispersion curves in that region.
For comparison purposes, we have also considered a structure with a larger inter-resonator distance, providing much weaker coupling. The dispersion diagram, depicted in the inset of Fig. 6, reveals that the stopband bandwidth is significantly narrower. Therefore, these dispersion diagrams indicate that most of the stopband in the structure of Fig. 1 is related to the presence of complex, rather than evanescent, modes. In other words, as long as inter-resonator coupling is significant, complex modes may be the dominant mechanism of signal rejection (in the vicinity of the SRR fundamental resonance) in these SRR-loaded structures.
IV. EXPERIMENTAL RESULTS
To experimentally validate the effects of inter-resonator coupling on bandwidth enhancement, two order-9 structures have been fabricated (Fig. 7): one of them with the unit cell of Fig. 1, and the other by considering l = 4.8 mm. The measured transmission and reflection coefficients are in good agreement with those given by the lossy electromagnetic simulation (see Fig. 8). The measured fractional stopband bandwidth (computed at −20 dB) is 5.2% and 2.4% for the structures of Figs. 7(a) and 7(b), respectively. The measured stopbands are also in accordance with those obtained from the dispersion relation. Hence, the dispersion relation inferred from the multi-terminal circuit model is a powerful tool to gain insight into the stopband and the effects of inter-resonator coupling in SRR-loaded lines.
V. CONCLUSIONS
It has been demonstrated that SRR-loaded lines with tightly coupled resonators exhibit forward (transmission line-type), backward (magnetoinductive-related), and complex modes. The structures have been analyzed using multiport Bloch mode theory applied to the lumped element equivalent circuit model, and the dispersion characteristics have been obtained. It has been found that complex modes are responsible for bandwidth enhancement of the stopband. These complex modes have been interpreted as the destructive interference between forward and backward waves. Since backward waves are supported by the chain of SRRs, inter-resonator coupling is absolutely mandatory for complex modes to emerge. Indeed, the behavior of SRR-loaded lines with strongly coupled resonators is very similar to that of CSRR-loaded lines. 8 The main difference is the nature of the propagating waves in the frequency region that supports backward waves (bi-valued region with multimode forward and backward propagation). In CSRR-loaded lines, the backward waves are electroinductive-like waves, whereas in SRR-loaded lines, backward transmission is due to magnetoinductive waves. The theoretical results have been validated by means of a numerical eigenmode solver, able to provide the dispersion relation, and also by measuring the transmission and reflection characteristics of two fabricated SRR-loaded lines (one with weak inter-SRR coupling, and the other one with significant coupling).
By choosing the "−" sign (i.e., the solution with physical meaning) in the last term, we obtain expression (8).
To demonstrate that (8) is the dispersion relation corresponding to the two-port that models the unit cell of an SRR-loaded line without inter-resonator coupling, we apply Bloch mode analysis to the model of Fig. 3(b), where A is the first element of the transfer matrix of the two-port, and Z_s and Z_p are the impedances of the series and shunt branches of the network of Fig. 3(b). Calculation of (B5) for the circuit of Fig. 3(b) gives (B6), where ω_o = (L_s C_s)^(−1/2). Using the element transformations of Fig. 3, expression (B6) can be rewritten as (B7), which, in turn, can be simplified to the dispersion relation shown in (8). Thus, it is clearly demonstrated that the general dispersion relation given in (4) for the four-port network of Fig. 2(c) is also able to account for the case of SRR-loaded lines with negligible inter-resonator coupling (i.e., M_R → 0).
FIG. 1. Typical unit cell of a CPW transmission line loaded with a pair of SRRs designed to enhance coupling between resonators of neighboring cells. Dimensions are: W = 9.1 mm, G = 1.7 mm, l = 3 mm, c = d = 0.15 mm, l_1 = 2.8 mm, and l_2 = 9.8 mm. The considered substrate is Rogers RO3010 with thickness h = 1.27 mm and dielectric constant ε_r = 11.2. The Bloch wave propagates from port 1 to port 2.
FIG. 3. Lumped element equivalent circuit model (unit cell) of the structure in Fig. 1 without inter-resonator coupling (a), and the corresponding transformed model (b). Reprinted with permission from Naqui et al., International Conference on Electromagnetics Advanced Applications (ICEAA), Copyright 2013 by IEEE. 18
FIG. 4. Magnitude (a) and phase (b) of the lossless transmission (S_21) and reflection (S_11) coefficients for a unit cell and for two cascaded unit cells of the structure in Fig. 1. The extracted circuit parameters are: L = 1.01 nH, C = 1.40 pF, L_s = 17.66 nH, C_s = 0.45 pF, M = 0.72 nH, and M_R = 1.17 nH. (a) is reprinted with permission from Naqui et al., International Conference on Electromagnetics Advanced Applications (ICEAA), Copyright 2013 by IEEE. 18
FIG. 6. Dispersion diagram for the structure of Fig. 1 inferred from an eigenmode solver and from its equivalent circuit model of Fig. 2(c). The circuit parameters are those indicated in the caption of Fig. 4. The dispersion diagram for the structure of Fig. 1 with l = 4.8 mm is depicted in the inset. The attenuation constant α is not provided by the eigenmode solver.
"Engineering",
"Physics"
] |
Capacitated Vehicle Routing Problem Solving using Adaptive Sweep and Velocity Tentative PSO
Vehicle Routing Problem (VRP) has become an integral part of logistic operations, determining optimal routes for several vehicles to serve customers. The basic version of VRP is the Capacitated VRP (CVRP), which considers equal capacities for all vehicles. The objective of CVRP is to minimize the total traveling distance of all vehicles to serve all the customers. Various methods are used to solve CVRP; among them the most popular approach splits the task into two different phases: assigning customers to different vehicles and then finding the optimal route of each vehicle. The Sweep clustering algorithm is well studied for clustering nodes. On the other hand, route optimization is simply a traveling salesman problem (TSP), and a number of TSP optimization methods are applied for this purpose. In Sweep, the cluster formation starting angle is identified as an element of CVRP performance. In this study, a heuristic approach is developed to identify an appropriate starting angle in Sweep clustering. The proposed heuristic approach considers the angle difference of consecutive nodes and the distance between the nodes as well as their distances from the depot. On the other hand, velocity tentative particle swarm optimization (VTPSO), the most recent TSP method, is considered for route optimization. Finally, the proposed adaptive Sweep (i.e., Sweep with the proposed heuristic) plus VTPSO is tested on a large number of benchmark CVRP problems and is revealed to be an effective CVRP solving method when its outcomes are compared with other prominent methods. Keywords—Capacitated vehicle routing problem; Sweep clustering and velocity tentative particle swarm optimization
I. INTRODUCTION
Vehicle Routing Problem (VRP) has become an integral part of logistic operations, determining optimal routes for several vehicles to serve customers [1]. A proper selection of vehicle routes is very important to promote economic benefits in operations. VRP is a hard optimization task whose goal is to minimize the total traveling distance of all the vehicles to serve all the customers from a depot. The general constraints of VRP are that each customer is serviced exactly once (by a single vehicle) and that the total load of a route does not exceed the capacity of the assigned vehicle [2].
The basic version of VRP is the Capacitated VRP (CVRP), which considers equal capacities for all the vehicles [3], [4], [6]. The simplest form of CVRP considers one depot; vehicles depart from the depot at the beginning and return to the depot at the end. In CVRP, all customers have known demands and known locations for the delivery.
CVRP is a complex optimization task whose objective is to minimize the total traveling distance for all vehicles to serve all customers. Mathematically, CVRP is defined by an objective function (1) subject to constraints (2)-(7). The objective function (1) states that the total traveling distance of all vehicles (i.e., the CVRP cost) is to be minimized. Equation (2) represents the constraint that each customer must be visited once by one vehicle, where y_i^v = 1 if vehicle v visits customer i and zero otherwise. It is guaranteed in (3) and (4) that each customer is visited and left by the same vehicle, where x_ij^v = 1 if vehicle v travels from customer i to customer j, and 0 otherwise. Constraint (5) ensures that the total delivery demand of vehicle v does not exceed the vehicle capacity. Equations (6) and (7) express that the vehicle availability should not be exceeded.
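The equations themselves are not preserved in the extracted text. A hedged reconstruction of a standard formulation matching the description above is sketched below; the symbols c_ij (travel distance), d_i (demand), Q (vehicle capacity), K (number of vehicles) and the depot index 0 are assumptions, not necessarily the paper's original notation.

```latex
\begin{align}
\min \quad & \sum_{v=1}^{K}\sum_{i=0}^{n}\sum_{j=0}^{n} c_{ij}\, x_{ij}^{v} \\
\text{s.t.} \quad & \sum_{v=1}^{K} y_{i}^{v} = 1, && i = 1,\dots,n \\
& \sum_{j=0}^{n} x_{ij}^{v} = y_{i}^{v}, \qquad \sum_{j=0}^{n} x_{ji}^{v} = y_{i}^{v}, && \forall i,\ \forall v \\
& \sum_{i=1}^{n} d_{i}\, y_{i}^{v} \le Q, && \forall v \\
& \sum_{j=1}^{n} x_{0j}^{v} \le 1, \qquad \sum_{i=1}^{n} x_{i0}^{v} \le 1, && \forall v
\end{align}
```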
Various methods have been investigated to solve CVRP in the last few decades. A number of methods are available that optimize customer assignment to vehicles and the routes of the vehicles together [5]. On the other hand, the most popular way of solving CVRP is splitting the task into two different phases: firstly, assigning customers to different vehicles and, secondly, finding the optimal route for each vehicle [2]. Among several ways of assigning customer nodes, the Sweep clustering algorithm is well studied due to its simplicity. The algorithm calculates the polar angles of all the nodes and then assigns nodes to different clusters according to their angles [5], [10]. The algorithm can be implemented using two different methods, forward Sweep (i.e., anti-clockwise) and backward Sweep (i.e., clockwise) [8]. On the other hand, route optimization is simply a traveling salesman problem (TSP), and a TSP optimization method is, in general, employed for this purpose [8], [9].
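To make the two-phase idea concrete, a minimal sketch of standard Sweep clustering is given below (an illustrative helper, not the paper's code); node coordinates are assumed to be given relative to the depot at the origin.

```python
import math

def sweep_clusters(coords, demands, capacity, start_angle=0.0, clockwise=False):
    """Standard Sweep: order customers by polar angle around the depot and open a new
    cluster whenever the accumulated demand would exceed the vehicle capacity.

    coords:  {node_id: (x, y)} with the depot at (0, 0)
    demands: {node_id: demand}
    """
    def sweep_key(node):
        x, y = coords[node]
        ang = math.degrees(math.atan2(y, x)) % 360.0
        rel = (ang - start_angle) % 360.0
        return (360.0 - rel) % 360.0 if clockwise else rel

    clusters, current, load = [], [], 0.0
    for node in sorted(coords, key=sweep_key):
        if current and load + demands[node] > capacity:
            clusters.append(current)
            current, load = [], 0.0
        current.append(node)
        load += demands[node]
    if current:
        clusters.append(current)
    return clusters
```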
A number of CVRP studies are available that use traditional TSP optimization methods with Sweep clustering. Nurcahyo et al. [8] investigated a Sweep-based VRP for public transport in Semarang, Indonesia. Both forward Sweep and backward Sweep were considered for clustering, and route generation was accomplished through the nearest-neighbour algorithm for TSP. Han and Tabata [9] used a Genetic Algorithm (GA) with the Sweep algorithm to solve CVRP. In the method, a chromosome of the GA is considered as a complete CVRP solution prepared from the Sweep outcome. Suthikarnnarunai [7] used integer programming to generate TSP routes for Sweep clusters; the author also introduced a 2-opt exchange to improve a VRP solution by exchanging nodes between tours. Aziz et al. [13] also investigated the nearest-neighbour algorithm with Sweep clustering to solve CVRP.
Recently, a number of nature-inspired swarm intelligence methods have been investigated to generate vehicle routes, as these methods are found efficient for solving TSP. Yousefikhoshbakht and Khorram [11] used ant colony optimization (ACO) on Sweep clusters, and then 3-opt local search was used for improving the VRP solutions. Reed et al. [12] investigated ACO with k-means clustering to solve the CVRP associated with the collection of recycling waste from households. Venkatesan et al. [14] investigated Particle Swarm Optimization (PSO) to generate vehicle tours from Sweep clusters. PSO was also investigated for CVRP by Pornsing [4].
The objective of this study is to investigate an effective CVRP solving method through adaptive Sweep, where the cluster starting angle is adaptive to the problem. Most of the Sweep-based methods, including those discussed above, considered standard Sweep for assigning customers to different vehicles and employed different methods to generate optimal routes for the vehicles. In standard Sweep, cluster formation starts from 0° and consequently advances toward 360° to assign all the nodes to different vehicles [7]. A problem with such a rigid start is that, for some instances, the number of clusters formed may exceed the total number of available vehicles. Moreover, starting from different user-defined angles has been shown to give better clustering and hence better CVRP solutions [17]. In this study, a heuristic approach is developed to identify an appropriate starting angle in Sweep clustering. On the other hand, velocity tentative particle swarm optimization (VTPSO), the most recent TSP method, is considered for route optimization. Finally, the proposed adaptive Sweep plus VTPSO is tested on a large number of benchmark CVRP problems, and the outcomes are compared with other prominent methods.
The outline of the remaining paper is as follows. Section II explains the proposed CVRP solving method with adaptive Sweep and VTPSO. Section III presents the experimental studies, reporting the outcomes of the proposed method in solving benchmark CVRPs and comparing it with other related methods. Finally, Section IV gives a brief conclusion of the paper.
II. SOLVING CVRP USING ADAPTIVE SWEEP AND VELOCITY TENTATIVE PSO (VTPSO)
This section explains proposed CVRP solving method using adaptive Sweep and VTPSO.At first it explains proposed adaptive Sweep clustering.To make the paper selfcontained, VTPSO, the considered TSP route optimization method, is also explained briefly.
A. Clutering using Adaptive Sweep
An appropriate starting angle for cluster formation is an important matter in the Sweep algorithm. Existing studies checked different fixed starting angles, but such a trial-and-check method has to be repeated for every individual problem [17]. Therefore, as an alternative, a heuristic method is investigated in this study whose aim is to identify the appropriate cluster formation starting angle (Ɵs) for a given problem.
The proposed heuristic approach considers the angle difference of consecutive nodes in the angle-based ordered node list (ONL), the distance between the nodes, and their distances from the depot. The approach first calculates a preference value (pƟ) for each pair of consecutive nodes, and the pair with the maximum pƟ determines the starting angle (Ɵs). Suppose the depot and two consecutive nodes are D, N1 and N2, respectively. The polar angles of the nodes are Ɵ1 and Ɵ2. The distances of the nodes from the depot are dN1 and dN2, and the distance between the nodes is dN12. Fig. 1 shows a graphical representation of this setting for better understanding. The preference value (pƟ) for placing the starting angle between the nodes N1 and N2, i.e., for placing the two nodes in two different clusters, is calculated using (8).
In the equation, the two arbitrary constants emphasize the angle difference and the node distances, respectively. According to the first part of (8), the preference value increases with the angular difference of the nodes (i.e., Ɵ2 − Ɵ1). The second part of the equation is the minimum distance needed to visit the two nodes from the depot. The outcome of the equation (i.e., the pƟ value) will be large if both nodes are far from the depot and the distance between them is large. On the other hand, the pƟ value will be low, even for a larger angle difference, when both nodes are close to the depot. After calculating the pƟ values for all the consecutive node pairs, the pair with the maximum value determines the starting angle. If the pƟ value for nodes N1 and N2 is found to be maximum, then cluster formation will start from N2 for anti-clockwise cluster formation. The motivation for such a start is that these two nodes should probably not be in the same cluster. Starting from N2, cluster formation consequently advances, assigning nodes to clusters considering the vehicle capacity as in standard Sweep. In such a case, N1 will be assigned to the last cluster.
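A small sketch of how this heuristic could be coded is given below. Equation (8) is not reproduced in the extracted text, so the exact combination used here (weights alpha_w and beta_w, and min-distance-plus-link as the travel term) is a guessed stand-in rather than the authors' formula; the values 0.6 and 0.2 are the constants reported later in the experimental section.

```python
import math

def select_starting_angle(onl, angle, dist_depot, dist, alpha_w=0.6, beta_w=0.2):
    """Pick the Sweep starting angle from consecutive node pairs of the ordered node list.

    onl:        node ids sorted by polar angle (the ONL)
    angle:      {node: polar angle in degrees}
    dist_depot: {node: distance from the depot}
    dist:       {(a, b): distance between nodes a and b}
    Returns (starting_angle, start_node), where cluster formation starts at start_node.
    """
    best_p, best_node = -math.inf, onl[0]
    pairs = list(zip(onl, onl[1:] + onl[:1]))           # consecutive pairs, wrapping around
    for n1, n2 in pairs:
        ang_diff = (angle[n2] - angle[n1]) % 360.0       # angular gap between the pair
        travel = min(dist_depot[n1], dist_depot[n2]) + dist[(n1, n2)]
        p_theta = alpha_w * ang_diff + beta_w * travel   # assumed form of eq. (8)
        if p_theta > best_p:
            best_p, best_node = p_theta, n2              # anti-clockwise start at N2
    return angle[best_node], best_node
```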
B. Route Generation using VTPSO
VTPSO [15] is a recent swarm intelligence method for solving TSP that extends Particle Swarm Optimization (PSO). In PSO, every particle represents a tour and changes its tour at every iteration with a velocity calculated considering the best tour previously encountered by itself (the particle best) and the best tour encountered by the swarm (the global best). A Swap Sequence (SS) and Swap Operator (SO) based operation is considered for velocity calculation. A SO indicates two cities in a tour whose positions will be swapped. Suppose a TSP problem has ten cities and a solution is 1-2-3-6-4-5-7-8-9-10. The SO (4,6) gives a new solution S'.
Here "+" means to apply SO(s) on the solution.
A swap sequence is made up of one or more swap operators.
where the elements are swap operators. Implementing a SS means applying all the SOs to the solution in order. In traditional PSO, the new TSP tour is taken after applying all the SOs of a SS, and no intermediate measure is considered. On the other hand, VTPSO considers the calculated velocity SS as a tentative velocity and conceives a measure called partial search (PS) to apply the calculated SS in updating a particle's position (i.e., TSP tour).
VTPSO calculates the velocity SS like other PSO-based methods. At each iteration step, it calculates the velocity SS using (11), considering i) the last applied velocity (v^(t-1)), ii) the previous best solution of the particle (P_i), and iii) the global best solution of the swarm (G).
Through the PS technique, VTPSO measures the performance of the tours obtained by applying the SOs of the calculated SS one after another, and the final velocity is taken as the portion of the SS that gives the best tour. Therefore, the PS technique explores the option of getting a better tour by considering the intermediate tours of a SS while applying its SOs one by one.
In this way, the tentative intermediate tours are evaluated, and the final tour is the tentative tour providing the minimum tour cost among them (indexed 1 < j ≤ n). The detailed description of VTPSO for TSP is available in [15].
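A minimal sketch of swap operators, swap sequences and the partial-search idea is shown below (assumed helper functions, not the authors' implementation); the small example reuses the ten-city tour and SO(4, 6) from the text above.

```python
def apply_so(tour, so):
    """Apply a swap operator so = (city_a, city_b): swap the positions of the two cities."""
    t = list(tour)
    i, j = t.index(so[0]), t.index(so[1])
    t[i], t[j] = t[j], t[i]
    return t

def partial_search(tour, swap_sequence, cost):
    """Apply the SOs of a swap sequence one by one and keep the cheapest intermediate tour."""
    best_tour, best_cost = list(tour), cost(tour)
    current = list(tour)
    for so in swap_sequence:
        current = apply_so(current, so)
        c = cost(current)
        if c < best_cost:
            best_tour, best_cost = list(current), c
    return best_tour, best_cost

# Example from the text: tour 1-2-3-6-4-5-7-8-9-10 with SO(4, 6) swapping cities 4 and 6.
tour = [1, 2, 3, 6, 4, 5, 7, 8, 9, 10]
print(apply_so(tour, (4, 6)))   # -> [1, 2, 3, 4, 6, 5, 7, 8, 9, 10]
```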
III. EXPERIMENTAL STUDIES
This section experimentally investigates the efficacy of the proposed adaptive Sweep algorithm for clustering customers and of VTPSO for route generation on a set of benchmark CVRP problems. A detailed observation is also given on a selected problem for a better understanding of how the proposed method improves performance.
A. Bench Mark Data and General Experimental Methodology
In this study, a total of 51 benchmark CVRPs have been considered from two different sets of Augerat benchmark problems, A-VRP and P-VRP [16]. In A-VRP, the number of customers (i.e., nodes) varies from 32 to 80, the total demand varies from 407 to 932, the number of vehicles varies from 5 to 10, and the capacity of an individual vehicle is 100 for all the problems. For example, A-n32-k5 has 32 customers and 5 vehicles. On the other hand, in P-VRP, the number of customers varies from 16 to 101, the total demand varies from 246 to 22500, and the vehicle capacity varies from 35 to 3000. Tables I and II show a brief description of the A-VRP and P-VRP benchmark problems, respectively. The two numeric values in a problem name represent the number of nodes and vehicles associated with the problem. The detailed description of the problems is available on the provider's website. According to Tables I and II, the selected benchmark problems cover large varieties in the number of nodes, vehicles and demands, and therefore provide a diverse test bed. The benchmark problems are preprocessed before use in the experiments. A customer is represented as a coordinate in a problem. Coordinates are updated considering the depot as [0, 0] for easy calculation. The distance matrix is prepared using the coordinates. The polar angle of each customer is calculated for the angle-based sweep operation. Standard Sweep does not have any parameter to set, and it starts cluster formation from 0° (i.e., Ɵs = 0°). In adaptive Sweep, the values of the two constants were set to 0.6 and 0.2, respectively, which was found effective for most of the problems; in a few other problems the values were tuned between 0.2 and 0.6. Both anti-clockwise and clockwise sweep operations are considered in both the standard and adaptive Sweep algorithms. The experiments have been done on a PC (Intel Core i5-3470 CPU @ 3.20 GHz, 4 GB RAM) with Windows 7 OS.
B. Detailed Experimental Observation on a Selected Problem
This section presents detailed results for the A-n53-k7 problem. In route optimization with VTPSO, the population size and number of iterations were set to 100 and 200, respectively. For better understanding, experiments were conducted for standard Sweep (Ɵs = 0°) along with the adaptively selected angle.
C. Experimental Results and Performance Comparison
This section first identifies the proficiency of adaptive Sweep clustering over standard Sweep clustering in solving benchmark CVRPs. For a fair comparison, the population size and number of iterations of VTPSO were 100 and 200, respectively. The selected parameters are not optimal values but were chosen for simplicity as well as for fairness of observation. Finally, the outcomes of the proposed method are compared with prominent methods.
Tables III and IV compare CVRP costs for clustering with standard Sweep and adaptive Sweep on the A-VRP and P-VRP benchmark problems, respectively. The bottom of the tables shows the average and a Win/Draw/Lose summary. In adaptive Sweep, the cluster formation starting angle is problem dependent and selected through the proposed heuristic approach; therefore, the starting angle is different for different problems, as seen in the tables. On the other hand, standard Sweep is Sweep clustering with Ɵs = 0° only. To identify the proficiency of the proposed adaptive Sweep based approach, its outcomes have been compared with prominent CVRP methods. Among the selected methods, the hybrid heuristic approach (HHA) [13], Sweep + Cluster Adjustment [18] and Sweep Nearest [19] also use Sweep-based clustering to assign nodes to different vehicles but follow different approaches for route generation of the individual vehicles. The comparative results in Table VI identify the proposed adaptive Sweep + VTPSO as the best for the P-VRP benchmark problems. The proposed method is the best in 10 out of 24 cases and achieved an average cost of 637.17. The proposed method outperformed HHA, Centroid-based 3-phase, Sweep + Cluster Adjust and Sweep Nearest in 23, 12, 13 and 6 cases, respectively, out of 24 cases. It is notable that Sweep Nearest was tested on only 10 problems. Among the existing Sweep-based methods, HHA outperformed the proposed method only for P-n16-k8, which is a very small problem. Finally, the outcomes of the proposed method revealed the proficiency of adaptive Sweep in clustering and of VTPSO in route optimization.
IV. CONCLUSIONS
A two-phase CVRP solving method has been investigated, with clustering by the proposed adaptive Sweep and individual vehicle route optimization by VTPSO. Adaptive Sweep is an extension of the popular Sweep clustering in which the starting angle of cluster formation is determined through a heuristic approach based on node angle differences as well as distances from the depot. The experimental results on the benchmark problems revealed that adaptive Sweep is better than standard Sweep. Finally, the proposed adaptive Sweep plus VTPSO is identified as a prominent CVRP solving method when its outcomes are compared with related existing methods on a large number of benchmark problems.
There are several potential future directions that follow from this study. In this study, the angle difference and the distance from the depot are considered to select the starting angle. A scheme that also includes node demand in the selection criteria might improve performance and remains a future study. Moreover, it might be interesting to incorporate this motivation of cluster formation in other cluster-first route-second CVRP methods.
Fig. 2 is the graphical representation of the solutions of the A-n53-k7 problem for standard Sweep plus VTPSO and adaptive Sweep plus VTPSO. In standard Sweep (Fig. 2(a)), the nodes are divided into eight clusters, and Cluster 8 contains only the three remaining nodes with a total demand of 29, although the vehicle capacity is 100. On the other hand, the total CVRP demands are fulfilled by seven clusters in adaptive Sweep through the adaptively selected Ɵs = 220.6° (Fig. 2(b)), starting from node 3. It is visible from the figure that the angle difference between nodes 33 and 3 is large and both are relatively far from the depot. It is also observed that several clusters are common in both solutions: Clusters 3, 4, 5, 6 and 7 of Fig. 2(a) are similar to clusters 7, 1, 2, 3 and 4 of Fig. 2(b), respectively. On the other hand, the nodes of clusters 1, 2 and 8 of Fig. 2(a) are optimally assigned to clusters 5 and 6 in Fig. 2(b). With the same VTPSO route optimization and summing up the individual tour costs, the CVRP costs for standard Sweep and adaptive Sweep are 1174 and 1090, respectively. The figure clearly reveals the effect of adaptive Sweep on the CVRP outcome, since in both cases VTPSO is used for individual vehicle route generation.
TABLE I. DESCRIPTION OF A-VRP BENCHMARK PROBLEMS FOR CVRP
TABLE III. CVRP COST COMPARISON FOR CLUSTERING WITH STANDARD SWEEP AND ADAPTIVE SWEEP ON A-VRP BENCHMARK PROBLEMS
TABLE IV. CVRP COST COMPARISON FOR CLUSTERING WITH STANDARD SWEEP AND ADAPTIVE SWEEP ON P-VRP BENCHMARK PROBLEMS
[Table summary rows: Win/Draw/Lose of adaptive Sweep over standard Sweep 16/1/7; per-method averages 1310.11, 1134.67, 1181.44, 1134.92, 1163.41; Best/Worst counts 0/27, 8/0, 2/0, 12/0, 5/0; pairwise Win/Draw/Lose summary.]
From Table III, it is observed that in most cases adaptive Sweep outperformed the corresponding standard Sweep clustering. It is notable that, for a particular problem, the outperformance of adaptive Sweep is due only to the different starting angle in Sweep, because VTPSO is commonly used for vehicle route optimization in both cases. As an example, for the A-n33-k6 problem, standard Sweep (i.e., Ɵs = 0°) achieved a CVRP cost of 874, while for the same problem adaptive Sweep with the adaptively selected starting angle of 303.18° achieved 751. Adaptive Sweep clustering outperformed standard Sweep clustering in 16 out of 27 cases. Standard Sweep is found better than adaptive Sweep only for the A-n69-k9 problem, for which standard Sweep achieved a CVRP cost of 1254 while adaptive Sweep achieved a slightly larger CVRP cost of 1259. On the basis of the average CVRP cost over 27 problems, adaptive Sweep outperformed standard Sweep; the average CVRP costs for standard Sweep and adaptive Sweep are 1200.26 and 1163.41, respectively. In the case of the P-VRP benchmark problems, adaptive Sweep also outperformed standard Sweep; the average CVRP costs for standard Sweep and adaptive Sweep are 645.38 and 637.17, respectively.
"Computer Science",
"Engineering"
] |
Spectrum of cosmological correlation from vacuum fluctuation of Stringy Axion in entangled De Sitter space
In this work, we study the impact of quantum entanglement on the two-point correlation function and the associated primordial power spectrum of mean square vacuum fluctuation in a bipartite quantum field theoretic system. The field theory that we consider is the effective theory of the axion field arising from Type IIB string theory compactified to four dimensions. We compute the expression for the power spectrum of vacuum fluctuation in three different approaches, namely (1) the field operator expansion (FOE) technique with the quantum entangled state, (2) the reduced density matrix (RDM) formalism with the mixed quantum state and (3) the method of the non-entangled state (NES). For the massless axion field, in all these three formalisms, we reproduce, at the leading order, the exact scale-invariant power spectrum which is well known in the literature. We observe that due to quantum entanglement, the sub-leading terms for these three formalisms are different. Thus, such correction terms break the degeneracy among the analyses of the FOE, RDM and NES formalisms in the super-horizon limit. On the other hand, for the massive axion field, we get a slight deviation from scale invariance and exactly quantify the spectral tilt of the power spectrum at small scales. Apart from that, for the massless and massive axion field, we find distinguishable features of the power spectrum for FOE, RDM, and NES on large scales, which is the result of quantum entanglement. We also find that such large-scale effects are comparable to or greater than the curvature radius of the de Sitter space. Most importantly, in the near future, if experiments probe early universe phenomena, one can detect such small quantum effects. In such a scenario, it is possible to test the implications of quantum entanglement in primordial cosmology.
Introduction
The concept of quantum entanglement is one of the most interesting features that one can study in the context of quantum mechanics. Using this idea one can study the instantaneous physical implication of local measurements [1][2][3]. There are several applications in the framework of quantum field theory in which quantum entanglement plays a significant role. For example, particle creation (EPR Bell pairs [4]) through the bubble nucleation procedure has been explained using the idea of quantum entanglement, where the quantum system is strongly correlated [5][6][7][8]. Also, using the concept of quantum entanglement in QFT one can successfully explain many phenomena such as entropy bounds, phase transitions, anomalies, confinement, thermalization and quantum critical quenches, localization in quantum gravity and the description of the interior of black holes. Apart from that, quantum entanglement has huge applications in the context of quantum information theory, quantum cryptography and interferometry. The von Neumann entropy and Rényi entropy are the appropriate measures of quantum entanglement in the framework of condensed matter theory [9], in quantum information theory and in theoretical high energy physics. The idea of entanglement entropy in the context of quantum field theory is the best possible computational tool to quantify and study the nature of the long range effects of quantum correlation. However, the computation of entanglement entropy for a specific class of quantum field theories was not easy before the method proposed by Ryu and Takayanagi [10]. In that work, the authors computed the entanglement entropy for a strongly coupled field theory set up with a gravity dual using the techniques of holography, and the results are remarkable as they are in agreement with various expectations from the quantum field theory side [11].
Following this success, Maldacena and Pimentel in ref. [12] further proposed an explicit technique to compute the entanglement entropy in the framework of quantum field theory in de Sitter space with the Bunch Davies quantum initial vacuum state. Here, the authors have studied the gravitational dual of the quantum field theory of de Sitter space using holographic techniques in detail. Further, in ref. [13] the authors have extended this computation to the case of α vacua [14]. In refs. [15] and [16] the computation of quantum entanglement entropy and the formation of EPR Bell pairs from stringy Axions were discussed with Bunch Davies and α vacua respectively.
Based on the physical set up used in our previous works [15] and [16], in this paper we have studied the cosmological implications of quantum entanglement by focussing on the long range effects of the two point correlation function computed from the mean square vacuum fluctuation of the stringy Axion field with Bunch Davies and α quantum states as the initial choice of vacua. We expect from this analysis that the signature and impact of quantum entanglement could be manifest in the correlation function even beyond the Hubble horizon scale. Our expectation is mainly due to the fact that the de Sitter expansion of the universe separates a pair of Axions [17][18][19][20], known as an EPR Bell pair, created within a causally connected Hubble region. For this purpose, we use three different techniques: 1. the field operator expansion (FOE) method with an entangled state, 2. the reduced density matrix (RDM) formalism with a mixed state and 3. the non-entangled state (NES) method. We implement the RDM formalism using the previous work done by Maldacena and Pimentel in ref. [12] in the context of de Sitter cosmology. In our computation we have explicitly included the effect of the stringy Axion in the small field regime, and as a result we get perturbatively corrected contributions in the expression for the power spectrum derived using the FOE, RDM and NES formalisms. Such correction terms can be interpreted as quantum effects appearing from the UV complete theory, here a specific type of bipartite quantum field theory driven by the axion. We note that the axion field considered here originates from Type IIB string theory compactified on a Calabi-Yau three fold (CY 3 ), in the presence of an NS5 brane sitting at the bottom of a long throat [21]. Most importantly, in the large wave number limit (small scale or small wave length approximation [22]) we have shown that the results for the power spectrum derived from these three formalisms perfectly match with each other if we consider only the leading order contribution. However, the results for these three formalisms are different if we include the contributions from the next-to-leading and next-to-next-to-leading order. In a way one can say that such additional small perturbative correction terms play a pivotal role in distinguishing between the FOE, RDM and NES formalisms. This is obviously important information because using the present observational data on early universe cosmology [23,24] one can further constrain the present model and also test the appropriateness of these formalisms. Apart from this, for completeness, we have also analysed the behaviour of the power spectrum in the small wave number limit (large scale or large wave length approximation). We find that all three formalisms yield distinctive results in terms of the momentum (quantum number) dependence of the power spectrum order by order. But the lack of observational data in this particular regime does not allow us to test the appropriateness and correctness of the proposed methods. We hope that in the near future, when observational data for this regime become available, our results can further constrain the model and rule out two of the three possibilities discussed here.
We would like to mention here that in our computation of the power spectrum of the mean square vacuum fluctuation we have not treated the quantum fluctuation of the pseudo-scalar Axion field around a classical background field, which is the approach mostly used in the context of cosmological correlations from the early universe. Instead, we have chosen the field operator of the Axion field itself as the quantum operator and computed its fluctuation with respect to a quantum mechanical vacuum state (Bunch Davies and α vacua). Thus, in this paper, we have followed: 1. A complete quantum approach to compute the primordial power spectrum of mean square vacuum fluctuation, which is not usually followed in the context of cosmology.
2. For the specific structure of the axion effective potential , we have computed the explicit form of the corrections which are due to quantum effects.
3. For our calculation, we have used three different approaches at the super horizon time scale, hoping that the quantum corrections, in the small and large wave number limits and when confronted with observations, can select the most effective approach and fix the nature of the quantum corrections. From the cosmological perspective we believe this is a very important step forward.
The plan of the paper is as follows: In section 2, we begin our discussion with the computation of the wave function of the axion field in a de Sitter hyperbolic open chart. For this purpose we have discussed the details of the background de Sitter geometrical set up in subsection 2.1. Further, in subsections 2.2 and 2.3, we have solved for the total wave function of the axion for the Bunch Davies vacuum and the generalised α-vacua respectively. Using these solutions we have derived the cosmological power spectrum of the mean square quantum vacuum fluctuation in section 3. In subsections 3.1.1 and 3.1.2 we have discussed the quantum vacuum fluctuation using the field operator expansion (FOE) formalism with an entangled state for the axion field. We have also derived the explicit form of the wave function in this formalism. This solution is used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation. In subsections 3.2.1 and 3.2.2 we have discussed the quantum vacuum fluctuation using the reduced density matrix (RDM) formalism with a mixed state for the axion field, and we have derived the explicit form of the reduced density matrix in the de Sitter hyperbolic open chart. Further, this result is used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation in the large and small wave number limits for both massless and massive axion fields. In subsections 3.3.1 and 3.3.2 we have studied the quantum vacuum fluctuation using the non entangled state (NES) formalism for the axion field and have discussed the NES formalism in detail. This result has been used to derive the power spectrum by computing the two point quantum correlation function from the mean square vacuum fluctuation. Finally, section 4 has been devoted to the summary, conclusions and future prospects. In Figure (1), we have presented a schematic diagram of the computation algorithm for the long range effect of the cosmological correlation function from quantum entanglement of the axion in the de Sitter open hyperbolic chart.
Wave function of axion in open chart
We briefly review here, for the sake of completeness, the background geometry and the results for the wave function of the axion field. The details can be found in our earlier works [15,16].
Background geometry
We consider a time preserving space-like hypersurface S^2 in the open hyperbolic chart of de Sitter space. As a result, S^2 is divided into two sub-regions, interior and exterior, which are identified as RI (≡ L) and RII (≡ R) respectively. In terms of the Lorentzian signature, an open chart in de Sitter space is described by three different subregions: where H = ȧ/a is the Hubble parameter and dΩ^2_2 represents the angular part of the metric on S^2. Now let us assume that the total Hilbert space of the local quantum mechanical system is described by H, which can be written using a bipartite decomposition in a direct product space as H = H_INT ⊗ H_EXT. Here H_INT and H_EXT are the Hilbert spaces associated with the interior and exterior regions, and they describe the localised modes in RI and RII respectively.
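For reference, the three subregions are covered by the standard open-chart line elements, which take the following form (our conventions for signs and coordinate labels follow the standard literature and may differ slightly in detail from the expressions quoted above):
\[ ds^2_{R/L} = H^{-2}\left[-dt^2_{R/L} + \sinh^2 t_{R/L}\left(dr^2_{R/L} + \sinh^2 r_{R/L}\, d\Omega^2_2\right)\right], \]
\[ ds^2_{C} = H^{-2}\left[dt^2_{C} + \cos^2 t_{C}\left(-dr^2_{C} + \cosh^2 r_{C}\, d\Omega^2_2\right)\right]. \]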
In Figure (2) we have shown the schematic diagram for the geometrical construction and underlying symmetries of the bipartite quantum field theoretic system of de Sitter hyperbolic open chart. Corresponding Penrose diagrams are also drawn for completeness.
Wave function for Axion using Bunch Davies vacuum
Though our prime objective is to compute the cosmological correlation functions for the axion field in de Sitter space, we need the results for the wave function of the axion field in the geometrical set up just mentioned. Note that the axion field under consideration comes from the RR sector of Type IIB string theory compactified on CY_3 in the presence of an NS5 brane [21,26]. The effective action for the axion field is given by [21]: where µ^3 is the mass scale, f_a is the axion decay constant, and the parameter b is defined as b = Λ^4_G/(µ^3 f_a). Here Λ_G depends on the string coupling g_s, the slope parameter α′ and the details of the SUSY breaking parameter. For φ << f_a, the effective potential for the axion can be expanded, and from this expansion we introduce the effective mass of the axion, m^2_axion. Here the axion decay constant follows a (conformal) time dependent profile, which is explicitly mentioned in refs. [].
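A minimal sketch of this small-field expansion, assuming the linear-plus-cosine form of the potential used in [21], reads:
\[ U(\phi) = \mu^3\left[\phi + b\, f_a \cos\!\left(\frac{\phi}{f_a}\right)\right] \approx \mu^3 b f_a + \mu^3 \phi - \frac{\mu^3 b}{2 f_a}\,\phi^2 + \cdots \qquad (\phi \ll f_a), \]
from which the effective mass term can be read off as m^2_axion ∼ −µ^3 b/f_a, up to sign and normalisation conventions.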
In Figure (3) we have explicitly presented the behaviour of the above axion potential with respect to the dimensionless field value φ/f_a.
Figure 3. Behaviour of the axion effective potential (for b = 2) obtained from Type IIB string theory, plotted with respect to the dimensionless field value φ/f_a, where f_a is the axion decay constant.
Further, using Eqn (2.3) the field equation of motion for the axion can be written down, where the scale factor a(t) in the de Sitter open chart is given by a(t) = sinh t/H. Here the Laplacian operator L^2_{H^3} on H^3 can be written explicitly, and it satisfies an eigenvalue equation whose orthonormal eigenfunctions Y_plm(r, θ, φ) can be written in terms of a radial part and an angular part, where Y_lm(θ, φ) are the spherical harmonics. Consequently, the total solution of the equations of motion can be written down, and the total solution V_Q(t, r, θ, φ) for the Bunch Davies vacuum can be expressed in terms of χ_{p,σ}(t), which forms a complete set of positive frequency functions. This can also be written as the sum of a complementary part (χ^(c)_{p,σ}(t)) and a particular integral part (χ^(p)_{p,σ}(t)), as in Eqn (2.11). Explicitly, the solutions for the complementary part and the particular integral part for L are given in Eqn (2.12), where the parameter ν is defined in Eqn (2.14). In Figure (4) we have given a schematic diagram of the computation algorithm for solving the wave function of our universe in the de Sitter hyperbolic open chart for the stringy axion.
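The eigenvalue equation mentioned above takes the standard form for harmonics on H^3; we quote the usual normalisation of the eigenvalue, which we assume coincides with the one used here:
\[ \mathbf{L}^2_{\mathbf{H}^3}\, \mathcal{Y}_{plm}(r,\theta,\varphi) = -\left(1+p^2\right)\, \mathcal{Y}_{plm}(r,\theta,\varphi). \]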
Wave function for Axion using α vacua
Here we use two subspaces of the CPT invariant SO(1,4) isometric de Sitter space, which are identified as RI and RII respectively. Using the result obtained for the Bunch Davies vacuum and performing a Bogoliubov transformation, the mode functions for the α-vacua can be expressed as in Eqn (2.15), where the α-vacua states are defined in Eqn (2.16). In this context, the α-vacua mode function F^(α)_σplm can be expressed in terms of the Bunch Davies mode function V_σplm(r, t, θ, φ) through the Bogoliubov transformation of Eqn (2.17). Here V_σplm(r, t, θ, φ) is the Bunch Davies mode function, defined in Eqn (2.18). After substituting Eq (2.17) and Eq (2.18) in Eq (2.15), we get the expression (2.19) for the wave function. Finally, the solution of the time dependent part of the wave function can be recast as in Eqn (2.20), where we use the shorthand notations P_q and P_q,n for the Legendre polynomials, and where the coefficient functions (α^σ_q, β^σ_q) and (α^σ_q,n, β^σ_q,n), together with the normalization constants N_p and N_p_n for the complementary and particular parts of the solution, are defined through Eqn (2.23).

In this section, we present our computation of the spectrum of the Bunch Davies vacuum and α vacua fluctuations from the two point correlation function. We will be discussing the computation of the two point correlation function and the associated cosmological spectra in three completely different formalisms:
Field operator expansion (FOE) method:
This method applies to entangled quantum states constructed with the wave function of the de Sitter universe for the Bunch Davies and most generalised α vacua. Technically this formalism is based on the wave function χ_I, which we derive explicitly. The cosmological spectrum is characterised by the two point correlation function and the associated power spectrum. Using such an entangled state in this formalism one can construct the usual density matrix for the Bunch Davies and most generalised α vacua.
Reduced density matrix (RDM) formalism:
This formalism is suited to mixed quantum states and is used to construct the reduced density matrix in a diagonalised representation of the Bunch Davies and α vacua by tracing over all possible degrees of freedom in the region R. Technically the formalism is based on the wave function ψ_I, which we derive explicitly.
Non entangled state (NES) formalism:
This formalism applies in the presence of a non entangled quantum state and deals with the construction of the wave function in the region L, in which the total universe is taken to be described. Here also we use the Bunch Davies and most generalised α vacua in the region L. Technically this formalism is based on the wave function φ_I, which we derive explicitly in this paper.
We will now derive the expression for the mean square fluctuation considering both the Bunch Davies vacuum and the α vacua, using the results presented in the previous section. For this computation we will follow the steps outlined below: 1. First of all, we trace out all contributions which belong to the R region. As a result, the required field operator is defined only in the L region. This method is used in the FOE formalism, where the quantum states for the L and R regions are entangled with each other. On the other hand, by performing a partial trace over the region R one can construct the reduced density matrix, which leads to the RDM formalism. Instead, if we use a non entangled quantum state and compute the wave function solely in the L region, we are led to the NES formalism. Note that all three of these methods are used to compute the mean square vacuum fluctuation or, more precisely, the quantum mechanical two point correlation function for the axion and the associated power spectrum.
2. Instead of doing the computation in the |L⟩ basis we use a new basis |L′⟩, obtained by applying a Bogoliubov transformation on |L⟩. Consequently the field operators will act on |L′⟩, and the FOE method is developed in this transformed basis. On the other hand, as mentioned earlier, the transformed basis appears in the expression for the reduced density matrix to be used in the RDM formalism. But in the NES formalism this transformation is not very useful, since in that case the total wave function is solely described by the quantum mechanical state in the L region, and the corresponding Hilbert space is spanned by |L⟩ alone, which forms a complete basis.
3. Further, we will compute the expressions for the mean square quantum vacuum fluctuation and the corresponding cosmological power spectrum after horizon exit using all three formalisms, i.e. FOE, RDM and NES. We will finally consider two limiting situations for the computation of the power spectrum: the long wave length and short wave length approximations. Let us first compute the spectrum of vacuum fluctuation using the field operator expansion (FOE) method. In figure (5) we have presented a schematic diagram of the computation algorithm of the field operator expansion method for the entangled state of the axion in the de Sitter hyperbolic open chart. To compute the vacuum fluctuation using FOE, we focus only on the left region L, as it is completely symmetric to the right region R. We use the time dependent mode functions for the left region L which we have presented in section 2. Thus, instead of a (4 × 4) square matrix (which appears when both sectors are considered), we have a (4 × 2) matrix appearing in the solution of the field equation, where the index J = 1, 2 labels the contribution from region L. To write down the total solution in region L we define the corresponding matrices, where σ = ±1, I = 1, 2, 3, 4 and J = 1, 2. The Fourier mode of the field operator, which is also the total solution of the field equation for the axion (in the presence of the source contribution), can then be expressed in terms of operators Q_I, which represent a set of creation and annihilation operators defined (in section 2) for the Bunch Davies vacuum (α = 0) and the α vacua (α ≠ 0) as in Eqn (3.5). Here we have labeled the time coordinate t by t_L since we are considering the left region L only.
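For definiteness, the identification of the operator set Q_I implied in Eqn (3.5) can be sketched as follows, assuming the standard Mottola-Allen relation between the α-vacuum and Bunch Davies oscillators (with vanishing phase, i.e. real α):
\[ Q_I = \left(a_{\sigma},\, a^{\dagger}_{\sigma}\right) \ \text{for the Bunch Davies vacuum } (\alpha=0), \qquad Q_I = \left(d_{\sigma},\, d^{\dagger}_{\sigma}\right) \ \text{for the } \alpha\text{-vacua } (\alpha\neq 0), \]
\[ d_{\sigma} = \cosh\alpha\; a_{\sigma} - \sinh\alpha\; a^{\dagger}_{\sigma}, \qquad d_{\sigma}\,\lvert \alpha \rangle = 0 . \]
The corresponding relation between the mode functions is then F^(α) = cosh α V + sinh α V*, which we take to be the content of the Bogoliubov transformation quoted in Eqn (2.17).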
To explicitly write down the expression for the amplitude of the normalized power spectrum, we start with the column matrix representation of the time dependent part of the solution of the wave function, given in Eqn (3.6), where the entries of the column matrix for the complementary and particular integral parts of the solution are given by Eqns (3.7)-(3.10). Here N_p and N_p,(n) are the normalization constants for the complementary part and the particular integral part of the solution, as defined in section 2.
Two point correlation function
To compute the expression for the two point correlation function for the vacuum fluctuation let us now concentrate on a single mode with fixed value of the SO(3, 1) quantum numbers p, l and m. As a result the mean square vacuum fluctuation of axion for any generalized arbitrary vacuum state (|Ω ) can be expressed as: Further explicitly writing the expression for the mean square vacuum fluctuation of axion for Bunch Davies vacuum we get the following simplified expressions: where we define the amplitude of the normalized power spectrum of axion as: Further using Eq (3.6) we compute the following expression, which is appearing in the expression for the amplitude of the normalized power spectrum: . (3.14) Using Eq (3.14), the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum can be expressed in all time scales of region L as: . (3.15) However, it is not easy to extract any information from Eqn (3.15) for cosmological predictions. Hence, we consider the superhorizon time scales (t L >> 1) of region L. In such a case, the Legendre functions, appearing in the complementary part and the particular integral part of the time dependent solution, can be approximated as : Consequently, in the superhorizon time scales (t L >> 1) of region L eqn (3.14) can be further simplified as: (3.18) where the time independent function M(p, ν) is defined as: As a result, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum can be expressed as: Here, it is important to note that in the superhorizon time scales (t L >> 1) of region L if we consider the massless case where we fix the mass parameter to be ν = 3/2, then the time dependent contribution can be approximated as: Consequently, in the superhorizon time scales of region L and for the massless axion case, the amplitude of the normalized power spectrum of axion from Bunch Davies vacuum can be expressed as: (3.22) This implies that in the massless case, the amplitude of the vacuum fluctuation gets frozen with respect to the time scale when the associated modes exit the horizon. Further to infer the exact wave number dependence of the amplitude of the normalized power spectrum from Bunch Davies vacuum we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). In this limit it is expected that the power spectrum should match the result obtained for spatially flat universe. Note that in the short wave length approximation the time independent function M(p >> 1, ν) for any arbitrary mass parameter ν can be expressed as: where we have defined a new function G(p >> 1) in the short wave length limit as : (3.24) The above equation implies that for very large p, p n >> 1 one can rewrite this as, G(p) ∼ 1 + · · · , and all the · · · terms can be considered as small correction terms. 
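The superhorizon approximation of the Legendre functions invoked above is, presumably, the standard large-argument asymptotic formula, which we quote here as an assumption about the precise form being used:
\[ \mathcal{P}^{\,ip}_{\nu-\frac{1}{2}}\!\left(\cosh t_L\right) \;\xrightarrow{\;t_L \gg 1\;}\; \frac{\Gamma(\nu)}{\sqrt{\pi}\;\Gamma\!\left(\nu+\tfrac{1}{2}-ip\right)}\left(2\cosh t_L\right)^{\nu-\frac{1}{2}} \;\simeq\; \frac{\Gamma(\nu)}{\sqrt{\pi}\;\Gamma\!\left(\nu+\tfrac{1}{2}-ip\right)}\; e^{\left(\nu-\frac{1}{2}\right)t_L} . \]
In particular, for ν = 3/2 the time dependence reduces to an overall factor proportional to e^{t_L}.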
Also for the massless case (ν = 3/2) and in the short wave length approximation, the time independent function M(p, ν = 3/2) can be further simplified as: Finally, in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalized power spectrum of the axion from the Bunch Davies vacuum in the short wave length limit can be expressed as: Also for the massless case (ν = 3/2), in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalized power spectrum of the axion from the Bunch Davies vacuum in the short wave length limit can be simplified as: Now, we generalize the above results for the two point correlation function and the associated power spectrum to the α vacua. For the α vacua the mean square vacuum fluctuation of the axion in the short wave length limit can be expressed as: where we have defined the amplitude of the normalized power spectrum of the axion in the short wave length limit as in Eqn (3.29), in which P_BD(p, t_L) is defined as: We carry out the same approximations as earlier, and we note that in the superhorizon time scales (t_L >> 1) of region L the amplitude of the normalized power spectrum of the axion in the short wave length limit from the α vacua can be expressed as: where the normalized power spectrum at superhorizon scales for the Bunch Davies vacuum, P_BD(p >> 1, t_L >> 1), is defined in Equation (3.26). Here it is important to note that for α = 0 we reproduce the results obtained for the Bunch Davies vacuum. In figure (6(a)) and figure (6(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from the FOE formalism in the short wave length regime for α = 0 and α = 0.1 and for fixed values of the mass parameter ν (= 3/2, 2, 5/2, 3, 7/2) respectively. In both cases we find very similar behaviour. Additionally, in figure (6(c)) we have depicted the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α (= 0, 0.1, 0.2, 0.3, 0.4). It is clear from this figure that the power spectrum shows two distinct behaviours, in the regions 1/2 < ν < 1 and ν > 1. In the region 1/2 < ν < 1, the amplitude of the normalized power spectrum decreases to a certain value, and just after ν = 1 it increases.
On the other hand, to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from the Bunch Davies vacuum in the long wavelength limit, we need to know the behaviour of the power spectrum for p, p_n << 1. In this limit it is expected that the power spectrum of the axion matches the result obtained for a spatially flat universe. Here the time independent function M(p << 1, ν) for any arbitrary mass parameter ν can be expressed as: where we have defined a new function G(p << 1) in the long wave length limit as in Eqn (3.33). This implies that for very small wave numbers p, p_n << 1 one can write the function as a leading contribution plus corrections, where all the · · · terms are small correction terms.
Also for the massless case (ν = 3/2) and in the long wave length approximation, the time independent function M(p << 1, ν = 3/2) can be simplified further. Finally, in the super horizon time scales (t_L >> 1) of region L, the amplitude of the normalized power spectrum of the axion from the Bunch Davies vacuum in the long wave length limit can be expressed as in Eqn (3.35), and for the massless case (ν = 3/2) this simplifies to Eqn (3.36). Here it is important to note that both Eq (3.35) and Eq (3.36) are valid after horizon exit. Next, we generalize the result for the two point correlation function and the associated power spectrum to the α vacua. For the α vacua the mean square vacuum fluctuation of the axion in the long wave length limit can be expressed as: where the amplitude of the normalized power spectrum of the axion in the long wave length limit is defined as: with P_BD(p << 1, t_L) as defined earlier.
In the super horizon time scales (t L >> 1) of region L the amplitude of the normalized power spectrum of axion in the long wave length approximation from α vacua can be expressed as: where P BD (p << 1, t L >> 1) is defined in Eq (3.35). It may be noted that, for α = 0 we get back the results obtained for Bunch Davies vacuum.
In figure (7(a)), figure (7(b)) and figure (7(c)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed in the FOE formalism in the small wave number regime. The values of α and of the mass parameter ν used here are the same as those taken for the large wave number regime. As expected, the behaviour in the two limiting cases is distinct. However, the characteristics of the α and ν dependences are very similar in both cases.
3.2 Quantum vacuum fluctuation using reduced density matrix (RDM) formalism (with mixed state) In this section, we study the features of the two point correlation function of the quantum vacuum fluctuations and the associated primordial power spectrum using the reduced density matrix formalism. In figure (8) we have presented a schematic diagram of the computation algorithm of the reduced density matrix formalism for the mixed state of the axion in the de Sitter hyperbolic open chart.
Reduced density matrix (RDM) formalism
We first write down the Fourier mode of the field operator, which is also the total solution of the field equation for axion in presence of source contribution. We start directly from the solution obtained in Eqn (2.20) and rewrite it in terms of the following matrix equation: where for the complementary part of the solution we have defined the following matrices: Similarly for the particular solution, we define the following matrices: where σ = ±1, q = R, L and I, J = 1, 2, 3, 4. The redefined normalization constant for the particular part of the solution N p,(n) can be expressed as, N p,(n) = 2 sinh πp n N pnσ p 2 − p 2 n . Further using Eqn (3.40) the Bunch-Davies mode function can be written as: where a I = (a σ , a † σ ) represents a set of creation and annihilation operators. We also define the following operators: where a (c) σ,n , a (p) † σ,n ) are the set of creation and annihilation operators which act on the complementary and particular part respectively. Thus, the operator contribution for the total solution is: where by inverting Eqn (3.44) we have expressed: The inverse matrices are defined as: where σ = ±1, q = R, L and I, J = 1, 2, 3, 4. For further computation, α-vacua are defined in terms of Bunch Davies vacuum state as: It is to be noted that for α = 0 we get, |α = 0 = |0 = |BD . Moreover, we can also write the R and L vacua as: with subscripts (c) and (p) representing the complementary and particular part respectively. Further assuming the bipartite Hilbert space (H α := H R ⊗ H L ) one can also write the α-vacua in terms of the R and L vacuum as: 50) where the matrices m ij andm ij,n are defined for the complementary and particular part of the solution obtained for Bunch Davies vacuum state. In other words by setting α = 0 we get the following expression for the Bunch Davies quantum state: Also the creation and annihilation operators for the R and L vacuum are defined in terms of new b type of oscillators using Bogoliubov transformation as: Here γ qσ , δ qσ ,γ qσ,n andδ qσ,n are the coefficient matrices. For our further computation we use the definition of α-vacuum state (and Bunch Davies vacuum state), which is very useful to compute long range cosmological correlation functions in de Sitter space. In the context of α-vacua the creation and annihilation operators are defined in terms of the constituents of R or L vacuum state as: where we use the definition of creation and annihilation operators in Bunch Davies vacuum as mentioned in Eq (3.53) and Eq (3.52). In this computation it is important to note that, under Bogoliubov transformation the original matrix γ qσ , δ qσ ,γ qσ,n andδ qσ,n used for Bunch Davies vacuum transform ( for α-vacua) as: Thus, after the Bogoliubov transformation, α-vacua state can be written in terms of R and L vacua as: Herem ij andm ij,n represent the entries of the matrices corresponding to the complementary and particular solution respectively and we will compute them by demanding d σ |α = 0, and keeping only linear terms of creation operators. This directly yields the following: cosh αm ij,nγjσ,n − sinh αm ij,nδjσ,n + cosh αδ * iσ,n − sinh αγ * iσ,n = 0∀ n. 
(3.59) From these two equations, the matrices corresponding to the complementary and particular parts of the solution can be expressed accordingly. Substituting the expressions for γ, δ, γ_n and δ_n, we finally obtain the entries of the matrices m̃ for i, j = R, L, where we define the T matrices in Eqn (3.64) and give the corresponding entries of the T matrices explicitly. For the massless (ν = 3/2) axion case, we obtain simplified expressions, in which we define the T^(3/2) matrices in Eqn (3.71) together with the corresponding entries of the T^(3/2) matrices. In the above analysis, we have considered the small axion mass (ν^2 > 0) limiting situation with an arbitrary parameter α, which corresponds to the Bunch Davies vacuum state for the choice α = 0. For completeness, we also consider the large axion mass (ν^2 < 0, with ν → −i|ν|) limiting situation, which is very important for studying the imprints of quantum entanglement in cosmological correlation functions. In this large axion mass limiting situation, we actually consider a specific window of the SO(1,3) principal quantum number, bounded within the range 0 < p < |ν|.
Consequently, the entries of the coefficient matrix m̃ can be approximated as: which for α = 0 yields a simplified expression for m̃ with the Bunch Davies vacuum state. We note that for a general value of α and for large axion mass (ν^2 < 0, with ν → −i|ν|), we always get a real value for m̃_RR and an imaginary value for m̃_RL. This is an important observation for our further analysis.
From the perspective of cosmological observation in the superhorizon time scale, we again consider two further limiting situations: (a) the large wave number (p >> 1) or small wave length limit, and (b) the small wave number (p << 1) or large wave length limit.
Using these two limiting situations we can simplify the expression for the entries of the coefficient matrixm considering both small and large axion mass. We start with the expressions for small axion mass limit in large wave number (p >> 1) approximation: where we have defined the T matrices for p >> 1 limit as: ( 3.80) and the corresponding entries of the T matrices for p >> 1 limit are given by the following simplified expressions: For massless (ν = 3/2) axion, we get the following simplified expressions: where the T (3/2) matrices (for p >> 1) are given by: 3.87) and the corresponding entries of the T (3/2) matrices are given by : On the other hand, for small axion mass and for large wave number (p << 1) we have: where theT matrices are defined as: 3.94) and the corresponding entries of theT matrices (for p << 1 ) are given by : For the case of massless (ν = 3/2) axion, we get the following simplified expressions: with theT matrices defined as: and the corresponding entries of theT (3/2) matrices (for p << 1 ) are given by: For further analysis, it is convenient to change over to a suitable basis by tracing over all possible contributions from R and L region. To achieve this we perform another Bogoliubov transformation by introducing new sets of operators : satisfying the following conditions: Using these operators we write the α-vacuum state in terms of new basis represented by the direct product of R and L vacuum state as: p,n are to be determined shortly. We note that the the relationship between the new and the old basis is given by: The commutation relations between the creation and annihilation operators corresponding to the new sets of oscillators is taken as: These operations act on the α vacuum state in the following way: Further, one can express the new c type annihilation operators in terms of the old b type annihilation operators as: (3.112) Note thatŨ q ≡ diag (ũ,ū),Ṽ q ≡ diag (ṽ,v) ,Ū q,n ≡ diag Ũ n ,Ū n ,V q,n ≡ diag Ṽ n ,V n . From Equations (3.106) and (3.111), we obtain the following sets of homogeneous equations: For complementary solution :
Further, for the massless (ν = 3/2) axion field we get the following simplified expressions: In the large axion mass (ν 2 < 0 where ν → −i|ν|) limit the two solutions for the γ (α) p and Γ (α) p,n for α vacuum are given by: In this limit, we divide the total window of p into two regions, given by 0 < p < |ν| and |ν| < p < Λ C . In these regions of interest, the two solutions for γ (α) p in presence of α vacuum can be approximately written as: for 0 < p < |ν| e ∓πp (1 + tan α) 1 + tan α e 2π|ν| (1 + tan 2 α e −2πp ) for |ν| < p < Λ C /2π. (3.125) and for 0 < p < |ν| e ∓πpn (1 + tan α) 1 + tan α e 2π|ν| (1 + tan 2 α e −2πpn ) for |ν| < p < Λ C /2π. (3.126) Further, in the limit p >> 1 we get the following simplified results: Γ (α) p,n ≈ i 2 cosh 2 α + sinh 2 α e 2iπν + sinh 2α cos πν e iπν sech 2 α | cosh 2πp n | ± | cosh 2πp n | + 4 For massless (ν = 3/2) axion field this simplifies to : On the other hand, in the limit p << 1 we get the following results: which, for massless (ν = 3/2) axion field , simplifies to: 3.134) and are very useful information for the computation of spectrum of vacuum fluctuation. Further, the Fourier mode of the total compact solution in the region L in case of α vacua can be re-expressed in terms of the oscillators defined in the new basis (c,C) as well as the SO(1,3) quantum numbers (p, l, m) as: where the total wave functionψ I T is a column matrix and for the complementary and particular part of the solution the inverse matrix (G −1 ) I J and G −1 (n) I J are defined as: When we trace out the degrees of freedom over the right part of the Hilbert space, we obtain the following reduced density matrix for the left part of the Hilbert space : (ρ L (α)) p,l,m = Tr R |α α|, (3.137) where the α vacuum state is written in terms ofc type of oscillators as: , (3.138) Substituting Eq (3.138) in Eq (3.137), we get the expression for the reduced density matrix for the left part of the Hilbert space: p,n | 2r |n, r; p, l, m n, r; p, l, m| 3.140) and the states |k; p, l, m and |n, r; p, l, m are expressed in terms of the new quantum state |L as: Note that for α = 0, we get back the result obtained for Bunch Davies vacuum.
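Since several of the intermediate expressions above are only partially visible, the structure being used can be summarised schematically: after the second Bogoliubov transformation the vacuum state factorises mode by mode, and tracing over R leaves a diagonal, thermal-like density matrix in L. A sketch of this diagonal form (Bunch Davies limit α = 0 and complementary sector shown; for general α the coefficient is γ^(α)_p, and the particular sector carries an analogous factor built from Γ^(α)_{p,n}) is:
\[ \left(\rho_L\right)_{p,l,m} = \mathrm{Tr}_R\,\lvert \mathrm{BD}\rangle\langle \mathrm{BD}\rvert \;\propto\; \left(1-\lvert\gamma_p\rvert^2\right)\sum_{k=0}^{\infty}\lvert\gamma_p\rvert^{2k}\;\lvert k;p,l,m\rangle\langle k;p,l,m\rvert . \]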
Two point correlation function
In this subsection we explicitly compute the two point correlation function and its significant role to obtain long range effect in the cosmological correlation using the generalised α and Bunch Davies vacuum. For this purpose and using the expression for the reduced density matrix, derived in the previous subsection, we first compute the mean square quantum vacuum fluctuation, which is expressed for α vacua as: Complementary part |Γ (α) p,r,s | 2r s, r; p, l, m|φ L (t L )φ † L (t L ) |s, r; p, l, m In the above, we have used the shorthand notation φ L (t L ) = φ Lplm (t) for the field. Note that, setting α = 0 in Eq (3.142) we get the result for the Bunch Davies vacuum which is given by: The contributions from the complementary and the particular part, as appearing in the right hand side of Eq (3.142) for each n-particle state are found to be: whereψ L T is given by : with the entries of the column matrix for the complementary and particular integral part of the solution being: The normalization constants N c and N c,(n) for the complementary part and particular integral part of the solution is defined as: The expression for (ū,v) for complementary solution and (Ū n ,V n ) for particular solution are given by the following expressions: For complementary part :
(3.154) For the particular part, the corresponding expressions hold for (Ū_n, V̄_n). The quantities (m_LR, m_RR) and (γ_p, Γ_p,n) appearing in the complementary and particular parts of the solution are defined earlier in equations (3.62-68) and equations (3.119-120) respectively. We have used Eq (3.113), Eq (3.114), Eq (3.115) and Eq (3.116), and we have also imposed the normalization conditions |ū|^2 − |v̄|^2 = 1 and |Ū_n|^2 − |V̄_n|^2 = 1. Note that the structural form of the equations for α = 0, corresponding to the Bunch Davies vacuum, is exactly the same as that for the α vacua. The significant changes appear only when we explicitly consider the entries of (m_LR, m_RR) and (γ_p, Γ_p,n) for the complementary and particular parts of the solution. Now, substituting Eq. (3.144) and Eq. (3.145) in Eq (3.142), we get the following simplified expression for the mean square quantum vacuum fluctuation for the α vacua: Setting α = 0, we get the expression for the Bunch Davies vacuum: We note that, to derive this expression, we have used the following identities: The expression for |ψ^L_T|^2 then comes out to be: Here also, by fixing the parameter α = 0, one can get the expression for the square of the magnitude of the wave function for the Bunch Davies vacuum in the newly defined Bogoliubov transformed basis. Using Eq (3.161), the amplitude of the normalised power spectrum of the axion from the generalised α vacua can be expressed in all time scales of region L as:
(3.162) However, the above equation is very complicated to extract any physical information for further cosmological predictions. For this reason we consider the superhorizon time scales (t L >> 1) of region L, in which the Legendre functions appearing in the complementary part and the particular integral part of the time dependent solution can be approximated as the following simplified form: Consequently, in the superhorizon time scales (t L >> 1) of region L eqn (3.162) can be simplified for as: where the time independent function Q(p, α, ν) for generalised α vacua is defined as: As a result, in the superhorizon time scales (t L >> 1) of region L the amplitude of the normalised power spectrum of axion from generalised α vacua can be expressed as: We note that in the superhorizon time scales (t L >> 1) of region L if we consider the massless case by fixing the mass parameter ν = 3/2, then the time dependent contribution can be approximated as: From this we infer that for an arbitrary value of the parameter ν we can write: Consequently, in the super horizon time scales (t L >> 1) of region L considering the massless case (ν = 3/2) the amplitude of the normalised power spectrum of axion from generalised α vacua can be expressed as: Like the result in the case of field operator expansion method derived in the previous section, this result also implies that in the massless case (ν = 3/2) amplitude of the vacuum fluctuation gets frozen with respect to the time scale when the associated modes exit horizon. Further to know the exact wave number dependence of the amplitude of the normalised power spectrum from generalised α vacua we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). In this limit it is expected that the power spectrum of axion should match with the result obtained for spatially flat universe. In the short wave length approximation the time independent function Q(p >> 1, α, ν) for any arbitrary mass parameter ν can be expressed for generalised α vacua as: where we have already defined the function G(p >> 1) in the earlier section. Here for very large wave number p, p n >> 1 one can write, G(p >> 1) ∼ 1 + · · · , where all · · · are small correction terms. This also implies to the interesting fact that for large wavenumber limit and for any values of the parameter α, the time independent function Q(p >> 1, α, ν) computed for generalised α vacua exactly matches with the result obtained for Bunch Davies vacua in the earlier section i.e. M(p >> 1, ν). This means that the final result is independent of the choice of the parameter α.
For the massless case (ν = 3/2) in the short wave length approximation, the time independent function Q(p >> 1, α, ν = 3/2) can be simplified further. Additionally, we note that the important contribution appearing in the normalised power spectrum for the axion can also be simplified in the large wave number limit. Finally, in the super horizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion in the short wave length approximation can be expressed as in Eqn (3.174). For the massless case (ν = 3/2), in the same time scale and the same approximation, the above amplitude takes the form of Eqn (3.175). It is important to note that both Eq (3.174) and Eq (3.175) are valid after horizon exit. From the same results, we also observe that the normalised power spectrum from the generalised α vacua, at leading order, computed from the reduced density matrix formalism is exactly the same as that obtained in the previous subsection using the field operator expansion method. For completeness, we present the result for the two point correlation function and the associated power spectrum for the Bunch Davies vacuum by fixing the parameter α = 0 in our previous equations; for the massless case (ν = 3/2) this can be simplified further. In figure (9(a)) and figure (9(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation computed from the RDM formalism in the large wave number regime, for α = 0 and α = 0.1 and for fixed values of the mass parameter ν respectively. Additionally, in figure (9(c)) we have depicted the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. From the figures, we observe that the power spectrum shows two distinctive behaviours, in the regions 1/2 < ν < 1 and ν > 1. In the region 1/2 < ν < 1 the amplitude of the power spectrum decreases to a certain value, and just after ν = 1 it increases. Also note that in the large wave number regime, the power spectrum obtained from the RDM formalism behaves in the same way as that obtained from the FOE formalism in the previous section.
On the other hand, to know the exact wave number dependence of the amplitude of the normalised power spectrum from the generalised α vacua in the long wave length approximation, we need to know the behaviour of the power spectrum for p, p_n << 1. In this regime we expect that the power spectrum of the axion should match the result obtained for a spatially flat universe. The time independent function Q(p << 1, α, ν) for mass parameter ν ≠ 3/2 can be expressed for the generalised α vacua as in Eqn (3.178), where the function G(p << 1) is defined for ν ≠ q/2 in Eqn (3.179). Here for very small wave numbers p, p_n << 1 one can write the function as a leading contribution plus corrections, where all · · · are small correction terms. For the Bunch Davies vacuum, once we fix α = 0, we find that the function G(p << 1) depends only on the mass parameter ν for the massive axion field.
On the contrary, for the case where ν = n/2 (which also includes the massless situation ν = 3/2), the expression G(p << 1) diverges due to the overall factor 1/|cos πν|. But we can avoid such unwanted divergent contributions by rewriting all the expressions for p, p_n << 1 with ν = n/2 that we have mentioned earlier. In such a situation, for the massless case, the time independent function Q(p << 1, α, ν = 3/2) can be further simplified as in Eqn (3.180), where the function G(p << 1) is defined for ν = 3/2 accordingly. Here for very small wave numbers p, p_n << 1 with ν = n/2, and in particular ν = 3/2, one can write the function as a leading contribution plus small corrections. For the Bunch Davies vacuum we get the same result, since the function G(p << 1) for the massless axion field (ν = 3/2) is independent of the parameter α. Moreover, it is important to note that the contribution appearing in the normalised power spectrum for massive (ν ≠ 3/2) and massless (ν = 3/2) axion fields can be simplified in the small wave number limit. Thus, in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion from the generalised α vacua in the small wave number limit can be expressed as in Eqn (3.184). For the massless case (ν = 3/2) in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion from the generalised α vacua in the small wave number limit can be simplified accordingly. For the Bunch Davies vacuum state (α = 0), the mean square vacuum fluctuation of the axion can be expressed correspondingly, and for the massless case (ν = 3/2) in the superhorizon time scales (t_L >> 1) of region L the amplitude of the normalised power spectrum of the axion from the Bunch Davies vacuum in the small wave number limit simplifies as well. In figure (10(a)) and figure (10(c)) we have shown the behaviour of the power spectrum for α = 0 and α = 0.1 and for fixed values of the mass parameter ν = 1, 2, 3, 4, 5 respectively. Moreover, in figure (10(e)) we have presented the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. For the mass parameter dependence we find a distinctive feature of the RDM formalism compared to the FOE formalism, which we discussed in the last subsection, and the NES formalism, which we discuss in the next subsection. From the plot, it is observed that for ν = 1/2, 3/2, 5/2, 7/2 we get distinct sharp peaks of constant but different magnitudes. On the other hand, in figure (10(b)) and figure (10(d)) we have shown the behaviour of the power spectrum in the small wave number regime for α = 0 and α = 0.1 with fixed values of the mass parameter ν = 1/2, 3/2, 5/2, 7/2, 9/2. Here, as the power spectrum is independent of the wave number, we get a constant magnitude for each value of the mass parameter ν.
Quantum vacuum fluctuation with non entangled state (NES)
In this subsection, we describe the quantum vacuum fluctuation and its cosmological consequences using the non entangled state (NES) formalism. In this formalism we assume that the wave function of the full de Sitter universe is described in the region L, so we do not use any information from the region R. In figure (11) we have presented a schematic diagram of the computation algorithm of the non entangled state formalism for the axion in the de Sitter hyperbolic open chart.
Non entangled state (NES) formalism
In the region L the total wave function of the universe is described by the non entangled state (NES), and for the generalised α vacua it is given by: (3.188) where the normalisation factors Ñ_b and Ñ_b,(n) are given by: (3.190) We can also express the total wave function of the universe in terms of the oscillator mode expansion as given by:
Two point correlation function
Using the above wave function we can further derive the mean square vacuum fluctuation through the two point correlation function of Eqn (3.192), where P(p, α, t_L) is the power spectrum for the non entangled state involving the generalised α vacua. We can also define the normalised power spectrum for the non entangled state as in Eqn (3.193). To quantify the normalised power spectrum for the non entangled state, it is crucial to derive the expression for the square of the magnitude of the total wave function of the universe in the region L, which is given by Eqn (3.194). Further, substituting the expressions for the normalisation factors, the above equation can be recast as Eqn (3.195). Consequently, the normalised power spectrum for the non entangled state with the generalised α vacua can be written as in Eqn (3.196). However, to extract further physical information from Eqn (3.196) for cosmological predictions, we consider the superhorizon time scales (t_L >> 1) of region L. In this limit, the Legendre functions appearing in the complementary part and the particular integral part of the time dependent solution can be approximated to the simplified form of Eqn (3.198). Thus, in the superhorizon time scales (t_L >> 1) of region L, Eqn (3.195) can be further simplified as in Eqn (3.199), where the time independent function K(p, α, ν) for the generalised α vacua is defined accordingly. Like our result derived in the previous section, this result also implies that for the massless case (ν = 3/2) the amplitude of the vacuum fluctuation gets frozen with respect to the time scale when the associated modes exit the horizon. Further, to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from the generalised α vacua, we need to know the behaviour of the power spectrum at very short wavelengths (p, p_n >> 1). In this limit, it is expected that the power spectrum of the axion in the non entangled case should match the result obtained for a spatially flat universe. The time independent function K(p, α, ν) in this limit and for an arbitrary mass parameter ν can be expressed in terms of the function U(p >> 1), the quantum correction factor for the axion in the short wave length limit. Thus, for very large wave numbers (p, p_n >> 1), we can write U(p) ∼ 1 + · · ·, where all · · · are small correction terms. This also implies that for large wavenumbers and for any value of the parameter α, the time independent function K(p >> 1, α, ν), computed with the generalised α vacua, matches at leading order the result M(p >> 1, ν) obtained for the Bunch Davies vacuum in the previous subsections. Also for the massless case (ν = 3/2), the time independent function K(p, α, ν = 3/2) in the short wave length limit can be simplified further. Finally, in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum of the axion from the generalised α vacua for the non entangled state in the short wave length limit can be expressed as in Eqn (3.208). For the massless case (ν = 3/2) in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum in the short wave length limit can be simplified as in Eqn (3.209). Note that both Eq (3.208) and Eq (3.209) are valid after horizon exit. From these results we also observe that the power spectrum computed from the non entangled state formalism is the same, at the leading order approximation, as that computed from the FOE and RDM formalisms in the earlier subsections. This holds in the large wavenumber limit of the superhorizon time scale in region L.
The result for the two point correlation function and the associated power spectrum for the Bunch Davies vacuum can be obtained by setting α = 0 in the above equation and is found to be: For the massless case (ν = 3/2) it reduces to: In figure (12(a)) and figure (12(b)) we have presented the behaviour of the power spectrum of the mean square vacuum fluctuation computed in the NES formalism for the large wave number regime. This is shown for α = 0 and α = 0.1 and for fixed values of the mass parameter ν = 3/2, 2, 5/2, 3, 7/2 respectively. For both values of α we get very similar behaviour. In figure (12(c)) we have shown the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of the parameter α = 0, 0.1, 0.2, 0.3, 0.4. Here the mass parameter dependence shows two distinct features, in the regions 1/2 < ν < 1 and ν > 1. In the region 1/2 < ν < 1 the amplitude of the normalised power spectrum initially decreases, and just after ν = 1 it increases.
However, to examine the behaviour of the power spectrum in the long wavelength region and in the superhorizon time scale (t_L >> 1), we take the limit p << 1. In the long wave length limit, the time independent function K(p, α, ν) for any arbitrary mass parameter ν can be expressed (for α vacua) as: where the function U(p << 1), the quantum correction factor for the axion in the long wave length limit, is given by: For the massless case (ν = 3/2), this can be further simplified to: Moreover, in the superhorizon time scales (t_L >> 1) of region L, the amplitude of the normalised power spectrum (for α vacua) for the non entangled state in the long wave length limit can be expressed as: Also, for the massless case (ν = 3/2), this reduces to: The result for the Bunch Davies vacuum is obtained by fixing α = 0 in the above equation and is expressed as: which for the massless case (ν = 3/2) reduces to: In figure (13(a)) and figure (13(b)) we have shown the behaviour of the power spectrum of the mean square vacuum fluctuation in the NES formalism in the small wave number regime for α = 0 and α = 0.1 with fixed values of the mass parameter ν = 3/2, 2, 5/2, 3, 7/2 respectively. Note that in both cases we find very similar behaviour. Also, in figure (13(c)) we have shown the behaviour of the power spectrum with respect to the mass parameter ν for fixed values of α = 0, 0.1, 0.2, 0.3, 0.4. In this case we again observe two distinct regions of mass parameter dependence.
We have explicitly presented the comparison among the FOE, RDM and NES formalisms for α vacua in table (1). The same table is valid for the Bunch Davies vacuum when α = 0. We have quoted the differences among the findings of these formalisms for the primordial power spectrum from the mean square vacuum fluctuation at large and small scales.
Summary
To summarize, in this work, we have addressed the following issues: • We have explicitly studied the power spectrum of the mean squared vacuum fluctuation for the axion field using the concept of quantum entanglement in de Sitter space. The effective action for the axion field used here has its origin in Type IIB string theory compactified to four dimensions. For our analysis, we have chosen two initial vacuum states, i.e. the Bunch Davies vacuum and a generalised class of α vacua. The power spectrum of the mean squared vacuum fluctuation is computed using three distinct formalisms: (1) field operator expansion (FOE), (2) reduced density matrix (RDM) and (3) non entangled state (NES). In all three cases, the computation starts with two open charts in the hyperbolic manifold of de Sitter space, consisting of two regions: L and R. Though the starting point is the same, the constructions of these three formalisms are different from each other and have their own physical significance. Each formalism has been discussed in the text of the paper, and some details of the approximations used in them are presented in the appendix. Their similarities and differences are presented in a table.
• In the FOE formalism we solve for the wave function in the region L, and using this solution we compute the general expression for the mean square vacuum fluctuation and its quantum correction in terms of the two point correlation function. The result is evaluated at all momentum scales. We considered two limiting approximations in the characteristic momentum scales, i.e. the large wave number (small wave length, in which the corresponding scale is smaller than the curvature radius of the de Sitter hyperbolic open chart) regime and the small wave number (long wave length, in which the corresponding scale is larger than the curvature radius of the de Sitter hyperbolic open chart) regime. We have observed distinctive features in the power spectrum of the mean squared vacuum fluctuation in these two different regimes. In the large wave number (small wave length) regime we found that the leading order result for the power spectrum is consistent with the known result for the observed cosmological correlation function in the super horizon time scale. The correction to the leading order result that we computed for the power spectrum can be interpreted as a sub-leading effect in the observed cosmological power spectrum. This is important information from the perspective of cosmological observation, since such effects, possibly due to quantum entanglement of states, can play a big role in breaking the degeneracy of the observed cosmological power spectrum in the small wave length regime. On the other hand, in the long wave length regime we found that the power spectrum follows a completely different momentum dependence in the super horizon time scale. Since in this regime and in this time scale we currently lack adequate observational data on the power spectrum, we are unable to confront our result with observation. But our result for the power spectrum in the long wave length limit and super horizon time scale can be used as a theoretical probe to study the physical implications and the observational cosmological consequences in the near future. Our result also implies that the mean square vacuum fluctuation for the axion field, in the super horizon time scale, gets enhanced in the long wave length regime and freezes in the small wave length regime. We also observe that for a massive axion, the power spectrum is nearly scale invariant at all momentum scales. On the other hand, for a massless axion we observe exact scale invariance only in the large wave number (small wave length) regime and for the Bunch Davies initial quantum state. For the generalised α initial state, we find a slight modification of the corresponding power spectrum of the mean square vacuum fluctuation. The modification factor is proportional to exp(−2α), which is valid for all values of the parameter α; for example, α = 0.1 gives exp(−0.2) ≈ 0.82, i.e. roughly an 18% suppression of the amplitude. It also implies that for large values of the parameter α we get an additional exponential suppression of the power spectrum. This information can be used to distinguish between the role of the Bunch Davies vacuum (α = 0) and any α vacua quantum initial state in the analysis of observational data.
• In RDM formalism, the wave function for the axion field is solved in L and R regions of the de Sitter open chart. This solution has been used to compute the mean square vacuum fluctuation and its quantum correction for both Bunch Davies and α vacuum state.
Corresponding results are evaluated at all momentum scales by partially tracing out all the information from the region R. As in the case of FOE, we considered the small and large wavelength approximations in the characteristic momentum scales and found distinct features in the corresponding power spectrum. In the small wave length regime, the leading order result in the superhorizon time scale again matches the known result (same as FOE). However, the sub-leading order result for the power spectrum differs from the result obtained in the FOE formalism, which distinguishes the two approaches. Moreover, in the long wave length regime the power spectrum has a completely different momentum dependence compared to the FOE formalism. We also notice that the enhancement of the mean square vacuum fluctuation for the axion field in the long wave length regime is different (slower) in nature compared to the FOE formalism, but the freezing in the short wavelength regime is of the same nature. The observation on the scale invariance of the power spectrum in this formalism remains similar to that in the FOE formalism.
• In the last formalism, i.e. NES, the wave function of the axion field is solved in the region L of the de Sitter hyperbolic open chart. With the help of this solution, we computed the mean square vacuum fluctuation using the Bunch Davies and α vacuum state configurations. The corresponding result is evaluated at all momentum scales. As in the previous two cases, here also we considered two limiting approximations, i.e. the large wave number (small wave length) regime and the small wave number (long wave length) regime. We again observed distinctive behaviour in the power spectrum in these two different regimes. In the large wave number (small wave length) regime, the leading order result for the power spectrum matches the known result for the observed cosmological correlation function, just as in the FOE and RDM formalisms. However, the sub-leading order result is completely different from both the FOE and RDM formalisms. Thus, it is the sub-leading terms which distinguish these formalisms from each other, and they can be confronted with future observational data. On the other hand, in the small wave number (long wave length) regime, even the leading order result for the power spectrum differs in momentum dependence from the results obtained in the FOE and RDM formalisms. Also, the nature of the enhancement of the mean square vacuum fluctuation in the NES formalism is found to be different from that in the FOE and RDM formalisms, but the nature of the freezing and the observation on the scale invariance of the power spectrum remain the same in all three cases.
• For completeness, we discuss the actual reason for the results obtained for the power spectra from the quantum entangled state appearing in the FOE formalism and from the mixed state used to construct the RDM formalism. To do so, we consider two subsystems, L and R, using which one can construct the quantum mechanical state vector of the axion field, |Ψ_axion⟩. In our computation, these subsystems are defined in the regions L and R respectively of the de Sitter hyperbolic open chart. Now using this state vector of the axion field we can define the density matrix as: ρ_axion = |Ψ_axion⟩⟨Ψ_axion|, (4.1) in both subsystems, L and R, for the FOE and RDM formalisms, and only in the system L for the NES formalism. Using this density matrix we can express the expectation value (for the total system) of a quantum mechanical operator O_axion, applicable to the FOE and RDM formalisms, as in Eqn (4.2). This is an important observation as it is related to the measurement and quantification of any physical cosmological observable in the quantum regime. But in the case of the NES formalism one can rewrite Eq (4.2) as: where the operator O^L_axion, defined solely in the region L, is given by the following expression for the NES formalism: (4.4) Also in the NES formalism the density matrix ρ^L_axion for the region L is described by the following expression: (4.5) This implies that in the NES formalism the physical operator is solely described by the information from the region L, and consequently the expectation value of such an operator satisfies the following condition: The above analysis can help us to explain the differences between the power spectra of the mean square vacuum fluctuation obtained from the FOE, RDM and NES formalisms on large scales (or in the small wave number, i.e. large wave length, regime). It clearly points towards the fact that in the FOE and RDM formalisms the creation and annihilation operators for the axion field include a new set of creation and annihilation operators coming from the Bogoliubov transformation from one quantum basis to another. This means that the field operator in the FOE formalism also involves these extra creation and annihilation operators, even if the computation is being performed on a particular specified temporal slice defined in the region L of the Hilbert space. On the other hand, after applying the partial trace over the degrees of freedom of the region R, the mixed quantum state, using which we formulate the RDM formalism, is prepared by the creation and annihilation operators in the region L of the Hilbert space. Thus, in the RDM formalism, the field operator is only defined in the region L and not in the region R of the Hilbert space. This implies that the field operator defined before partially tracing over the degrees of freedom of region R in the FOE formalism is different from the field operator in region L used in the RDM formalism, since in the latter case we have performed the partial trace over the degrees of freedom in region R. Thus, any general quantum mechanical operator defined in the framework of FOE is not the same as that of the RDM formalism.
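For clarity, the two expectation values being contrasted in this item can be written out explicitly (a restatement of Eqn (4.1) and the subsequent relations, with the kets and traces made explicit):
\[ \rho_{\rm axion} = \lvert \Psi_{\rm axion}\rangle\langle \Psi_{\rm axion}\rvert, \qquad \langle \mathcal{O}_{\rm axion}\rangle = \mathrm{Tr}\left(\rho_{\rm axion}\,\mathcal{O}_{\rm axion}\right) \quad \text{(FOE and RDM)}, \]
\[ \langle \mathcal{O}^{L}_{\rm axion}\rangle = \mathrm{Tr}_{L}\left(\rho^{L}_{\rm axion}\,\mathcal{O}^{L}_{\rm axion}\right) \quad \text{(NES)}, \]
with ρ^L_axion constructed solely from the region-L wave function.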
Before we conclude, we point out that apart from the quantification of the mean square vacuum fluctuation in the formalisms we discussed here, we have also computed the entanglement entropy using von Neumann measure and the Renyi entropy in our previous work [15,16].
where the time independent function M(p, ν) is defined as:
A.1 For large wave number
Further, to know the exact wave number dependence of the amplitude of the normalized power spectrum from the Bunch Davies vacuum, we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). After taking this limit it is expected that the power spectrum of the axion should match the result obtained for a spatially flat universe. In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the short wavelength limit (p, p n >> 1), which appear explicitly in the expression for the amplitude of the normalized power spectrum from the Bunch Davies vacuum: Further, we apply Stirling's formula to approximate the Gamma functions for large wavenumbers p, p n >> 1 to simplify the expression for the power spectrum: Consequently, we get the following simplified expressions in the large wavenumber (p, p n >> 1) limit: As a result, in the short wave length approximation the time-independent function M(p >> 1, ν) for any arbitrary mass parameter ν can be expressed as: where we define a new function G(p >> 1) in the short wave length limit as given by:
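For reference, the Stirling-type asymptotics invoked here are the standard ones; the specific Gamma-function combinations simplified in the (missing) displayed equations may differ, so the following is only an illustrative sketch:

```latex
\Gamma(z) \;\simeq\; \sqrt{2\pi}\, z^{\,z-\frac{1}{2}}\, e^{-z}
\qquad (|z|\gg 1),
\qquad\text{e.g.}\quad
\left|\Gamma(\nu + i p)\right|^{2}
\;\xrightarrow[\;p\gg 1\;]{}\; 2\pi\, p^{\,2\nu-1}\, e^{-\pi p}.
```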
A.2 For small wave number
Similarly, to know the exact wavenumber dependence of the amplitude of the normalised power spectrum from the Bunch Davies vacuum in the long wavelength limit, we need to know the behaviour of the power spectrum for p, p n << 1. In this limit it is expected that the power spectrum of the axion should match the result obtained for a spatially flat universe. In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the long wavelength limit (p, p n << 1), which appear explicitly in the expression for the amplitude of the normalised power spectrum from the Bunch Davies vacuum: As a result, the time-independent function M(p << 1, ν) for any arbitrary mass parameter ν can be expressed as: M(p << 1, ν) = 2^(2(ν−1)) (Γ(ν))^2 π G(p << 1), (A.49) where we define a new function G(p << 1) in the long wave length limit as given by: G(p << 1) = π |Γ(ν + 1/2)|^2 1 + |Γ(ν + 1/2)|^2 Γ(ν + 1
B Quantum correction to the power spectrum in RDM formalism
At the super horizon time scales (t L >> 1) of region L one can write the amplitude of the RDM power spectrum as: where the time independent function Q(p, α, ν) for generalised α vacua is defined as:
B.1 For large wave number
Further to know the exact wave number dependence of the amplitude of the normalised power spectrum from generalised α vacua we need to know the behaviour of the power spectrum at very short wavelengths (p, p n >> 1). After taking this limit it is expected that the power spectrum of axion should match with the result obtained for spatially flat universe.
In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the short wavelength limit (p, p n >> 1), which appear explicitly in the expression for the amplitude of the normalised power spectrum from generalised α vacua: Consequently, we get the following simplified expressions in the large wavenumber (p, p n >> 1) limit for the case of generalised α vacua: As a result, in the short wave length approximation the time-independent function Q(p >> 1, α, ν) for any arbitrary mass parameter ν can be expressed for generalised α vacua as: Q(p >> 1, α, ν) = 2^(2(ν−1)) (Γ(ν))^2 p^3 π G(p >> 1) = M(p, ν) ∀ α, (B.33) where we have already defined the function G(p >> 1) in the earlier section of the Appendix.
B.2 For small wave number
Similarly, to know the exact wave number dependence of the amplitude of the normalised power spectrum from generalised α vacua in the long wave length approximation, we need to know the behaviour of the power spectrum at p, p n << 1. After taking this limit it is expected that the power spectrum of the axion should match the result obtained for a spatially flat universe. In general, for an arbitrary value of the mass parameter ν, we get the following approximated contributions in the long wave length approximation, which appear explicitly in the expression for the amplitude of the normalised power spectrum from generalised α vacua: On the other hand, if we set ν = q/2 (including the massless case ν = 3/2) in the previous expressions obtained for general ν, then due to the presence of the overall factor 1/| cos πν| the final expression for the power spectrum in the small wave number limit diverges. This is very obvious from the obtained expressions, but one can avoid such unwanted divergent contributions very easily. To serve this purpose let us rewrite all the expressions for p, p n << 1 with ν = q/2 that we have mentioned earlier:
"Physics"
] |
Reversible switching between pressure-induced amorphization and thermal-driven recrystallization in VO2(B) nanosheets
Pressure-induced amorphization (PIA) and thermal-driven recrystallization have been observed in many crystalline materials. However, controllable switching between PIA and a metastable phase has not been described yet, due to the challenge of establishing feasible switching methods to control the pressure and temperature precisely. Here, we demonstrate a reversible switching between PIA and thermally-driven recrystallization of VO2(B) nanosheets. Comprehensive in situ experiments are performed to establish the precise conditions of the reversible phase transformations, which are normally hindered but occur with stimuli beyond the energy barrier. Spectral evidence and theoretical calculations reveal the pressure–structure relationship and the role of flexible VOx polyhedra in the structural switching process. Anomalous resistivity evolution and the participation of spin in the reversible phase transition are observed for the first time. Our findings have significant implications for the design of phase switching devices and the exploration of hidden amorphous materials.
Materials that contain no long-range structural order (for example, glass or amorphous phase) are fundamentally interesting to both the basic sciences and for practical industrial applications 1,2 . Glass, which has been considered a metastable and kinetically frozen disordered state, can be synthesized by rapidly quenching a melt from high temperature to prevent crystallization. Another frequently reported but distinct approach to amorphous phases or glasses is via pressure-induced amorphization (PIA) 3,4 . The disordering process in PIA differs significantly from the thermally-induced disorder in substance melting, where a much higher density is expected. PIA has been observed in many materials such as ice [5][6][7] , α-quartz SiO 2 (refs 8-10), AlPO 4 (ref. 11), R-Al 5 Li 3 Cu (ref. 12), and so on, and is widely accepted as an important condensed matter phenomenon. In some cases, mechanical instability caused by 'thermodynamic melting' and increased atomic coordination are considered to contribute most to PIA 8,13 . In other cases, such as ZrW 2 O 8 , negative thermal expansion (NTE) may have some possible connection with the PIA [14][15][16][17] . However, determining the underlying mechanism still remains one of the most fascinating challenges, and some recent evidence even shows that previously reported PIAs were due to the fragmentation of bulky particles into nanocrystals and strongly correlated pressurization environments 3 . Furthermore, new high-density PIA-generated materials are expected to drive new theoretical approaches for modelling this phenomenon, which is essential for discovering their practical applications. Recently, pressure-induced crystallization revealed the topological ordering in metallic glasses at room temperature 18 . But upon heating, the recovered amorphous phase tends to return to its stable crystalline phase to minimize the system's energy, depending on the kinetic barrier it needs to overcome. Such examples include the main group compounds SnI 4 , LiKSO 4 , Ca(OH) 2 , clathrasils and berlinite AlPO 4 (refs 19-23). Molecular dynamics calculations on PIA-AlPO 4 (ref. 23) showed that the presence of non-deformable, fourfold coordinated PO 4 interlinking units in the crystal structure has a crucial role in the reversible amorphization and crystallization phase transition, where they acted as templates around which the original structure and even the original orientation were restored 23,24 . So far, reversible PIA has only been observed in the above-mentioned main group compounds, where only the lattice participates in the intriguing structure transformation. The study of more variables (that is, charge, spin and orbital) involved in PIA and the subsequent recrystallization processes could potentially allow a more comprehensive understanding of the order-disorder transformation mechanism and the electronic behaviour in highly disordered materials.
Discovering new external-stimuli responsive compounds with switchable ground states is a major objective in materials science because these materials often lead to unusual phenomena or useful functionalities 25,26 . When considering PIA materials as metastable and dense phases distinct from their crystalline forms, it is important to achieve controllable phase switching between either the PIA glass and its crystalline polymorphs or even between two PIA phases.
In this work, we report a reversible structural switch between the metastable crystalline phase VO 2 (B) and its PIA glass, in the form of nanosheets. The phase switch between these two metastable phases is realized with low-pressure compression (~20 GPa) and relatively low-temperature annealing (~200°C). The phase transformation and underlying mechanism are thoroughly studied using in situ synchrotron X-ray diffraction (XRD), infrared and Raman spectroscopy, and theoretical calculations. High-pressure electronic transport properties and magnetic properties are also studied to provide direct evidence of the participation of charge and spin during the phase transformation, for the first time. Our findings may provide a new research platform for the exploration of novel amorphous materials and phenomena under controllable external stimuli.
Results
Material and crystal structure. There are several crystalline phases of VO 2 at ambient conditions: VO 2 (A), VO 2 (B), VO 2 (M), VO 2 (R) and other metastable phases [27][28][29][30][31] . Among them, VO 2 (M) is the thermodynamically stable phase, but it undergoes an insulator-to-metal transition (ITM) under high pressure to the metallic phase VO 2 (R) around room temperature. The structures and physical properties of the phases differ significantly due to the distinct coordination environments of V atoms in the VO x polyhedra, the V 4+ -V 4+ interactions and the various cross-linking manners ( Supplementary Fig. 1). VO 2 (B) is a thermodynamically metastable phase adopting an anisotropic layered structure, and is frequently used as a battery material 32 . Its unique structural features, such as well-embedded layers and hierarchical V-O bonding, also make VO 2 (B) an interesting candidate for structural stability studies under high pressure. In this work, VO 2 (B) nanosheets were synthesized from high-purity V 2 O 5 raw material via a hydrothermal route with citric acid as the reducing agent 33 . The product consisted of well-shaped nanosheets that were several micrometres in length/width and tens of nanometres in thickness, and its single crystal nature is proven by the well-arranged spots in the selected-area electron diffraction pattern (Fig. 1a,b). The structure of VO 2 (B) is characterized by the layered feature in the bc plane with a twofold-connected O1 atom between the layers (Fig. 1c). Less electron charge distribution between the layers makes the [100] direction more compressible, and there is also a distinct gap along the [001] direction. Overall, the VO 2 (B) structure has a hierarchical structure consisting of condensed face-sharing VO x polyhedra. More structural features of VO 2 (B) along different axes are shown in Supplementary Fig. 2.
PIA at room temperature. The in situ synchrotron XRD patterns of VO 2 (B) nanosheets, as a function of pressure without a pressure-transmitting medium, are shown in Fig. 2a. There was no structural transition until the onset of amorphization around 20 GPa. The XRD pattern at ambient conditions is well-indexed with the monoclinic space group C2/m and lattice constants a = 12.054(3) Å, b = 3.693(1) Å, c = 6.424(2) Å and β = 106.96(1)°, in good agreement with other reported values 12,13 . As discussed above, from a structural chemistry viewpoint, the VO 2 (B) nanosheet was expected to show anisotropic compressibility under pressure. The lattice parameters at various pressures before PIA were obtained by Rietveld refinements of these XRD patterns, as shown in Supplementary Fig. 3. During compression, the diffraction peaks became broader and weaker from about 17 GPa, and completely vanished around 20 GPa, indicating the loss of long-range ordering in the pressure-amorphized state. To check the shear and nonhydrostaticity effect on the PIA, we repeated the experiments using neon and helium as pressure-transmitting media (PTM) for comparison. In all cases, the PIA process occurred and the amorphous PIA-VO 2 (B) was preserved to ambient conditions after releasing pressure. In contrast, no PIA was observed in the thermally stable VO 2 (M) phase up to 55 GPa 34 .
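As a quick consistency check on the refined lattice constants quoted above, the monoclinic cell volume can be computed directly; this short Python sketch is illustrative only and is not part of the original analysis:

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume of a monoclinic lattice: V = a * b * c * sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Refined ambient-pressure lattice constants of VO2(B) from the text.
V = monoclinic_volume(12.054, 3.693, 6.424, 106.96)
print(f"V = {V:.1f} A^3")  # ~273.5 A^3, consistent with the cell volumes quoted below
```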
Spectroscopic techniques can probe the short-range structural features of local atomic coordinates. We employed infrared and Raman spectroscopy to examine the local structural evolution of VO 2 (B) nanosheets during compression and decompression. Figure 2b displays the infrared spectrum of VO 2 (B) as a function of pressure up to 23.6 GPa. Characteristic peaks located around 1,000 cm−1 (at ambient conditions) are assigned to the shortest V 4+ =O1 bonds pointing perpendicularly into the bc interlayer. The mode frequency barely shifted (<50 cm−1) towards high wave numbers upon compression. However, the broadening of the vibrational mode increases with pressure, and its final merging with other broad bands down to 600 cm−1 is a more profound change. These changes were associated with angle deformations of the VO x polyhedra upon compression, and eventually a disordered state resulted in amorphization. This evidence clearly indicates the degenerated chemical environments of oxygen around the V 4+ centres. As pressure increases, the enhanced oxygen coordination around vanadium atoms in VO 2 (B) is expected to lead to the dynamical lattice instability, which triggers the PIA. It is interesting that the preserved PIA-VO 2 (B) has similar local vibrational modes to the pristine VO 2 (B) nanosheets, indicating that the short-range structural features of VO 2 (B) are preserved, as indicated in Fig. 2d. Moreover, Raman spectroscopy was employed to evaluate the contribution of the electron-phonon interactions to the PIA phase transition of VO 2 (B) (Fig. 2c). At low pressure, only five bands near 198, 260, 570, 780 and 1,020 cm−1 were observed with moderate intensity in the VO 2 (B) nanosheets. Typically, the peak located at 1,020 cm−1 can be assigned to the V 4+ =O1 stretching modes of terminal oxygen atoms. Upon compression above 10 GPa, all the bands weakened and finally vanished. Meanwhile, broad bands between 600 and 1,000 cm−1 were observed, corresponding with the PIA process. As anticipated, similarly to the infrared results, all of the atomic location information was preserved in a more disordered state, as indicated by the broadening of the vibrational bands in Fig. 2e, which closely resembles the Raman changes during the ITM process from VO 2 (M) to metallic VO 2 (R).
Thermal-driven recrystallization and the phase diagram. The PIA-VO 2 (B) phase returns to the pristine VO 2 (B) structure upon annealing at relatively low temperature (~200°C, by in situ annealing experiment) for a short time (~5 min). Figure 3 shows the XRD patterns of the starting VO 2 (B) nanosheets and of PIA-VO 2 (B) before and after annealing at 250°C (50°C above the critical temperature to guarantee a complete recrystallization). To ensure that the PIA process was achieved, a higher pressure of 31.5 GPa (far beyond the PIA starting point) was applied. As discussed, the loss of the X-ray diffraction peaks in the recovered PIA-VO 2 (B) sample suggests the absence of long-range ordering. After annealing at 250°C in a vacuum environment for 5 min, surprisingly, we noticed that the diffraction pattern from this heat-treated sample showed the same powder characteristic peaks as the VO 2 (B) phase, with a monoclinic space group C2/m, similar lattice parameters (Supplementary Fig. 4) and a slightly expanded cell volume (273.5 Å 3 versus 272.2 Å 3 of the pristine sample). This structure switching phenomenon is distinct from the previously reported reversible PIA phenomena 23,24 with the following characteristics: (1) In the cases of some so-called 'memory glasses', such as AlPO 4 , the PIA phases can be restored to their initial crystalline structures spontaneously at room temperature once pressure is released. In contrast, PIA-VO 2 (B) can exist as a metastable, high density, intermediate phase, and the small activation energy barrier between PIA-VO 2 (B) and crystalline VO 2 (B) enables feasible control of the phase switch; (2) The reversible PIA and recrystallization processes are guaranteed to occur between VO 2 (B) and PIA-VO 2 (B) (rather than involving thermodynamically stable phases such as VO 2 (M)), due to the relatively low operating temperature (compression at room temperature and annealing at ~200°C). Dynamically low temperature can hinder both the pressure- and temperature-induced traditional phase transitions; this point will be discussed in detail in the electron transport property section. Recent investigations by more powerful and precise techniques show that PIA is highly related to the pressurization environments and crystallinity of the starting materials. Some of the phenomena previously reported as PIA were likely due to either the formation of multiple polymorphic phases, or even that the PIA process did not occur when single crystal samples were adopted or more hydrostatic compression was applied [8][9][10]35 .
To check this issue in the VO 2 (B) system and obtain the exact conditions where the phase transformations occurred, in situ powder XRD experiments were conducted. Firstly, the PIA process of VO 2 (B) with different PTMs was evaluated. Figure 4a displays the integrated (110) peak intensity of VO 2 (B) as a function of pressure without a PTM or with Ne or He as the PTM. VO 2 (B) nanosheets under all three conditions showed the onset of PIA around 10 GPa. Without a PTM, the PIA process concluded around 20 GPa, or a little higher when a better hydrostatic pressure condition was given (30 and 35 GPa for neon and helium as PTMs, respectively). This indicates that the PIA of VO 2 (B) is somewhat related to the deviatoric stress 36 , which is reasonable considering the hierarchical structural features of VO 2 (B). Fortunately, the value of 20 GPa was low enough to obtain bulk samples using a large volume press apparatus for routine magnetism measurements. Figure 4b shows the (110) peak intensity of PIA-VO 2 (B) as a function of the annealing temperature. The recrystallization starts from as low as 100°C and completes around 200°C. The relatively low annealing temperature makes the switch between PIA-VO 2 (B) and crystalline VO 2 (B) feasible. The pressure- and temperature-induced phase transformations of VO 2 (B) nanosheets observed in our work are summarized schematically in Fig. 4c together with other known VO 2 crystalline phases. Under compression at room temperature, the expected phase transformation from metastable VO 2 (B) to thermodynamically stable VO 2 (M) does not occur. This indicates a higher kinetic barrier (E barrier 2) of atomic diffusion, and the large surface energy from the nanosheets at room temperature. The formed high-density PIA-VO 2 (B) returned to the pristine metastable VO 2 (B) structure after short-time annealing at relatively low temperature (250-300°C, 10 min), instead of transforming to the thermodynamically stable VO 2 (M) phase. This indicates a relatively low kinetic barrier for transforming to metastable VO 2 (B) compared with the high crystallization energy associated with the VO 2 (M) phase, and demonstrates that hidden structural switching can be realized using a compression and heating route, as shown in Fig. 4c. Such an intriguing phase transition is the first ever example observed in a 3d metal involved material. Similar structure switching behaviour was observed in PbTe nanoparticles 37 , but in our case the switching between two thermally metastable phases was highly dependent on the temperature parameter. We believe that the key factor enabling this reversible phase transformation is the temperature, which is kinetically low yet provides sufficient atomic mobility, thus causing the elastic deformation of the structure. However, when the annealing temperature was high enough to surpass the VO 2 (B) to VO 2 (M) phase transition temperature (~550°C) 33 , the PIA-VO 2 (B) could also transform to VO 2 (M) via the intermediate VO 2 (B) phase.
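The kinetic-hindrance argument sketched above can be phrased with the usual Arrhenius-type estimate; this is a generic relation added here for clarity, not an expression from the paper:

```latex
k \;\propto\; \exp\!\left(-\frac{E_{\rm barrier}}{k_{B}T}\right),
```

so at room temperature the route over the larger barrier (towards VO 2 (M)) is exponentially suppressed relative to the low-barrier recrystallization back to VO 2 (B) at ~200°C.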
First-principles calculation of the pressure-phase relationship.
The experimental results evidently show the pressure- and thermal-controlled structure switching of VO 2 (B) nanosheets. To provide further thermodynamic and kinetic insight into the pressure-phase relationship during the phase transformation, we performed density functional theory (DFT) simulations using the CASTEP code 38 . The calculations indicate that VO 2 (B) is metastable under ambient pressure at room temperature, and that VO 2 (M) and VO 2 (R) have a higher density than VO 2 (A) and VO 2 (B) (Supplementary Fig. 5). Upon compression up to 100 GPa, VO 2 (B) remained thermodynamically metastable compared with VO 2 (M) and VO 2 (R). However, no crystal-to-crystal phase transition from VO 2 (B) to VO 2 (M) or VO 2 (R) was observed, due to the high kinetic energy barrier between them at room temperature. Figure 5 shows both the calculated pressure-volume equations of state of the four crystalline VO 2 phases and the experimental data of VO 2 (B) nanosheets upon compression. The experimental data of the VO 2 (B) nanosheets exhibit a larger unit cell volume than that theoretically predicted, which is reasonable considering the nanosheet surface defects. This large cell volume falls into the traditional VO 2 glass region. Upon compressing to around 20 GPa, the VO 2 (B) nanosheets passed through the ideal pressure-volume boundary of VO 2 (B) and became amorphous with a higher density. The stable region of PIA-VO 2 (B) is indicated in Fig. 5, which lies between VO 2 (B) and VO 2 (M) under high pressure. Several possibilities have been proposed for the underlying mechanism of a PIA process, such as mechanical instability beyond the Born stability conditions and 'thermodynamic melting' 1,9 . In the case of VO 2 (B), we propose that the combination of kinetic hindrance of the phase transformation to thermodynamically stable VO 2 (M) and the increased atomic coordination are the driving forces of PIA. Upon moderate temperature annealing, the preference of the VO x polyhedra in thermodynamically metastable PIA-VO 2 (B) drives the structure to return to the pristine VO 2 (B).
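The pressure-volume equations of state compared in Fig. 5 are commonly parametrized with the third-order Birch-Murnaghan form. The sketch below shows how such a curve can be fitted to P-V data; the data points and starting parameters are hypothetical placeholders, not the values reported in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state, P(V) in GPa."""
    x = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * B0 * (x ** 3.5 - x ** 2.5) * (1.0 + 0.75 * (B0p - 4.0) * (x - 1.0))

# Hypothetical compression data (V in A^3, P in GPa) standing in for the
# experimental VO2(B) points; replace with the measured values.
V_data = np.array([273.5, 268.0, 262.0, 256.5, 251.0])
P_data = np.array([0.0, 3.0, 7.0, 12.0, 18.0])

popt, _ = curve_fit(birch_murnaghan, V_data, P_data, p0=(273.5, 150.0, 4.0))
V0, B0, B0p = popt
print(f"V0 = {V0:.1f} A^3, B0 = {B0:.0f} GPa, B0' = {B0p:.2f}")
```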
In the P-V phase diagram, one can clearly see the difference between PIA-VO 2 (B) and VO 2 glass. It is important to note that other hidden phases of VO 2 may exist, either in the low-density glass range or the high-density pressure-induced amorphous range. In this study, we observed profound structure switching behaviour of PIA-VO 2 (B) at kinetically moderate temperatures, which demonstrates an interesting phenomenon of how temperature has a critical role in governing phase transformations under high pressure. The exploration of yet more interesting physical properties related to high density VO 2 glass with a short-range order may greatly boost the development of condensed matter science.
Discussion VO 2 phases, as a famous strongly correlated family, always show interesting magnetic and electric properties. To gain more insight into the participation of charge and spin interactions during the PIA, in situ electrical resistance measurements within a diamond anvil cell (DAC) and ex situ magnetic susceptibility measurements (using an amorphized sample synthesized by a large volume press apparatus) were performed. Figure 6 shows the resistance change during compression and decompression, and the resistance at 7.5 and 26 GPa (before and after amorphization) as a function of temperature. At the first stage (P < 10 GPa), the resistance drop can be associated with the broadening and partial overlapping of the valence and conduction bands, caused by the normal pressure-induced shortening of V-V distances and bending of the V-O bonds. A steep increase in the resistance, which is caused by the onset and completion of the PIA, dominates the profile between 10 and 18 GPa. During the PIA process, point defects accompany the breaking of the long-range order, and thus the electron transport is suppressed by these increased defects, which serve as scattering centres. After the PIA process, band overlapping proceeds, as revealed by the continuous decrease of the electrical resistance, which at 30 GPa finally drops to nearly the same level as that of the crystalline VO 2 (B) before PIA. However, after decompression, the resistance is two orders of magnitude higher than that of the pristine VO 2 (B) sample (Fig. 6b), which indicates a reinforced electronic localization in PIA-VO 2 (B). The temperature dependence of the electric resistance before and after the PIA process (7.5 and 26.0 GPa) proves that PIA-VO 2 (B) remains a semiconductor but has poorer electrical conductivity (Fig. 6c). We also fabricated PIA-VO 2 (B) samples using a large volume press apparatus at room temperature and 20 GPa. The decrease of the effective Bohr magnetic moment in PIA-VO 2 (B) from 1.35 μB to 1.15 μB per V 4+ , as derived from the ex situ magnetic susceptibility measurement, suggests that the electron localization may be due to the formation of more V-V pairs/dimers ( Supplementary Fig. 6). More investigations are required to further probe the role of charge and spin in the PIA phenomenon and structure memory of VO 2 (B) nanosheets.
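The effective moments per V 4+ quoted above are presumably obtained from a Curie-Weiss analysis of the susceptibility; the standard relation (in cgs-emu units, given here for orientation rather than as the authors' exact procedure) is:

```latex
\chi(T) = \frac{C}{T-\theta}, \qquad
\frac{\mu_{\rm eff}}{\mu_{B}} \;=\; \frac{1}{\mu_{B}}\sqrt{\frac{3 k_{B} C}{N_{A}}}
\;\approx\; 2.83\,\sqrt{C},
```

with C the Curie constant per mole of V; a drop from 1.35 μB to 1.15 μB then corresponds to roughly a 27% reduction in C.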
In conclusion, we report a precise control of the structure switching between crystalline and pressure-induced amorphous phases in VO 2 (B) nanosheets for the first time. The PIA-VO 2 (B) has no long-range periodicity, but preserves the VO x polyhedra inherited from VO 2 (B) with highly distorted local ordering. Upon moderate temperature annealing, these VO x polyhedra release excess energy and restore the pristine long-range periodicity. Spectral evidence and DFT calculations provide additional insight into the phase relationship between the crystalline VO 2 phases and PIA-VO 2 (B) glass. The dynamically moderate annealing temperature provides a key mechanism of energetic control over the hidden reversible amorphization of metastable VO 2 (B). Preliminary evidence for the participation of charge and spin interactions during PIA was also observed for the first time.
The robust control of the phase transition between the metastable crystalline VO 2 (B) phase and the PIA-VO 2 (B) amorphous phase, via compression and low-temperature annealing, reveals the structure switching behaviour within a tetravalent vanadium-based material for the first time. This highlights exploration of thermodynamically hindered phase transformations with pressure and temperature tuning. Further investigations on the physical mechanism behind the phenomenon are expected, including orbital, electron and magnetic interactions.
Methods
Sample preparation. VO 2 (B) nanosheets were synthesized from high-purity V 2 O 5 raw material via a hydrothermal route, with citric acid as the reducing agent. Briefly, 0.182 g V 2 O 5 powder (1 mmol) and 0.288 g citric acid (1.5 mmol) were added into 20 ml distilled water and continuously stirred to obtain the precursor. The precursor was then transferred into a Teflon-lined autoclave (25 ml capacity, 80% filling). The autoclave was heated to 200°C at a rate of 3°C min−1 and maintained at 200°C for 5 h, followed by air cooling to room temperature by switching the power off. The resulting precipitates were washed with deionized water and dried at 80°C overnight. Transmission electron microscopy (TEM) techniques were applied to check the morphology of the as-synthesized nanosheets at ambient pressure.
In situ high-pressure characterizations. A symmetric DAC was employed to generate high pressure. A stainless steel gasket was pre-indented to a 40 μm thickness, followed by laser-drilling the central part to form a 120 μm diameter hole to serve as the sample chamber. Pre-compressed VO 2 (B) powder pellets and a small ruby ball were loaded in the chamber. Helium or neon was used as the pressure-transmitting medium and the pressures were determined by the ruby fluorescence method 39 . The in situ high-pressure angular-dispersive XRD experiments were carried out at the 16BM-D station of the High-Pressure Collaborative Access Team (HPCAT) at the Advanced Photon Source (APS), Argonne National Laboratory (ANL). A focused monochromatic X-ray beam about 5 μm in diameter (FWHM) and with a wavelength of 0.4246 Å was used for the diffraction experiments. The diffraction data were recorded by a MAR345 image plate and processed with the Fit2D programme. High-pressure infrared spectroscopic experiments were performed at the U2A beamline of the National Synchrotron Light Source (NSLS), Brookhaven National Laboratory, and the mid-IR beamline of the Canadian Light Source. The infrared spectra were collected in transmission mode by a Bruker FTIR spectrometer using a nitrogen-cooled mid-band MCT (MCT-A) detector. The recorded frequencies were in the range of 600-8,000 cm−1 with a resolution of 4 cm−1 . High-pressure Raman spectra were measured by a Raman spectrometer with a 532.1 nm excitation laser at HPCAT. For the ex situ annealing experiments, the PIA-VO 2 (B) samples were removed with the gaskets after depressurization. They were then annealed at different temperatures (200, 250, 300, 350, 400, 450, 500, 550°C), each for 5 min, within a vacuum furnace, and the resulting phases were checked with synchrotron XRD. In the in situ annealing experiments, the bulky PIA-VO 2 (B) sample was made using a large volume press apparatus and sealed in a glass tube. XRD patterns were then taken at 30, 60, 90, 120, 160, 190 and 220°C with electric heating, respectively. The holding time at each temperature was 5 min. Structure refinements were performed with the FULLPROF programme 40 .
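For context, pressure from the ruby fluorescence method is typically computed from the R1-line wavelength shift with a Mao-type calibration like the one below; treating this particular calibration (and its constants) as the one used in ref. 39 is an assumption of this sketch:

```python
def ruby_pressure(lambda_nm, lambda0_nm=694.24, A=1904.0, B=7.665):
    """Pressure (GPa) from the ruby R1 fluorescence line shift.

    Quasi-hydrostatic calibration: P = (A/B) * [(lambda/lambda0)^B - 1].
    B = 5 is often used instead for non-hydrostatic conditions.
    """
    return (A / B) * ((lambda_nm / lambda0_nm) ** B - 1.0)

# Example: an R1 line observed near 701.3 nm corresponds to roughly 20 GPa
# on this scale, the pressure at which the PIA of VO2(B) completes.
print(f"{ruby_pressure(701.3):.1f} GPa")
```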
Resistance and magnetic susceptibility measurements. In situ electrical resistance was measured by a four-probe resistance measurement system consisting of a Keithley 6221 current source, a 2182A nanovoltmeter and a 7001 voltage/current switch system. A DAC device was used to generate pressures up to 30 GPa, and a cubic boron nitride layer was inserted between the steel gasket and diamond anvil to provide electrical insulation between the electrical leads and gasket. Four gold wires were arranged to contact the sample in the chamber for resistance measurements. For the magnetism measurement, a 0.1 g sample was made using a large volume press apparatus at 20 GPa for 1 h at room temperature. The DC magnetic susceptibility was measured using a SQUID magnetometer (Quantum Design) with an applied magnetic field of 500 Oe.
DFT calculations. Total energies and pressure-volume equations of state for the four VO 2 crystalline phases were calculated at the atomic level using the first-principles plane-wave pseudopotential method 41 based on density functional theory, with the CASTEP package 38 . The exchange-correlation functional is described by the local density approximation 42 . The ion-electron interactions were modelled with ultrasoft pseudopotentials 43 for all constituent elements, where the V 4s 2 3d 3 and O 2s 2 2p 4 electrons were treated as the valence electrons, respectively. A kinetic energy cut-off of 380 eV and Monkhorst-Pack k-point meshes 44 spanning < 0.04 Å−3 in the Brillouin zone were chosen. The starting structure models were obtained from the Inorganic Crystal Structure Database (ICSD) for VO 2 (A), VO 2 (B), VO 2 (M) and VO 2 (R). The cell parameters and atomic positions within the unit cell of these four phases, under hydrostatic pressures ranging between 0 and 100 GPa (with an interval of 5 GPa), were fully optimized using the quasi-Newton method 45 . The convergence thresholds between the optimization cycles for energy change, maximum force, maximum stress, and maximum displacement were set as 5.0 × 10−6 eV per atom, 0.01 eV Å−1 , 0.02 GPa and 5.0 × 10−4 Å, respectively. The optimization terminates when all of these criteria are satisfied. All these computational parameters were tested to ensure sufficient accuracy for the present purposes.
Data availability. The data that support the findings of this study are available from the corresponding author upon request.
"Materials Science",
"Physics"
] |
Uncoupling Ceramide Glycosylation by Transfection of Glucosylceramide Synthase Antisense Reverses Adriamycin Resistance*
Previous work from our laboratory demonstrated that increased competence to glycosylate ceramide conferred adriamycin resistance in MCF-7 breast cancer cells (Liu, Y. Y., Han, T. Y., Giuliano, A. E., and M. C. Cabot. (1999) J. Biol. Chem. 274, 1140–1146). This was achieved by cellular transfection with glucosylceramide synthase (GCS), the enzyme that converts ceramide to glucosylceramide. With this, we hypothesized that a decrease in cellular ceramide glycosylation would result in heightened drug sensitivity and reverse adriamycin resistance. To down-regulate ceramide glycosylation potential, we transfected adriamycin-resistant breast cancer cells (MCF-7-AdrR) with GCS antisense (asGCS), using a pcDNA 3.1/his A vector, and developed a new cell line, MCF-7-AdrR/asGCS. Reverse transcription-polymerase chain reaction assay and Western blot analysis revealed marked decreases in both GCS mRNA and protein in MCF-7-AdrR/asGCS cells compared with the MCF-7-AdrR parental cells. MCF-7-AdrR/asGCS cells exhibited 30% less GCS activity by in vitro enzyme assay (19.7 ± 1.1 versus 27.4 ± 2.3 pmol GC/h/μg protein, p < 0.001) and were 28-fold more sensitive to adriamycin (EC50, 0.44 ± 0.01 versus 12.4 ± 0.7 μM, p < 0.0001). GCS antisense transfected cells were also 2.4-fold more sensitive to C6-ceramide compared with parental cells (EC50 = 4.0 ± 0.03 versus 9.6 ± 0.5 μM, p < 0.0005). Under adriamycin stress, GCS antisense transfected cells compared with parental cells displayed time- and dose-dependent increases in endogenous ceramide and dramatically higher levels of the apoptotic effector, caspase-3. Western blotting showed that adriamycin sensitivity, introduced by asGCS gene transfection, was independent of P-glycoprotein and Bcl-2 expression. In summary, this work shows that transfection of GCS antisense tempers the expression of native GCS and restores cell sensitivity to adriamycin. Therefore, limiting the potential to glycosylate ceramide, which is an apoptotic signal in chemotherapy and radiotherapy, provides a promising approach to combat drug resistance.
mide production is one cause of cellular resistance to apoptosis induced by either ionizing radiation or tumor necrosis factor-α and adriamycin (2)(3)(4)(5)(6)(7). Accumulation of glucosylceramide (GC), 1 a simple glycosylated form of ceramide, is a characteristic of some multidrug-resistant cancer cells and of tumors derived from patients who are less responsive to chemotherapy (8,9). The study of GC metabolism, as a molecular determinant of the drug-resistant phenotype, has been a subject of recent attention. Modification of ceramide metabolism by blocking the glycosylation pathway has been shown to increase cancer cell sensitivity to cytotoxics (10-12). Further, drug combinations that enhance ceramide generation and limit glycosylation have been shown to enhance kill in cancer cell models (11,12). Other work has shown that ceramide toxicity can be potentiated in experimental metastasis of murine Lewis lung carcinoma and human neuroepithelioma cells by inclusion of a glucosylceramide synthase inhibitor (13,14). These findings assign biological significance to ceramide metabolism as it relates to circumvention of resistance to antineoplastic agents.
The increased capacity for ceramide glycosylation in GCS-transfected human breast cancer cells conferred resistance to adriamycin and to tumor necrosis factor-α (7,15). Both agents are known to activate ceramide generation and potentiate apoptosis (1,2,7,15). From this, we hypothesized that transfection of asGCS, to limit cellular ceramide glycosylation, would overcome adriamycin resistance. By introducing asGCS to modulate GCS activity in adriamycin-resistant human breast cancer cells, we successfully decreased native GCS expression and restored cellular sensitivity to adriamycin and to C 6 -ceramide. The present study further shows that ceramide generation is a major factor in the cytotoxicity of adriamycin and suggests that asGCS could be a novel tool to overcome adriamycin resistance.
Giemsa staining was performed as described (17). Cells were seeded in 60-mm dishes (10 5 cells/dish) in 10% FBS RPMI 1640 medium and grown for 2 days at 37°C. After rinsing with PBS, cells were fixed with 50% methanol/PBS, followed by methanol, and stained with KaryoMAX Giemsa stain stock solution (Life Technologies, Inc.). Following washing with deionized water, cells were photomicrographed. The population doubling time of each cell line was measured. Briefly, cells were seeded in 24-well plates (10 4 cells/well) in 10% FBS RPMI 1640 medium and grown for 24-, 48-, 72-, and 96-h periods. After rinsing with PBS, cells were dispersed with trypsin/EDTA, suspended in medium, and counted by hemocytometer. pcDNA 3.1/his A-asGCS and pcDNA 3.1/his-GCS Expression Vectors and Transfection-pCG-2, a Bluescript II KS containing GlcT-1 (ref. 18; terminology for GCS) in the EcoRI site, was kindly provided by Dr. Shinichi Ichikawa and Dr. Yoshio Hirabayashi (The Institute of Chemical and Physical Research, Saitama, Japan). The full-length cDNA of human GCS was subcloned into the EcoRI site of pcDNA 3.1/His A with an Xpress™ tag peptide (Invitrogen) in the upstream region. The Xpress tag fuses at the N terminus of the cloned gene; therefore, GCS will be expressed as Xpress-GCS. The antisense and sense orientations of the GCS cDNA were analyzed with Vector NTI 4.0 and double-checked by restriction digestion. When MCF-7-AdrR cells reached 20% confluence, pcDNA 3.1-asGCS or pcDNA 3.1-GCS (10 μg/ml, 100-mm dish) was introduced by co-precipitation with calcium phosphate (Mammalian Transfection Kit, Stratagene, La Jolla, CA). The transfected cells were selected in RPMI 1640 medium containing 10% FBS and 400 μg/ml G418. Each G418-resistant clone, isolated utilizing cloning cylinders, was propagated and later screened by GCS enzyme assay. The pcDNA 3.1/his A plasmid, without GCS DNA, was used in control transfections.
Glucosylceramide Synthase Assay-To determine the levels of GCS in the G418-resistant clones, a modified radioenzymatic assay was utilized (7,19). Cells were homogenized by sonication in lysis buffer (50 mM Tris-HCl, pH 7.4, 1.0 μg/ml leupeptin, 10 μg/ml aprotinin, 25 μM phenylmethylsulfonyl fluoride). Microsomes were isolated by centrifugation (129,000 × g, 60 min). The enzyme assay, containing 50 μg of microsomal protein in a final volume of 0.2 ml, was performed in a shaking water bath at 37°C for 60 min. The reaction contained liposomal substrate composed of C 6 -ceramide (1.0 mM), phosphatidylcholine (3.6 mM), and brain sulfatides (0.9 mM). Other reaction components included sodium phosphate buffer (0.1 M), pH 7.8, EDTA (2.0 mM), MgCl 2 (10 mM), dithiothreitol (1.0 mM), β-NAD (2.0 mM), and [ 3 H]UDP-glucose (0.5 mM). Radiolabeled and unlabeled UDP-glucose were diluted to achieve the desired radiospecific activity (4,700 dpm/nmol). To terminate the reaction, tubes were placed on ice, and 0.5 ml of isopropanol and 0.4 ml of Na 2 SO 4 were added. After brief vortex mixing, 3 ml of t-butyl methyl ether was added, and the tubes were mixed for 30 s. After centrifugation, 0.5 ml of the upper phase, which contained GC, was withdrawn and mixed with 4.5 ml of EcoLume for analysis of radioactivity by liquid scintillation spectroscopy.
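For orientation, the conversion from the scintillation counts measured in this assay to the specific activities quoted under "Results" (pmol GC/h/μg protein) follows from the stated radiospecific activity of 4,700 dpm/nmol. The sketch below is illustrative; in particular, whether the counted 0.5-ml aliquot needs scaling to the full upper-phase volume (aliquot_fraction) is an assumption, not stated in the text:

```python
def gcs_specific_activity(dpm_counted, aliquot_fraction=1.0,
                          dpm_per_nmol=4700.0, protein_ug=50.0, time_h=1.0):
    """GCS activity in pmol glucosylceramide / h / ug microsomal protein."""
    nmol_gc = (dpm_counted / aliquot_fraction) / dpm_per_nmol
    return nmol_gc * 1000.0 / (protein_ug * time_h)  # nmol -> pmol

# Example: ~6,400 dpm in the counted aliquot (taking aliquot_fraction = 1)
# gives ~27 pmol/h/ug, the order of the parental MCF-7-AdrR value in the Results.
print(f"{gcs_specific_activity(6400):.1f} pmol/h/ug")
```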
RNA Analysis-Cellular mRNA was purified using an mRNA isolation kit (Roche Molecular Biochemicals). Equal amounts of mRNA (5.0 ng) were used for RT-PCR. Using the upstream primer (5′-CCTTTCCTCTCCCCACCTTCCTCT-3′) and downstream primer (5′-GGTTTCAGAAGAGAGACACCTGGG-3′), a 302-base pair fragment in the 5′-terminal region of the GCS gene was produced using the ProSTAR HF single-tube RT-PCR system (High Fidelity, Stratagene) in a thermocycler (Mastercycler Gradient, Eppendorf). mRNAs were reverse transcribed using Moloney murine leukemia virus reverse transcriptase at 42°C for 15 min. DNA was amplified with TaqPlus Precision DNA polymerase in a 40-cycle PCR reaction, using the following conditions: denaturation at 95°C for 30 s, annealing at 60°C for 30 s, and elongation at 68°C for 120 s. RT-PCR products were analyzed by 1% agarose gel electrophoresis stained with ethidium bromide. β-Actin (Life Technologies, Inc.) was used as a control for even loading.
Cytotoxicity Assay-Assays were performed as described previously (7,11). Briefly, cells were seeded in 96-well plates (2,000 cells/well) in 0.1 ml RPMI 1640 medium containing 10% FBS and cultured at 37°C for 24 h before addition of drug. Drugs were added in FBS-free medium (0.1 ml), and cells were cultured at 37°C for the indicated periods. Drug cytotoxicity was determined using the Promega 96 Aqueous cell proliferation assay kit (Promega, Madison, WI). Absorbance at 490 nm was recorded using a Microplate Fluorescent Reader, model FL600 (Bio-Tek, Winooski, VT).
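The EC50 values quoted in the abstract are typically obtained from readings like these by fitting a four-parameter logistic (Hill) dose-response curve. The following sketch uses made-up dose and absorbance values purely to illustrate the fit; it is not the authors' analysis script:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ec50, n):
    """Four-parameter logistic dose-response curve (decreasing with dose)."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** n)

# Hypothetical adriamycin doses (uM) and background-corrected A490 readings.
dose = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
a490 = np.array([1.00, 0.95, 0.80, 0.45, 0.20, 0.10, 0.08])

popt, _ = curve_fit(hill, dose, a490, p0=(0.05, 1.0, 0.5, 1.0))
print(f"EC50 ~ {popt[2]:.2f} uM")
```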
Analysis of Ceramide-Analysis was performed as described previously (7,8). Cells were seeded in 6-well plates (60,000 cells/well) in 10% FBS RPMI 1640 medium. After 24 h, cells were shifted to 5% FBS medium with or without adriamycin and grown for the indicated times. Cellular lipids were radiolabeled by adding [ 3 H]palmitic acid (2.5 μCi/ml culture medium) for 24 h. After removal of medium, cells were rinsed twice with PBS (pH 7.4), and total lipids were extracted as described (8). The resulting organic lower phase was withdrawn and evaporated under a stream of nitrogen. Lipids were resuspended in 100 μl of chloroform/methanol (1:1, v/v), and aliquots were applied to TLC plates. Ceramide was resolved using a solvent system containing chloroform/acetic acid (90:10, v/v). Commercial lipid standards were cochromatographed. After development, lipids were visualized by iodine vapor staining, and the ceramide area was scraped into 0.5 ml of water. EcoLume counting fluid (4.5 ml) was added, the samples were mixed, and radioactivity was quantitated by liquid scintillation spectrometry.
Caspase-3 Assay-Caspase-3 activity was assayed by DEVD-AFC cleavage, using the ApoAlert Caspase-3 assay kit (CLONTECH, Palo Alto, CA). The assay was performed as described previously (15). Cells were seeded in 100-mm dishes (500,000 cells/dish) in 10% FBS RPMI 1640 medium. After 24 h, cells were shifted to 5% FBS RPMI 1640 medium without or with adriamycin and grown for 24 and 48 h. Following harvest, cells (10 6 /vial) were lysed on ice for 10 min with 50 μl of lysis buffer, and cell debris was removed by centrifugation at 4°C at 10,000 × g for 5 min. The soluble fraction was incubated with 50 μM of the conjugated substrate DEVD-AFC in a 100-μl reaction volume at 37°C for 60 min. The free AFC fluorescence was measured at excitation 400 nm and emission 505 nm using a FL600 Microplate Fluorescence Reader. The caspase-3 inhibitor, acetyl-Asp-Glu-Val-Asp-aldehyde, was used to exclude nonspecific background in the enzymatic reaction.
Statistics-All data represent the means ± S.D. Experiments were repeated two or three times. Student's t test was used to compare mean values.
RESULTS
Expression of GCS Antisense-The structure of pcDNA 3.1/his A-asGCS is shown in Fig. 1A. The GCS antisense was cloned into the EcoRI site, just downstream from the anti-Xpress tag sequence in pcDNA 3.1/his A. This plasmid was introduced into MCF-7-AdrR cells by calcium phosphate coprecipitation. G418 was used to select transfectants. We found that the number of G418-resistant clones in MCF-7-AdrR asGCS-transfected cells was much lower than in MCF-7-AdrR cells transfected with the pcDNA3.1/his A vector (54/10 6 versus 251/10 6 ). G418-resistant clones were further selected by measuring GCS activity using the cell-free radioenzymatic assay. In all, fifty-four G418-resistant clones of MCF-7-AdrR asGCS-transfected cells were obtained, and we identified one clone that exhibited a stable 30% decrease in GCS activity (Fig. 1B). Compared with 27.4 ± 2.3 pmol of GC synthesized by MCF-7-AdrR parental cells, GCS activity in MCF-7-AdrR/asGCS was decreased to 19.7 ± 1.1 pmol of GC (Fig. 1B, p < 0.001). There were no differences in GCS activities between the pcDNA 3.1/his A vector-transfected cells and parental MCF-7-AdrR cells (Fig. 1B).
The asGCS-transfected and parental MCF-7-AdrR cells were stained with Giemsa. Representative photomicrographs are shown in Fig. 1C. MCF-7-AdrR/asGCS cells, including nuclei, are flatter and larger than the dome-shaped, more stellate MCF-7-AdrR cells. The asGCS cell line is also more cuboidal with less dense cytoplasm. The population doubling times for both cell lines were similar, 32 and 30 h for MCF-7-AdrR/asGCS and MCF-7-AdrR cells, respectively.
Consistent with diminished GCS activity, GCS mRNA and GCS protein were reduced in MCF-7-AdrR/asGCS cells compared with MCF-7-AdrR cells. Total mRNA was isolated from both cell lines and reverse transcribed and amplified through RT-PCR. A representative RT-PCR gel electropherograph is shown in Fig. 2A. As revealed by densitometric scanning, the mRNA in MCF-7-AdrR/asGCS cells was reduced 3-fold compared with that in MCF-7-AdrR cells (25.4% versus 77.5% of β-actin). GCS protein in cell lysates was resolved by SDS-polyacrylamide gel electrophoresis and identified using GCS antiserum. Western blotting showed that the total amount of GCS protein in MCF-7-AdrR/asGCS cells decreased by 32% compared with MCF-7-AdrR parental cells (77,520 and 112,860 optical density units, respectively) (Fig. 2B, right and center bands). However, MCF-7-AdrR cells that were transfected with pcDNA 3.1/his A-GCS expressed greater amounts of GCS (Fig. 2B, left band, AdrR/GCS). MCF-7-AdrR/GCS cells were developed by stable transfection of the sense-orientation pcDNA 3.1/his A-GCS vector into MCF-7-AdrR cells. This GCS-transfected cell line displays 80% higher GCS activity than MCF-7-AdrR cells as measured by radioenzymatic assay. After transfection with the pcDNA 3.1/his A-GCS vector, although the expressed GCS was fused with the Xpress tag (-Asp-Leu-Tyr-Asp-Asp-Asp-Lys-), the upward shift in molecular mass (about 800 daltons) was undetectable by Western blot (Fig. 2B). To evaluate the expression of the transfected GCS antisense gene, we employed an Xpress antibody to detect the production of Xpress-GCS fused protein (Fig. 1A). We did not find the GCS-Xpress tag in either MCF-7-AdrR or MCF-7-AdrR/asGCS cells (Fig. 2C). However, the tag protein was highly expressed in MCF-7-AdrR GCS-transfected cells (Fig. 2C, center band). In MCF-7-AdrR/asGCS cells, what appears to be the Xpress-asGCS protein (Fig. 2C, faint band) had a higher molecular mass compared with the Xpress-GCS protein of MCF-7-AdrR/GCS and was present at only 15% the level of the latter (Fig. 2C, center band).
Ceramide Generation and Caspase-3 Activity under Adriamycin Stress-To further elucidate the dynamics of ceramide metabolism in drug sensitivity, we measured ceramide generation in the two cell lines. We found that adriamycin exposure dramatically elevated ceramide levels in GCS antisense-transfected cells. As shown in Fig. 4, adriamycin treatment increased the levels of ceramide in MCF-7-AdrR/asGCS cells in a time- and dose-dependent manner. At 24 and 48 h post-treatment, ceramide levels in MCF-7-AdrR/asGCS cells increased 200 and 250%, respectively (Fig. 4A). In sharp contrast, adriamycin treatment did not greatly modify ceramide levels in MCF-7-AdrR cells, which at 48 h increased only 16% above control. The result of increasing adriamycin dose on ceramide metabolism in the cell lines is shown in Fig. 4B. Adriamycin at 0.5, 1.0, and 2.5 μM enhanced ceramide levels by 181, 188, and 246%, respectively, in MCF-7-AdrR/asGCS cells (Fig. 4B), whereas MCF-7-AdrR cells displayed minimal response over the same dose range.
In mammalian cells, ceramide induces apoptosis directly through effector caspases, such as caspase-3 (21,22). To identify whether an alteration in ceramide metabolism in asGCS cells is related to adriamycin sensitivity via signal cascades, we analyzed caspase-3 activity in the parental and transfected cell lines. The data demonstrate that the increased effector caspase-3 activity is consistent with the changes in ceramide metabolism. At 10 μM adriamycin, the EC 50 in MCF-7-AdrR cells, caspase-3 activity in MCF-7-AdrR/asGCS increased 290 and 980% over control at 24 and 48 h, respectively (Fig. 5). In contrast, adriamycin treatment increased caspase-3 by 160% in MCF-7-AdrR cells, albeit only at 48 h (Fig. 5). In summary, caspase-3 activity in the GCS antisense-transfected cells was 3- and 6-fold greater in response to adriamycin treatment than observed in parental cells (p < 0.0001). This suggests that impaired GCS activity permits cells to maintain high levels of ceramide under adriamycin stress, activating caspase-3 for progression of programmed cell death.
After transfection with the pcDNA 3.1/his A-asGCS plasmid, we found that MCF-7-AdrR/asGCS cells expressed lower levels of GCS, based upon both mRNA and protein (Fig. 2). GCS enzymatic activity was also found to be lower in MCF-7-AdrR/asGCS cells (Fig. 1B). Because of the markedly decreased expression of the Xpress-asGCS tag (Western blot, Fig. 2C), it is likely that binding of asGCS mRNA to native GCS mRNA blocks GCS translation and diminishes GCS protein in the antisense transfected cells. It is noteworthy that the EC 50 for adriamycin was reduced 28-fold (Fig. 3), whereas in the cell-free enzyme assays GCS activity was reduced by only 30% in MCF-7-AdrR/asGCS cells (Fig. 1B). Similarly, in previous work, we have shown that GCS transfection by an inducible expression system conferred adriamycin resistance in MCF-7 cells (7). In MCF-7-GCS-transfected cells, GCS activity was enhanced 4-fold, and the EC 50 of adriamycin increased 11-fold compared with MCF-7 cells (7). Other factors including the existence of GCS isoforms, substrate specificities, and enzyme compartmentalization may also play a role in GCS effects on adriamycin sensitivity. For example, GCS catalyzes ceramide glycosylation, the first step in the biosynthesis of glycosphingolipids (28). A recent GCS knockout study showed that embryonic lethality was the consequence of homozygosity, revealing a vital role for GCS during development and differentiation in mice (29). In the present study, G418 survival of the asGCS-transfected clones was minimal compared with survival of the asGCS-free plasmid transfectants. This implies that GCS antisense blocks ceramide glycosylation that is essential for cell development, and only the partially blocked clones are able to survive the selection conditions. In addition, molecular specificity of ceramide has been demonstrated, as some species, C 16 -ceramide for example, are more prevalent in apoptosis signaling (30). Cellular ceramide response to DNA damage has been shown to rely on mitochondrion-dependent caspases (31).
Ceramide can be generated by de novo biosynthesis and by sphingomyelin degradation via the action of sphingomyelinases (1,32,33). Intracellular levels of ceramide are elevated by a variety of stimuli and/or agents that induce apoptosis, including Fas ligand engagement of CD95, ionizing radiation, ultraviolet radiation, chemotherapeutic drugs and genotoxic chemicals, and several cytokines (1-7, 15, 33-35). Ceramide-induced cellular death is one mechanism of adriamycin-induced toxicity (7,8,12,14). Cellular ceramide impacts a variety of signaling molecules and pathways (33). Of these various effects, ceramide induction of the stress-activated protein kinase cascade and inhibition of complex III activity in the mitochondrial respiratory chain have been linked to the induction of apoptosis (36-38). Caspase-3, one of the effector caspases in the stress-activated protein kinase apoptotic signaling pathway, is activated by cell-permeable ceramide as well as by endogenous ceramide generated in response to extracellular stimuli (15,39,40). In the present study, adriamycin treatment increased cellular ceramide with activation of caspase-3 in the GCS antisense-transfected cells but not in parental cells. Therefore, the diminished capacity for glycosylation promotes adriamycin-induced cytotoxicity via ceramide-linked activation of caspase-3.
P-glycoprotein, a well characterized drug resistance mechanism (41), is highly expressed in MCF-7-AdrR cells (18). In previous work on the conversion of cells toward drug resistance, increased expression of P-glycoprotein in MCF-7 cells transfected with GCS sense was not observed (7). Much in line with this, in the present study we did not observe decreased expression of P-glycoprotein in the chemosensitive MCF-7-AdrR/asGCS cells (Fig. 6). This suggests that the reversal of adriamycin resistance conferred by asGCS is not related to P-glycoprotein. Bcl-2 in dephosphorylated form is a strong anti-apoptosis effector involved in ceramide-induced apoptosis signaling pathways (42)(43)(44). We did not find increased Bcl-2 in GCS-transfected MCF-7 cells (7), nor in this study was altered Bcl-2 expression found in the GCS antisense-transfected MCF-7-AdrR cells. These data reinforce the idea that up-regulation and down-regulation of GCS regulate adriamycin sensitivity by a mechanism divorced from Bcl-2.
In keeping with our previous report (7), the GCS gene knockout data presented here further demonstrate that GCS is one cause of adriamycin resistance. This positions antisense technology as a promising tool for reversal of certain forms of chemotherapy resistance.
FIG. 5. Caspase-3 activity under adriamycin stress. Cells were treated without or with adriamycin (10 μM) for 24 and 48 h. After harvest, the soluble fraction obtained after cell lysis (10 6 cells/tube) was incubated with DEVD-AFC substrate at 37°C for 60 min as detailed under "Experimental Procedures." The fluorescence of cleaved AFC was measured at 505 nm. *, p < 0.0001, compared with MCF-7-AdrR cells treated with adriamycin for each corresponding treatment period.
FIG. 6. P-glycoprotein and Bcl-2 expression in MCF-7-AdrR and MCF-7-AdrR/asGCS cells. Detergent-soluble cellular protein was isolated from the respective cell lines and subjected to SDS-polyacrylamide gel electrophoresis (50 μg/lane). Protein was transferred to nitrocellulose, and the immunoblot was incubated with the specified antibody. A, P-glycoprotein Western blots. C219 monoclonal antibody was used to recognize P-glycoprotein. B, Bcl-2 Western blots. Ab-1 monoclonal antibody was utilized to blot Bcl-2 protein. MCF-7 cells were used as a positive control for Bcl-2.
"Biology",
"Medicine"
] |
Optical spectroscopic study of Eu 3+ crystal field sites in Na 3 La 9 O 3 (BO 3 ) 8 crystal
Time-resolved line-narrowed fluorescence spectroscopy of Eu3+ ions in a new oxyborate Na3La9O3(BO3)8 crystal shows the existence of four independent symmetry crystal field sites for the rare-earth ion. A crystal field analysis and simulation of the experimental results have been performed in order to parametrize the crystal field at the Eu3+ sites. A plausible argument about the crystallographic nature of these sites is given. ©2008 Optical Society of America
OCIS codes: (140.3380) Laser Materials; (300.6320) Spectroscopy, high resolution.
References and links
1. P. Becker, "Borate materials in Nonlinear Optics," Adv. Mater. 10, 979-992 (1998).
2. C. Chen, Z. Lin, and Z. Wang, "The development of new borate-based UV nonlinear optical crystals," Appl. Phys. B 80, 1-25 (2005).
3. B. Braun, F. X. Kartner, U. Keller, J. P. Meyn, and G. Huber, "Passively Q-switched 180-ps Nd:LaSc3(BO3)4," Opt. Lett. 21, 405-407 (1996).
4. D. Jaque, J. Capmany, J. G. Sole, Z. D. Luo, and A. D. Jiang, "Continuous-wave laser properties of the self-frequency-doubling YAl3(BO3)4:Nd crystal," J. Opt. Soc. Am. B 15, 1656-1662 (1998).
5. D. Jaque, J. Capmany, J. A. Sanz Garcia, A. Brenier, G. Boulon, and J. Garcia Sole, "Nd ion based self frequency doubling solid-state lasers," Opt. Mater. 13, 147-157 (1999).
6. D. Jaque, J. Capmany, and J. García Solé, "Red, green and blue laser light from a single Nd:YAl3(BO3)4 crystal based on laser oscillation at 1.3 μm," Appl. Phys. Lett. 75, 325-327 (1999).
7. C. Cascales, C. Zaldo, U. Caldiño, J. García Solé, and Z. D. Luo, "Crystal field analysis of Nd energy levels in monoclinic NdAl3(BO3)4 laser," J. Phys.: Condens. Matter 13, 8071-8085 (2001).
8. F. Druon, S. Chénais, F. Balembois, P. Georges, A. Brun, A. Courjaud, C. Hönninger, F. Salin, M. Zavelani-Rossi, F. Augé, J. P. Chambaret, A. Aron, F. Mougel, G. Aka, and D. Vivien, "High-power diode-pumped Yb:GdCOB laser: from continuous-wave to femtosecond regime," Opt. Mater. 19, 73-80 (2002).
9. P. Gravereau, J. P. Chaminade, S. Pechev, V. Nikolov, D. Ivanova, and P. Pechev, "Na3La9O3(BO3)8, a new oxyborate in the ternary system Na2O-La2O3-B2O3: preparation and crystal structure," Solid State Sci. 4, 993-998 (2002).
10. R. Balda, V. Jubera, C. Frayret, S. Pechev, R. Olazcuaga, P. Gravereau, J. P. Chaminade, M. Al-Saleh, and J. Fernández, "First luminescence study of the new oxyborate Na3La9O3(BO3)8:Nd," Opt. Mater. 30, 122-125 (2007).
11. C. Cascales, J. Fernández, and R. Balda, "Investigation of site-selective symmetries of Eu ions in KPb2Cl5 by using optical spectroscopy," Opt. Express 13, 2141-2152 (2005).
12. C. Cascales, P. Porcher, J. Fernández, A. Oleaga, R. Balda, and E. Diéguez, "Crystal field studies in Eu doped Bi12SiO20 and Bi12SiO20:V5+ crystals," J. Alloys Compd. 323-324, 260-266 (2001).
13. G. Blasse, A. Bril, and W. C. Nieuwpoort, "On the Eu fluorescence in mixed metal oxides. Part I. The crystal structure sensitivity of the intensity ratio of electric and magnetic dipole emission," J. Phys. Chem. Solids 27, 1587-1592 (1966).
14. C. Görller-Walrand and K. Binnemans, "Rationalization of crystal-field parametrization," in Handbook on the Physics and Chemistry of Rare Earths, K. A. Gschneidner Jr. and L. Eyring, eds. (Elsevier Science, Amsterdam, 1996), Vol. 23, pp. 121-283.
15. B. G. Wybourne, Spectroscopic Properties of Rare Earths (Wiley, New York, 1965).
16. C. Cascales, M. D. Serrano, F. Esteban-Betegón, C. Zaldo, R. Peters, J. Johannsen, M. Mond, K. Petermann, G. Huber, L. Ackermann, D. Rytz, C. Dupré, M. Rico, U. Griebner, and V. Petrov, "Structural, spectroscopic and tunable laser properties of Yb-doped NaGd(WO4)2," Phys. Rev. B 74, 174114 (2006).
17. C. Görller-Walrand, P. Vandevelde, I. Hendrickx, P. Porcher, J.-C. Krupa, and G. S. D. King, "Spectroscopic study and crystal field analysis of Eu in the YAl3(BO3)4 huntite matrix," Inorg. Chim. Acta 143, 259-270 (1988).
Introduction
Since the discovery of the laser in the sixties, very intense research has been carried out in the field of nonlinear optics aimed at expanding the frequency range provided by known laser materials. New laser sources based on the nonlinear optical (NLO) properties of different materials are in common use today, not only in laboratory research but also in fields such as laser diagnosis and therapy, optical telecommunications and signal processing, integrated optics, and many other related areas. Moreover, the development of powerful laser pump diodes has increased the interest in investigating new nonlinear materials for laser applications.
Among NLO materials, interest in borate compounds has increased in recent years because of their favourable optical properties, such as good transparency in the ultraviolet, a high damage threshold, and strong nonlinearity, which make them promising not only for NLO devices [1,2] but also for potential applications in the field of lasers [3][4][5][6][7][8].
The extraordinary versatility of the borate structure facilitates the design of new compounds. Recently, a new oxyborate of formula Na3La9O3(BO3)8 was discovered in the ternary Na2O-La2O3-B2O3 diagram and its structure resolved [9]. The unit cell is hexagonal with space group P-62m (No. 189), and lanthanum occupies two different crystallographic sites in the structure, with coordination numbers eight and nine. In a very recent work, the authors presented the first spectroscopic characterization of Nd3+ ions in this Na3La9O3(BO3)8 crystal by using steady-state and time-resolved laser spectroscopy [10]. That study shows the existence of at least two different crystal field sites for Nd3+ ions in this material, in accordance with the two non-equivalent crystallographic lanthanum sites. However, a careful examination of the excitation spectra of these ions reveals a complex structure which suggests the existence of other possible crystal field sites for the rare-earth (RE) ion in this crystal.
It is worth mentioning that the optical properties of rare-earth doped crystals are closely related to the local structure and bonding at the ion site. The existence of different crystal field sites may produce spectral broadening and/or multiple emission lines, which can influence the energy extraction from the material as well as its wavelength tuning capability when it is used as a lasing medium. As a consequence, knowledge of the precise crystal field structure of the rare earth in a given material is of paramount importance for understanding its potential for lasing applications.
In order to clarify the nature of the RE environments in the Na3La9O3(BO3)8 crystal, we have undertaken a study of the site-resolved luminescence of Eu3+ in this crystal, taking into account the adequacy of the dopant ion as a structural probe. Since the 5D0 state is nondegenerate under any symmetry, the structure of the 5D0 → 7FJ emission is determined only by the splitting of the terminal levels caused by the local crystal field. Moreover, as the 7F0 level is also nondegenerate, site-selective excitation within the inhomogeneously broadened 7F0 → 5D0 absorption band can be performed by using the fluorescence line narrowing (FLN) technique to distinguish among different local environments around the rare-earth ions [11,12]. On the ground of the experimental results, a crystal-field analysis and simulation of the energy level schemes have also been performed in order to parametrize the crystal field around the Eu3+ ions. As a conclusion, we found evidence of the existence of at least four symmetry-independent crystal field sites for the RE ions in this crystal. A plausible argument about the crystallographic nature of these sites is finally given.
Experimental techniques
Single crystals were grown by a self-flux method, using an excess of the constituents as solvent in the pseudo-ternary phase diagram Na2O-La2O3(Eu2O3)-B2O3. Analytical-grade Na2CO3, La2O3(Eu2O3) and H3BO3 reactants with molar ratios of 28.51%, 21.52% and 49.95% were weighed (about 100 g), ground and mixed, sintered successively at 400 °C and 650 °C, and then melted at 1160 °C in a 50 cm3 Pt crucible in several batches. A doping level of 0.5 mol% Eu3+ was chosen.
The growth experiments were carried out in a Kanthal resistance furnace equipped with a Eurotherm controller for temperature and cooling-rate regulation. Melting and crystallization temperatures were first determined by dipping a Pt wire. After homogenization of the melt at 1180 °C for 24 hours, the temperature was slowly decreased to 1130 °C (10 °C/h), then the cooling rate was reduced to 0.2 °C/h until total solidification took place, and finally the furnace was cooled down to room temperature (30 °C/h). Crystals grown on the surface of the melt were separated mechanically.
Resonant time-resolved FLN spectra were recorded by exciting the sample with a pulsed, frequency-doubled Nd:YAG-pumped tunable dye laser of 9 ns pulse width and 0.08 cm-1 linewidth, and detected with an EG&G PAR optical multichannel analyzer. The measurements were carried out with the sample kept at 10 K in a closed-cycle helium cryostat.
For lifetime measurements, the fluorescence was analyzed with a 0.25 m Jobin-Yvon monochromator and the signal was detected by a Hamamatsu R636 photomultiplier. Data were processed by a Tektronix oscilloscope.
FLN spectra
Time-resolved line-narrowed fluorescence spectra of the 5D0 → 7F0-6 transitions of the Eu3+-doped Na3La9O3(BO3)8 crystal were obtained at 10 K by using different resonant excitation wavelengths within the 7F0 → 5D0 transition and at different time delays after the laser pulse.
Depending on the excitation wavelength, the emission spectra present different characteristics concerning the number of observed 5D0 → 7FJ transitions, their relative intensity, and the magnitude of the observed crystal-field splitting for each 7FJ state. Figure 1 shows the spectra corresponding to the 5D0 → 7F0,1,2 transitions obtained with a time delay of 10 μs after the pump pulse at four different pumping wavelengths, 581.9, 581.7, 580.4 and 580 nm, which selectively show the presence of four main isolated Eu3+ sites.
We shall hereafter refer to the optical features of these spectra as originating from sites A (λexc = 581.9 nm), B (λexc = 581.7 nm), C (λexc = 580.4 nm), and D (λexc = 580 nm). The presence of the line for the 5D0 → 7F0 transition in each spectrum indicates a site of Cnv, Cn or Cs symmetry for Eu3+. These symmetries allow the transition as an electric dipole process, according to the group-theory selection rules, through a linear term in the crystal-field expansion [13]. The symmetry characteristics of these Eu3+ optical centers can be inferred by comparing the number of possible and experimentally observed 5D0 → 7F0-6 transitions [14], and thus some symmetry point groups can be initially proposed for these Eu3+ optical centers.
The spectra obtained with excitation wavelengths 581.9 and 580.0 nm display, in each case, two Stark levels for the 5D0 → 7F1 transition and four levels in the hypersensitive 5D0 → 7F2 region. These results indicate that Eu3+ in the A and D sites experiences a rather high hexagonal, trigonal or tetragonal symmetry. Given the scarce number of energy levels observed for the 5D0 → 7FJ transitions with J > 2, we cannot reasonably extract more information about specific symmetry point groups from these spectra, a task that must be undertaken under detailed consideration of the Na3La9O3(BO3)8 crystal structure, as will be developed in the following section. On the contrary, the spectrum obtained with the excitation wavelength 581.7 nm shows three Stark levels for the 5D0 → 7F1 transition, and five and seven levels for the 5D0 → 7F2 and 5D0 → 7F3 emissions, respectively, which means that the degeneracy of these three states is completely lifted; that is, the Eu3+ B optical center is located in a crystal site with C2v or lower symmetry. Finally, in the spectrum collected with excitation wavelength 580.4 nm, the two and three energy levels observed for the 5D0 → 7F1 and 5D0 → 7F2 transitions, respectively, are compatible with a trigonal symmetry for this Eu3+ site. It is worth noticing that some minor peaks appearing in the spectra of the less intense emissions from sites C and D, probably associated with contributions from Eu3+ ions placed in residual and/or interstitial sites, have been disregarded. Table 1 (see Appendix) summarizes the FLN spectral characteristics of the A, B, C, and D crystal field sites together with a plausible assignment of the crystallographic cationic site for Eu3+. Energy levels observed for transitions from 5D0 to the ground 7FJ manifold for these four main Eu3+ sites are included in Table 2 (see Appendix).
Regarding the relative intensity of the emission coming from the different sites, it is worth mentioning that the highest intensities correspond to sites A and B, the intensity from site A being around 100 times higher than that from site B and around three orders of magnitude higher than the intensities from sites C and D.
Lifetimes
As could be expected, if there are different sites for the Eu3+ ion, the lifetime of the 5D0 state should depend on the excitation wavelength. We have measured the lifetime of the 5D0 state at the different excitation wavelengths at which the Eu3+ sites are selectively resolved, collecting the luminescence at the most intense Stark component of the 5D0 → 7F2 transition. The experimental decays are well described, to a good approximation, by a single exponential function. The measured lifetimes are 1.87 ms, 1.73 ms, and 1.32 ms for sites A, B, and C, respectively. The low intensity of the emission from site D makes it difficult to measure its lifetime accurately.
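The lifetimes quoted above follow from single-exponential fits to the selectively excited 5D0 decay curves. The short sketch below illustrates such a fit on a synthetic trace; the time base, noise level and initial guesses are placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, i0, tau, bg):
    """Single-exponential decay with a constant background."""
    return i0 * np.exp(-t / tau) + bg

# Hypothetical decay trace (time in ms); in practice this would be the
# digitized photomultiplier signal recorded after the pulsed excitation.
t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(0)
signal = single_exp(t, i0=1.0, tau=1.87, bg=0.01) + rng.normal(0, 0.01, t.size)

# Fit; p0 holds rough initial guesses for amplitude, lifetime (ms) and background.
popt, pcov = curve_fit(single_exp, t, signal, p0=(1.0, 1.0, 0.0))
tau_ms = popt[1]
tau_err = np.sqrt(np.diag(pcov))[1]
print(f"fitted 5D0 lifetime: {tau_ms:.2f} +/- {tau_err:.2f} ms")
```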
Crystal-field analysis and simulation of the energy level schemes
The detailed description of the theoretical background of the crystal field analysis and of the methods followed to reproduce the experimental sequences of energy levels for Eu3+ in the A, B, C, and D sites has been given previously [11,12]. In each case, the one-electron crystal field Hamiltonian can be expressed [15] as a sum of products of tensor operators, with real Bkq and complex Skq parameters as coefficients, the latter as appropriate to the Eu3+ site symmetry in the host; the explicit forms are given for the A and D sites (Eq. 1), the B site (Eq. 2) and the C site (Eq. 3).
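For reference, in the Wybourne formalism of [15], with the Bkq/Skq parametrization and the spherical tensor operators C(k)q, the general one-electron crystal-field Hamiltonian can be written as below; Eqs. (1)-(3), which are not reproduced in this extract, correspond to the subsets of these terms allowed by the C4v, C2 and C3v site symmetries, so the expression is quoted here only as the generic form from which they follow.

```latex
H_{\mathrm{CF}} = \sum_{k=2,4,6} \left[ B^{k}_{0}\, C^{(k)}_{0}
  + \sum_{q=1}^{k} \left( B^{k}_{q}\left(C^{(k)}_{-q} + (-1)^{q} C^{(k)}_{q}\right)
  + i\, S^{k}_{q}\left(C^{(k)}_{-q} - (-1)^{q} C^{(k)}_{q}\right) \right) \right]
```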
Sets of 19, 34, 19, and 18 observed Stark levels, included in Table 2, out of the total numbers of 37, 49, 33, and 37, were considered in the simulation of the sequence of Eu3+ 7FJ energy levels in sites A, B, C, and D, with C4v, C2, C3v, and C4v crystal fields, respectively. The resulting simulated energy levels are also collected in Table 2, and the values of the corresponding crystal field parameters and the figures of merit of the respective fits are included in Table 3 (see Appendix).
Correlation of FLN isolated Eu 3+ sites with the crystal structure
The presence of the Eu3+ optical centers observed above must be explained by considering which sites of the Na3La9O3(BO3)8 crystal structure can accommodate Eu3+ cations. Thus, the assignment of each A, B, C or D site to a specific site in the crystal structure must be guided by the symmetry-related characteristics of the optical centers resolved in the FLN spectra. Though the Eu3+ ion usually substitutes for lanthanide cations in most lanthanide-based compounds, in some mixed oxides containing monovalent cations these ions may have the same, or nearly the same, oxygen coordination as at the lanthanide site, giving rise to some structural disorder [16] which facilitates the occupancy of these sites by the RE ions if charge compensation is allowed; therefore, we start this correlation with an inspection of the symmetry characteristics of their oxygen environments.
Following the previous structure description [9], from which the same numbering of atoms has been kept in the subsequent text and in Figs. 2 to 4, Na3La9O3(BO3)8 crystals present the symmetry of the hexagonal space group P-62m (No. 189), with lattice parameters a = 8.9033(3) Å, c = 8.7131(3) Å, V = 598.14(4) Å3, and Z = 1 (see Fig. 2). In this oxyborate host the La atoms occupy two different crystal sites, 3g and 6i, coordinated to eight and nine oxygen atoms, respectively. The La1O8 polyhedron can be described as a distorted square antiprism (SAP), with C4v symmetry, and La2O9 as a distorted monocapped square antiprism (MSAP) with C2v (or lower) symmetry [see Figs. 3(a) and 3(b)].
The Na+ cation is surrounded by six oxygen atoms, the NaO6 coordination polyhedron being described as a highly distorted octahedron [see Fig. 3(c)]. This coordination is quite unusual for trivalent lanthanides [14], and therefore we have considered an extended oxygen environment which includes oxygen atoms beyond the nearest neighbors indicated above. Four additional O3 atoms, from close La1O8, La2O9 and B3O3 polyhedra, are found at a distance of 3.185(3) Å from Na+, in such a way that the oxygen distribution of the resulting NaO10 polyhedron can be described as a tetracapped trigonal prism (TTP), with C3v symmetry, which is one of the most frequently observed coordination polyhedra for lanthanide systems [see Fig. 3(d)]. From the above-mentioned crystallographic symmetries, whose characteristics are included in Table 1, it seems reasonable to attribute the spectra of sites A and B to Eu3+ located in environments derived from the replacement of La3+ in the La1O8 and La2O9 polyhedra, respectively. Moreover, the C3v symmetry of the extended NaO10 coordination polyhedron could account for the crystal field characteristics found for Eu3+ at site C, where some kind of additional charge compensation should be expected.
Up to now the La and Na crystal sites can explain the main three of the four isolated Eu3+ sites in the FLN spectra. Therefore, an additional cationic site possessing the C4v symmetry suggested by the spectroscopic characteristics of the remaining D spectrum should be identified in the Na3La9O3(BO3)8 crystal structure; candidates are found among the borate groups, B(O3)3 (z = 0.32), and La1O4 (z = 1/2, on the mirror plane) (see Fig. 4). Within this picture, the three B1O3, B2O3 and B3O3 triangles run in rows along the c axis, as shown in Fig. 4. If the sites of the boron cations act as perturbed Eu3+ sites induced by the Eu3+ doping itself, they could manifest the C4v symmetry corresponding to the remaining D site in the FLN spectrum. Let us examine the local extended oxygen environments around the three B cations. For Eu3+ in the B1 site, three O4 at a distance of 3.420(3) Å and three O1 at 3.917(3) Å constitute its extended environment, with C3v local symmetry. Correspondingly, Eu3+ in the B2 site is surrounded by six O3, all of them at 3.084(3) Å. When the substitution in the B3 site is considered, Eu3+ is surrounded by three O2 at 3.145(3) Å, three O3 at 3.419(3) Å, and three O4 at 3.694(3) Å, which could result in a C4v local symmetry. This last perturbed Eu3+ site, surrounded by nine oxygen atoms, can be thought of as the origin of the D center, which moreover could be distributed in an ordered way throughout the crystal and would require nearby cationic vacancies for charge compensation. In conclusion, according to the above-mentioned symmetry characteristics of the FLN spectra for Eu3+ located in the A, B, C, and D sites, the simulations of the corresponding energy level sequences performed for C4v, C2, C3v, and C4v crystal field potentials, respectively, yield 7FJ schemes in very good agreement with the experimental data, as can be seen in Table 1.
Initially the spectrum for Eu3+ in site B was simulated by considering a C2v potential, but the agreement between observed and calculated energy levels was found to improve on introducing the complex Skq parameters of the C2 symmetry, which in turn agrees with the fact that the La2O9 site, to which the Eu3+ B spectrum corresponds, is a very distorted MSAP.
The C4v characteristics of the Eu3+ spectrum in site A, the most abundant one, fully reflect the nature of the La1O8 environment, with La1 located on the mirror plane along the c axis. The crystal field parameters involved in the description of C4v are the same as for the D4h and D4 potentials, but the presence of the 5D0 → 7F0 transition unambiguously rules out these latter symmetries.
The spectra of the Eu3+ A and D sites, both with C4v symmetry characteristics, have been reproduced with very different sets of crystal field parameters (see Table 3), which correspond to very different Eu3+ local environments.
On the other hand, the inferred existence of an extended NaO10 environment for Eu3+ in site C can be understood on the basis of the poor or incomplete shielding of Eu3+ by the six nearest coordinated oxygen atoms that form the distorted octahedral coordination in the crystallographic description of the structure. Thus, the crystal field generated by the ligands in the first coordination sphere is, in this case, not a good enough approximation to the crystal field perturbation felt by the Eu3+ doping cation, an effect which was previously described in another well-known Eu3+-doped layered borate crystal, YAl3(BO3)4 [17].
Conclusion
By using the fluorescence line narrowing technique we have demonstrated the existence of four different local environments around the RE ions in the Na3La9O3(BO3)8 crystal. On the ground of the experimental results, the crystal-field analysis and the simulation of the energy level schemes allow us to connect the predicted symmetry of the resolved sites with the crystal structure. In conclusion, although the RE ions may occupy the crystallographic sites of La1, La2, Na, and B3, the luminescence results suggest that the first possibility is the most likely to occur.
Fig. 2. Projection of the structure of Na3La9O3(BO3)8 on the ab plane. Larger blue and yellow spheres represent La1 and La2 cations, respectively, medium cyan spheres stand for Na cations, red, pink, and violet triangles indicate B1O3, B2O3 and B3O3 groups, respectively, and the smallest green spheres are the oxygen atoms. | 5,397 | 2008-02-18T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Physics"
] |
Testing the Potential of OSL, TT-OSL, IRSL and Post-IR IRSL Luminescence Dating on a Middle Pleistocene Sediment Record of Lake El'gygytgyn
Abstract
Lake El'gygytgyn is a 12 km wide crater lake located in remote Chukotka in the far East Russian Arctic, about 100 km to the north of the Arctic Circle. It was formed by a meteorite impact about 3.58 Ma ago. This study tests the paleomagnetic and proxy-data-based Mid- to Late-Pleistocene sediment deposition history using novel luminescence dating techniques on sediment cores taken from the centre of the 175 m deep lake. For dating, polymineral and quartz fine grains (4-11 µm grain size range) were extracted from nine different levels in the upper 28 m of sediment cores 5011-1A and 5011-1B. Polymineral sub-samples were analysed by infra-red stimulated luminescence (IRSL) and post-IR infra-red stimulated luminescence (post-IR IRSL) using single aliquot regenerative dose (SAR) sequences. SAR protocols were further applied to measure the blue-light optically stimulated luminescence (OSL) and thermally transferred OSL (TT-OSL) of fine-grained quartz, supplemented by a multiple aliquot TT-OSL approach. According to an independent age model, the lowest sample, from 27.8-27.9 m below lake bottom level, correlates with the Brunhes-Matuyama (B/M) reversal. Finally, the SAR post-IR IRSL protocol applied to polymineral fine grains was the only luminescence technique able to provide dating results of acceptable accuracy up to ca. 700 ka. Major factors limiting the precision and accuracy of the luminescence chronology are, for some samples, natural signals already approaching the saturation level, and, overall, the uncertainty related to the sediment water content and its variations over geological time.
Introduction
Luminescence dating is long established as a reliable tool to provide absolute chronologies for Late Pleistocene sediments from numerous depositional environments. The event being dated by luminescence is the exposure of the sediments to sunlight prior to deposition and coverage. Most dating studies are based on quartz optically stimulated luminescence (OSL) applied to sediment archives of aeolian, fluvial or lacustrine origin. However, the use of quartz for OSL dating is typically limited to the last 100-150 ka because of saturation effects of the quartz luminescence signal with increasing dose. Infrared stimulated luminescence (IRSL) of potassium-rich feldspars has the potential to extend the datable age range because dose-response curves from K-rich feldspars or polymineral fine grains show a much higher saturation dose compared to quartz. But it has been known for several decades now that the IRSL signal can show anomalous fading (Wintle, 1973). This signal loss over burial time often gives rise to significant age underestimation. Fading correction procedures have been developed (Lamothe and Auclair, 1999; Huntley and Lamothe, 2001) to correct for these underestimations, but they are inapplicable to samples whose dose-response curves approach the saturation level. Recent studies (Thomsen et al., 2008; Buylaert et al., 2009; Li and Li, 2011) have presented a measuring protocol in which a high-temperature IRSL signal is measured after a low-temperature IRSL signal. This post-IR IRSL is less prone to fading but has the same high saturation dose limits as the low-temperature IRSL. These characteristics make it a method of great importance for dating Middle and Late Pleistocene deposits, and it has become the preferred protocol for feldspar dating if fading corrections are unreliable or not valid (Buylaert et al., 2012).
Only very few studies have focussed on luminescence dating of lake sediments from the Upper and Middle Pleistocene so far (e.g. Forman et al., 2007; Juschus et al., 2007; Lowick and Preusser, 2011; Lukas et al., 2012). This is probably due to the fact that such environments are afflicted with several complications. One important complication is the accurate estimation of the palaeo-water content. This variable is the most crucial, because water attenuates external radiation and causes a lowering of the dose rate received by the dosimeters, i.e. the sediment grains. The dose rate is a substantial part of the age equation, and faulty dose rate calculations lead to inaccurate ages. A second problem that can affect the dose rate determination in lacustrine environments is the presence of radioactive disequilibria in the uranium decay chain (Krbetschek et al., 1994). Similar to changes in water content, this imparts a non-constant dose rate on the sampled material through time. A third question that has to be considered comprises the potential sediment transport and remobilisation processes by turbidites or landslides. Lake El'gygytgyn is located in Central Chukotka, NE Russia, ∼100 km north of the Arctic Circle (Fig. 1), and even today the surface is frozen and covered with ice for about 9 months of the year (Melles et al., 2012). Fluvial and aeolian input is thus limited to the few summer months when the lake surface is open water and the light conditions allow a reasonable bleaching of the sediment grains. However, the crater is roughly 18 km in diameter and the catchment is less than three times the lake's surface area (Melles et al., 2011). About 50 small inlet streams drain into the lake (Nolan et al., 2002), and it is relevant to ask whether a transport distance of just a few km is sufficiently long to enable full bleaching of the transported sediment. Today, the fluvial input to the lake is very low and much of the sediment is deposited at the mouths of the inflows in shallow lagoons, which are dammed by gravel bars formed by wave and lake-ice action (Melles et al., 2011). Temporary deposition in these lagoons and further transport through the clear surface waters, with a Secchi transparency depth of 19 m in summer (Melles et al., 2012), might enable a reasonable bleaching of the suspended matter. In addition, as a result of short transport distances, the sediments may have experienced insufficient cycling prior to deposition, which may lead to poor OSL sensitivity in quartz samples (Preusser et al., 2006; Fuchs and Owen, 2008), which in turn sometimes makes extraction of a dateable signal challenging.
The samples analysed in this study originate from cores A and B of ICDP site 5011, which was drilled from February until April 2009 employing a 100-ton drilling platform on the artificially thickened ice cover, in 170 m water depth in the central part of Lake El'gygytgyn (Melles et al., 2011; Fig. 1). An independent age model for the 3.6 Ma core composite is provided by magnetostratigraphy and tuning of proxy data to the regional insolation and the global marine isotope stratigraphy (Melles et al., 2012; Nowaczyk et al., 2012). The objective of our study was to test different approaches of luminescence dating on the exceptionally long sediment record drilled, and to provide complementary information on the core stratigraphy.
Sample preparation
The cores from ICDP site 5011 were taken in transparent plastic liners of 10 cm diameter. For luminescence dating, 10 cm thick pieces were taken from replicate core parts and wrapped with black tape to prevent further light exposure. These liner pieces were then opened under subdued red-light conditions in the luminescence sample preparation laboratory in Cologne. The sediment was pushed out of the liner and a small block of 3 × 3 × 6 cm was cut out of the inner core to eliminate any grains that had been exposed to light during cutting and preparation of the liner pieces. The small block was then treated with hydrochloric acid, hydrogen peroxide and sodium oxalate to remove carbonate and organics and to dissolve coagulations. Due to the lack of sufficient sand-sized minerals, the 4-11 µm fine grain fraction was prepared following Frechen et al. (1996). Quartz was separated by etching the polymineral fine grains with hexafluorosilicic acid for 7 days.
The remaining sediments from the liner were dried, homogenised and prepared for gamma-ray spectrometry. The effective water content was determined from the weight loss after drying the bulk samples and is given as the ratio of the weight of water to the dry mass. Comparing the results with the original water content, measured on a second correlative core soon after drilling, most of the values are in fairly good agreement (Table 1), indicating that no significant water loss occurred between coring and sample preparation. Only sample 1A1H3, taken from a turbidite layer, shows a significant discrepancy.
Dosimetry
Radionuclide analyses (uranium, thorium, potassium) were carried out by gamma-ray spectrometry in the Cologne laboratory and at the VKTA Rossendorf e.V. (D. Degering, Dresden), respectively (see Table 1). 157 g of dry sample material was packed in 90 × 25 mm polystyrol containers and stored for 4 weeks to compensate for the radon loss induced by sample preparation. In water-lain sediments, disequilibria in the uranium decay series are quite common. The impact of changes in the sediment dose rate on age estimates can be substantial but is hardly quantifiable over large time scales. An underestimation of the dose rate compared to the effective dose rate over the whole burial period would result in an age overestimation. The Cologne gamma-ray spectrometer has a good sensitivity in the higher-energy part of the spectrum, and 238U is hence quantified from peaks of its early daughters; comparison with 226Ra then gives information about disequilibria in the early decay chain and can reveal uranium loss or uptake, since uranium is mobile in oxidising aqueous solutions. Radium is also known to be mobile in lake sediments and can accumulate in individual horizons. The samples analysed in Dresden show only a small decrease between the early isotopes and 226Ra, but two of the three samples show a slightly more significant decrease between 226Ra and 210Pb (Fig. S1). This indicates a radon loss and may give evidence that the uranium series is not in a state of equilibrium. However, for all three samples analysed here the activities of the daughters 226Ra and 210Pb still agree within 2-sigma errors with the activity of the mother (Fig. S1). Hence, the impact on the age calculation is presumably not severe. With regard to the decay rates of the nuclides, this decrease can be attributed to mobilisation processes within roughly the last 100 yr, but it is not possible to quantify the frequency of mobilisation processes over the last several hundred thousand years, and it is therefore not viable to quantify the real impact on the age calculation. Dose rates (see Tables 3 and 6) and ages were calculated using the "age" software (version 1999) by R. Grün, Canberra, which includes the dose conversion factors published by Adamiec and Aitken (1998). The alpha efficiency was set to 0.035 ± 0.02 for quartz samples and 0.07 ± 0.02 for polymineral fine grain samples, following Rees-Jones (1995).
Attenuation factors and water content
The cosmic contribution to the dose rate is usually calculated according to the sampling depth. It can be neglected for the given samples, because the influence of cosmic radiation on the minerals is completely attenuated by the overlying sediment and water column. More important is the impact of the sediment water content. Attenuation of ionising radiation is more effective in sediments with water-filled interstices and has strong effects on the resulting age estimates. The "as found" water contents are hardly representative of the moisture conditions during the whole burial period, especially not for such a long time span as considered in this study. Mechanical compaction by overlying sediments and mass movements may have reduced or increased the water content during burial and is difficult to quantify. Full saturation of the sediment can be measured using laboratory methods (Lowick and Preusser, 2009), but a retrospective determination of water content changes through time is far from straightforward. We therefore took the measured water content for age calculation and added three more age estimates calculated with assumed water contents in the tables to illustrate the impact of water content variation on the age calculation (see Tables 3 and 6).
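To make this dependence explicit, the sketch below applies water-attenuation factors of the kind commonly used for the alpha, beta and gamma dose-rate components (1.50, 1.25 and 1.14, Aitken-type values) to hypothetical dry components and an arbitrary equivalent dose; none of the numbers are taken from Tables 1, 3 or 6.

```python
def wet_dose_rate(d_alpha, d_beta, d_gamma, w):
    """Attenuate dry dose-rate components (Gy/ka) for a water content w
    (weight of water / dry mass), using Aitken-type attenuation factors."""
    return (d_alpha / (1.0 + 1.50 * w)
            + d_beta / (1.0 + 1.25 * w)
            + d_gamma / (1.0 + 1.14 * w))

# Hypothetical effective dry dose-rate components (Gy/ka) and equivalent dose (Gy).
d_alpha, d_beta, d_gamma = 0.30, 2.20, 1.10
equivalent_dose = 450.0

for w in (0.40, 0.70, 1.00):  # assumed water contents (weight ratios)
    dr = wet_dose_rate(d_alpha, d_beta, d_gamma, w)
    print(f"w = {w:.2f}: dose rate = {dr:.2f} Gy/ka, age = {equivalent_dose / dr:.0f} ka")
```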
Luminescence measurements and results
Luminescence measurements were carried out on an automated Risø TL/OSL reader (TL-DA-12) with a calibrated 90Sr/90Y beta source delivering 4.08 Gy min-1. A U-340 filter was used for the quartz measurements and a 410 nm interference filter for the IRSL measurements on polymineral samples. Several different dating techniques had to be tested in this study to evaluate the most appropriate protocol for achieving reliable luminescence age estimates for such old, i.e. > 300 ka, sediments (Table 2). This expanded dating programme evolved from the fact that the sediments analysed here are comparatively old and cover a time span which is rarely dated securely by luminescence techniques.
SAR-OSL on fine grain quartz
The standard SAR protocol (Murray and Wintle, 2000) was applied to three samples, using blue stimulation for 50 s at 125 °C, a pre-heat of 240 °C and a cut heat of 220 °C. Equivalent dose (De) values were determined using the first 0.6 s of the OSL decay curve and subtracting the background of the last 5 s. Another approach using the early background subtraction method (Ballarini et al., 2007), with an integral of 0-0.4 s for De determination and a background integral of 1.0-1.4 s, as described by Lowick and Preusser (2011), did not improve the dataset and was hence rejected. Dose-response curves were fitted with a single saturating exponential plus linear function. An experimental dose-response curve measured for sample 1B2H2 with 12 dose steps up to 1237 Gy fitted well to a single saturating exponential plus linear function and did not show any saturation effects (Fig. 2). The protocol parameters were validated by pre-heat plateau tests and dose recovery tests at different temperatures. The validation tests confirmed an appropriate pre-heat temperature between 220 °C and 240 °C and an excellent dose recovery at 240 °C, with a measured to given dose ratio of 1.00 (Fig. 3a and b). IR tests for feldspar contamination were made standard practice.
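For illustration, equivalent dose determination in the SAR protocol amounts to fitting the sensitivity-corrected regenerated points with the saturating exponential plus linear function and projecting the natural signal onto the fitted curve. The sketch below uses invented regeneration points and an invented natural signal purely to show the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def growth(d, a, d0, c):
    """Single saturating exponential plus linear dose-response function."""
    return a * (1.0 - np.exp(-d / d0)) + c * d

# Hypothetical regenerative doses (Gy) and sensitivity-corrected signals Lx/Tx.
reg_dose = np.array([0.0, 100.0, 250.0, 500.0, 800.0, 1200.0])
lx_tx    = np.array([0.02, 0.95, 1.90, 2.80, 3.40, 3.90])
natural  = 2.45  # sensitivity-corrected natural signal (placeholder)

popt, _ = curve_fit(growth, reg_dose, lx_tx, p0=(3.0, 300.0, 1e-3))

# Equivalent dose: dose at which the fitted curve equals the natural signal.
de = brentq(lambda d: growth(d, *popt) - natural, 0.0, reg_dose.max())
print(f"equivalent dose De ≈ {de:.0f} Gy")
```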
The results obtained with the standard SAR-OSL protocol on quartz underestimate the expected age range significantly. Maximum De values remained below 410 Gy and none of the three samples reached the saturation level. The maximum De was obtained for sample 1A6H1B. With regard to the age model provided by Melles et al. (2012) and Nowaczyk et al. (2012), sample 1A6H1B from 17.88-17.93 m blf (below lake floor) is supposed to have been deposited about 465 ka ago. Sample 1A9H2 was taken between 27.817 and 27.917 m blf and is supposed to be 757 ka old. The SAR-OSL results of 137 ± 20 ka and 129 ± 13 ka for these two samples are far below the expected age range and show no increase in age with depth (Table 3 and Fig. 5). Although the dose-response curves show no saturation effects with increasing dose, 400 Gy appears to be the maximum De for fine grain quartz. Thus, a quartz age beyond 200 ka is not attainable. Although the SAR-OSL protocol is the most widely accepted protocol and has proved very successful for dating quartz, there are several studies reporting age underestimations for quartz beyond the Eemian or even as young as 70 ka (Murray et al., 2008). Similar observations were made by Lowick and Preusser (2011), Lowick et al. (2010), Lai (2010) and Timar et al. (2010), who all reported underestimations with fine grain quartz. The majority of these studies show that the samples meet the standard validation criteria for the SAR protocol, which usually allows the assumption that reliable OSL ages are obtained. They all observed well-shaped dose-response curves, which are best fitted by a single exponential plus linear function. While the characteristics of the dose-response curve suggested that determination of De values up to 400 Gy should be possible, Lai (2010) showed that age determination was only reliable up to 230 Gy. Lowick and Preusser (2011) and Lowick et al. (2010) reported on sedimentary quartz from Northeastern Italy. Their OSL ages agree well with biochronological constraints up to 140 Gy (70 ka) but increasingly underestimate the age beyond this point. They assume the underestimations are caused by the presence of different components of the luminescence signal with different luminescence characteristics, but have no definitive explanation for them. They also report on the slowly growing dose-response curve beyond 400 Gy, which is represented by a linear component.
A decomposition of the Lx/Tx quotient in our study revealed an obvious answer to the question of why the dose-response curve shows a linear response at high doses. Plotting the background-subtracted regenerated signals (Lx) and the background-subtracted test dose signals (Tx) against the administered beta dose shows a saturation of the Lx curve above 400-500 Gy. The corresponding test dose signal (Tx) shows a small rise up to the 500 Gy regenerative dose and a decrease beyond, although the test dose was always kept constant (Fig. 2). The ratio then seems to indicate a rising Lx/Tx with dose, but the uncorrected OSL signal is in saturation. Hence, the signal growth of the sensitivity-corrected OSL signals is an artefact of the response to the test dose, which is commonly well below the saturation level. Here, the sensitivity correction conducted as part of the SAR protocol results in erroneous equivalent dose estimates in dose ranges beyond the saturation level of the uncorrected regenerated OSL.
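This artefact can be reproduced numerically: if the uncorrected regenerated signal Lx saturates while the test-dose response Tx decreases slightly at high regenerative doses, the ratio Lx/Tx keeps growing. The sketch below assumes an arbitrary saturation dose and rate of Tx decrease, not the measured values of Fig. 2.

```python
import numpy as np

doses = np.array([100.0, 300.0, 500.0, 800.0, 1200.0, 1600.0])  # Gy

# Assumed behaviour: Lx saturates with a characteristic dose of 180 Gy;
# Tx (constant test dose) drops slightly once the preceding regenerative
# dose exceeds ~500 Gy.
lx = 1.0 - np.exp(-doses / 180.0)
tx = np.where(doses <= 500.0, 1.0, 1.0 - 2.0e-4 * (doses - 500.0))

for d, l, t in zip(doses, lx, tx):
    print(f"{d:6.0f} Gy  Lx = {l:.3f}  Tx = {t:.3f}  Lx/Tx = {l/t:.3f}")
# Lx is essentially flat above ~500 Gy, yet Lx/Tx still increases,
# mimicking the linear high-dose component of the dose-response curve.
```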
Thermally transferred OSL (TT-OSL)
Thermally transferred OSL (TT-OSL) was proposed by Wang et al. (2006) to extend the age range of quartz and to provide quartz OSL dates for Middle Pleistocene sediments. The TT-OSL signal is measured after depletion of the conventional OSL signal and a subsequent pre-heat, which is applied to induce the thermal transfer of charge. The TT-OSL signal has a saturation limit at least an order of magnitude greater than that of the fast component of the conventional OSL signal (Wang et al., 2007) but, in contrast to initial suggestions by Wang et al. (2006), it is considerably less light-sensitive than the fast-bleaching OSL component (Tsukamoto et al., 2008; Jacobs et al., 2011).
The simplified SAR protocol for TT-OSL developed by Porat et al. (2009) (Table 2) was tested on three different samples, but the results showed that it was not suited to the quartz from Lake El'gygytgyn. Significant sensitivity changes and a non-linear dose response prevented the fitting of a sensible dose-response curve to the data. A dose recovery test carried out on 1A3H1 failed to meet the validation criteria; the average measured dose overestimated the given dose by more than 25 %, resulting in a ratio of 1.32.
The recycling ratio was poor (more than the acceptable 10 % deviation) and the recuperation of 10-20 % significantly exceeded the acceptance level of 5 % (Murray and Wintle, 2000). An additional hot bleach (OSL shine-down at 300 °C for 100 s), as proposed by Stevens et al. (2009), was then added after the TT-OSL measurement to remove the residual charge carried over from the regenerated dose measurements to the test dose measurement cycle. This led to a perfectly linear dose-response curve (Fig. 4); however, the overestimation was hardly reduced, the recycling ratio remained poor, and the recuperation still varied between 4 % and 10 %. Independent tests of this modified TT-OSL protocol applied to modern test quartz with an artificial beta dose of 206 Gy yielded a perfectly linear growth curve, a very small underestimation of about 10 %, and negligible recuperation. This result underlines the general validity of the modified TT-OSL protocol, but also that it is not suitable for the samples of this study.
To avoid the problem of charge transfer, a multiple aliquot regenerative dose (MAR) TT-OSL approach was designed and tested. Twelve and sixteen discs, respectively, were prepared from two samples (see Table 3). The natural OSL and the natural TT-OSL (steps 2-5 of the given SAR protocol, Table 2) were recorded, followed by a hot bleach (300 s blue stimulation at 280 °C). The same discs were then irradiated with four dose steps up to 814 Gy, and steps 2-5 of the SAR protocol were repeated. The first 0.4 s of the natural TT-OSL signal were used for short-shine normalisation, and the residual level was determined by repeating steps 2-5 on the first three discs. De values were determined by integrating the first 1.2 s and subtracting the average signal of the last 10 s as background. The results show a linear growth curve and a good response to the short-shine normalisation. Sample 1A1H3 overestimates the expected age range inferred from the independent age model if the measured water content, which is comparatively high, is used for the age calculation. A water content of about 70 %, as measured for many of the other samples, would yield an age of about 160 ka. This would match the expected age range and gives a good example of the impact of the water content on the age calculation. The result for sample 1A4H2, however, underestimates the expected age range significantly (see Table 3). Hence, with respect to the independent age control, the TT-OSL protocols were considered not to yield the required reliability for dating the samples under study here.
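In such a multiple-aliquot approach the equivalent dose follows from a regression of the normalised regenerated TT-OSL against the added dose and interpolation of the normalised natural signal. The sketch below uses placeholder numbers only, and the normalisation and residual handling are simplified relative to the procedure described above.

```python
import numpy as np

# Hypothetical normalised TT-OSL of the regenerated discs vs. added beta dose (Gy).
added_dose = np.array([0.0, 200.0, 400.0, 600.0, 814.0])
ttosl_norm = np.array([0.05, 0.32, 0.61, 0.88, 1.18])

# Normalised natural TT-OSL (residual level already subtracted).
natural_norm = 0.47

# Linear growth curve, as observed for the MAR data sets.
slope, intercept = np.polyfit(added_dose, ttosl_norm, 1)
de = (natural_norm - intercept) / slope
print(f"MAR TT-OSL equivalent dose ≈ {de:.0f} Gy")
```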
SAR-IRSL50 on polymineral fine grains
The SAR-IRSL50 protocol (Table 2) proposed by Wallinga et al. (2000) for coarse grain feldspars was slightly modified and applied to the polymineral fine grain fraction. This fraction contains the natural fine silt mineral composition including quartz, though the IR-stimulated luminescence emitted in the 410 nm range is dominated by potassium-rich feldspar. Feldspar is known to have a much higher saturation dose than quartz and is therefore often used if quartz is not suitable for various reasons. However, the available age range and the precision of feldspar dating are often hampered by anomalous fading, a spontaneous signal loss (Wintle, 1973) that leads to underestimation of feldspar ages. Corrections for this signal loss and the corresponding age underestimations have been proposed (Lamothe and Auclair, 1999; Huntley and Lamothe, 2001), but these corrections are not valid for old samples with large palaeodoses and, hence, non-linear dose-response curves. IRSL measurements were carried out for 350 s at 50 °C (IRSL50) through an interference filter (410 nm, 5 mm). Dose recovery pre-heat tests were conducted to determine the appropriate pre-heat temperature and to verify the protocol parameters. A set of 15 sub-samples of sample 1A1H2 received a hot bleach, i.e. a 100 s IRSL stimulation at 280 °C in the reader, and a subsequent beta dose of 206 Gy. Three discs each were measured at different pre-heat temperatures, using the same thermal treatments for the regenerative dose and the test dose. The average De values obtained for the different pre-heat temperatures show no plateau (Fig. 5), but a constant decrease with temperature and only a presumably accidental match with the administered dose at a pre-heat temperature of 230 °C. In addition, the De is strongly dependent on the integral size used for De calculation, showing a constant increase of about 17 % within the first 25 measurement channels. This behaviour is usually taken as evidence for signal instability, i.e. for fading, but fading tests revealed only very small fading rates of 1.2 ± 0.4 % per decade (1B3H2) and 0.6 ± 0.4 % per decade (1A3H1). Summarising these observations, the polymineral fine grains are not suitable for the standard SAR-IRSL50 dating protocol. With regard to the insufficient pre-heat plateau, any dating approaches were abandoned because the obtainable results would at best be minimum ages which underestimate the true deposition age significantly. Using polymineral fine grains from younger Lake El'gygytgyn sediments, Juschus et al. (2007) and Forman et al. (2007) described only small fading rates and refrained from applying fading corrections. They obtained good agreement with the expected age range up to ∼160 ka, but underestimate the expected age range significantly beyond 160 ka (Forman et al., 2007).
Recent studies published by Thomsen et al. (2008) and Buylaert et al. (2009) described a high-temperature IRSL signal which is measured at increased temperatures after a first IR shine-down. This post-IR IRSL signal appears to be less prone to fading and is more stable than the low-temperature IRSL signal measured at 50 °C (IRSL50).
The pIRIR290 protocol (Table 2) proposed by Thiel et al. (2011) for Middle and Upper Pleistocene loess from Austria was applied to eight of the polymineral fine grain samples. All sub-samples were prepared on aluminium discs and all measurements were made on a Risø TL-DA-12 with a 5 mm, 410 nm interference filter. Thiel et al. (2011) integrated the first 2.4 s for De determination. With regard to a weak scatter observed for some sub-samples within the first 6-8 channels in the De vs. stimulation time plot, the integral of 1-12 s was taken for De determination and the last 20 s were subtracted as background. In contrast to the SAR-IRSL50 measurements, which showed an increasing De with increasing integral, the De of the high-temperature pIRIR290 signal does not depend on the size of the integral but shows an extended plateau up to approximately 30 s. With regard to the small number of sub-samples measured (Table 5), the median of the individual measurements was finally used as the best representative of the average equivalent dose. Following the bleaching experiments, the signal left after bleaching for 3 h under natural sunlight (20 Gy) was subtracted from the pIRIR290 De values (for discussion see also Thiel et al., 2012; Buylaert et al., 2012). All data are given with total uncertainties at the 1-sigma confidence level. Thomsen et al. (2008) and Buylaert et al. (2009) have demonstrated that the post-IR IRSL signal is easy to bleach down to a certain residual level of 10-20 Gy, but it is necessary to verify this assumption, since the bleaching characteristics are strongly dependent on the sediment type and the mineralogy of the feldspars. A sample set of 1A1H2 was prepared and bleached for 3.15 h and 7.5 h under natural sunlight on a bright cloudless day in June. Another sample set was bleached in a Hönle solar simulator for 4.5 h, 6.5 h and 36 h. The results (Table 4) confirm that the pIRIR290 signal is generally easy to bleach, even if the low-temperature IRSL signal (Lx), which is measured prior to the pIRIR290 signal (see Table 2), bleaches a little faster. After 3.5 h under natural sunlight the pIRIR290 De of sample 1A1H2 is already reduced to about 20 Gy, and it keeps decreasing with further exposure time. Artificial bleaching of different samples in the Hönle solar simulator for 4.5, 6.5 and 36 h gives residuals below 20 Gy. After 36 h in the solar simulator there is still a residual of 14 Gy, but the bleaching level is sample-dependent and not only dependent on the bleaching time. However, this experiment illustrates that the post-IR IRSL signal of the lake sediments can be bleached to a reasonable level in a reasonable time.
The impact of these residuals on the ages can be considered to be quite low. For example, the residual of 20 Gy measured after 195 min of sunlight bleaching for sample 1A1H2 (see Table 4) corresponds to 7.7 ka using the dose rate based on the measured water content. The pIRIR290 age of this sample is 179 ± 20 ka after subtraction of the residual. Thus, the residual amounts to only about 4 % and lies within the range of the 1-sigma error of the age. A dose recovery test on 1A1H2 after natural sunlight exposure for 450 min and a given dose of ∼470 Gy failed to pass the criteria. After subtracting the residual of 14 Gy, the measured to given dose ratio was still 1.38 ± 0.09, whereas a valid protocol should deliver a ratio within 10 % of unity. Another dose recovery test was carried out on five discs of sample 1A3H2 after 390 min of light exposure in the solar simulator and an artificial beta dose of ∼300 Gy. The measured to given dose ratio was still 1.37 ± 0.06, again showing a substantial overestimation. A further dose recovery test on sample 1A3H1 was carried out after 2160 min of bleaching in the solar simulator.
These sub-samples were stored for 6 months after bleaching and then received a beta dose of ∼300 Gy; the average measured to given dose ratio was 1.07, which is within the 10 % range and thus acceptable. Five other discs of samples 1A3H1 and 1A3H2 received a hot bleach in the reader (steps 7-10 of the protocol) instead of an optical bleach, were given a beta dose of ∼300 Gy and were then measured. The measured to given dose ratio was 0.99 ± 0.01 for 1A3H1 and 0.95 ± 0.01 for 1A3H2, thus passing the validity test. The other two routine tests, which are commonly checked by default, gave excellent results, illustrating that the sensitivity correction of the protocol is working well. The average recycling ratio was 1.03 ± 0.02 and the recuperation was low, ranging between 1.0 and 1.5 %. The results obtained for the dose recovery tests using optical bleaching immediately followed by a beta dose seem to indicate some kind of charge transfer after irradiation and during the first pre-heat. Although the natural signal is thoroughly bleached after a few hours under natural or artificial sunlight (Table 4), it is not possible to recover a given dose if it is measured immediately after bleaching. The resulting De overestimates the given dose significantly. If the sample is stored for some months after bleaching, the overestimation is reduced and the given dose can be recovered within 10 % deviation. Cleaning out the samples with a hot bleach before the first beta dose yields acceptable dose recovery ratios, suggesting no significant charge transfer or recuperation processes. At this stage, we do not have a satisfactory explanation for the different behaviour with and without a delay between bleaching and irradiation. We interpret the charge transfer observed after optical bleaching and immediate irradiation as a laboratory artefact, which is not relevant for natural samples because under natural conditions bleaching and dosing happen much more slowly. However, further experiments with varying pause times between bleaching, artificial irradiation and the first pre-heat are necessary to analyse these observations in more detail.
Discussion of the pIRIR290 results
From the comparison of the luminescence dates with the independent age model it becomes evident that only the post-IR IRSL protocol yielded reliable dating results significantly beyond 200 ka (Table 6). The pIRIR290 De values (Table 5) show, in contrast to the IRSL values measured at 50 °C, a constant increase with depth. Three of the samples are in very good agreement with the expected age range if the measured water content is used for the age calculation. Samples from the age range between 200 and 300 ka overestimate the expected age range significantly (Fig. 6), by about 30 %. Many reasons are conceivable for this, such as saturation effects as visible for sample 1A3H2, erroneous water contents, dose rate underestimation by means of disequilibria, or even insufficient bleaching. Juschus et al. (2007) described decreasing water contents with depth for their sediment core, with minimum values around 50 % at the base. There is no such trend visible for the samples of this study, and the question remains whether the measured water content represents the environmental conditions over the geological time since deposition.
The samples from the lower part of the profile are in good agreement with the expected age, and even sample 1A9H2 from the B/M boundary, with an expected natural dose of > 2700 Gy, gave a De of more than 2400 Gy (Fig. 7) and an age of > 700 ka (Fig. 6). The dose-response curves are best fitted by an exponential plus linear function, and the natural sensitivity-corrected signals exceed the highest regenerated dose point of 2430 Gy. The De obtained for 1A9H2 lies in the extrapolated part of the dose-response curve and the true saturation level was not reached. De and age are hence given only as minimum values. In this part of the core section, the natural signal of the post-IR IRSL is slowly approaching the saturation level, but the dose-response curve is rather steep compared to sample 1A6H1B (Fig. 7). Saturation dose experiments with up to 7 regeneration dose points and a maximum dose of 2500 Gy, carried out on samples 1A6H1B and 1A4H2, still allow an exponential plus linear function to be fitted to the data, but the natural pIRIR290 signal lies in the upper, slowly rising linear part of the dose-response curve and is close to saturation (Fig. 8). This slow rise considerably limits the reliability of the three samples 1A9H2, 1A4H2B and 1A3H2 and is taken as one of the reasons for the comparatively large relative standard deviation observed for these samples. Polymineral fine grain samples usually do not show significant scatter, because a large number of grains on the sample disc emit luminescence signals. The pIRIR290 De data sets of these samples show RSD values between 18 % and 29 % (Table 5). Even if the number of aliquots is small and the validity is limited, this scatter is most likely ascribed to the shape of the dose-response curve, indicating a natural dose close to the saturation level. The scatter of the IRSL measurements at 50 °C is much smaller. A similar dating approach by Thiel et al. (2011) on polymineral fine grains extracted from Austrian loess from below the B/M boundary was not successful because the natural pIRIR290 signal was in the saturating part of the dose-response curve, which they defined as > 1600 Gy. In our study the pIRIR290 saturation dose was not reached up to 2400 Gy for sample 1A9H2, which is correlated with the B/M boundary. De values derived for the low-temperature IRSL signal measured as part of the pIRIR290 measurement sequence (see Table 2) do not exceed a maximum value of 540 Gy (see Table 5). Whereas the pIRIR290 De values continue to rise with depth, the low-temperature IRSL ages show no increase with depth, which further illustrates the unreliability of these estimates (Table 6). A similar trend was described by Forman et al. (2007), who reported significant underestimations beyond 160 ka for multiple aliquot low-temperature IRSL measurements on polymineral fine grains. Finally, we deduce that the predominant agreement with the age model and the continuous age increase with depth (see Table 6) speak for the validity of the pIRIR290 data.
No fading corrections were made, neither for the IRSL50 nor for the pIRIR290 De values. Juschus et al. (2007) reported small fading ratios between 0.84 and 0.99 and followed Forman et al. (2007), who determined fading ratios between 0.92 and 0.99 and did not apply fading corrections. The same trend was observed for two samples from this study, which gave fading rates of 1.2 ± 0.4 % per decade (1B3H2) and 0.6 ± 0.4 % per decade (1A3H1) for IRSL50. Although these rates are small, an influence cannot be excluded. However, the uncertainties introduced by fading corrections in the considered age range are expected to exceed any advantage of such a correction here. Buylaert et al. (2012) calculated fading rates for low-temperature IRSL and post-IR IRSL on samples of different origins and concluded that fading rates of 1-1.5 % per decade are most likely artefacts of the measurement procedures. We have therefore abandoned any fading corrections for the IRSL50 and pIRIR290 dose estimates.
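For completeness, the correction that was deliberately not applied here is usually the Huntley and Lamothe (2001) approach, which is only valid in the linear range of the dose response. The minimal sketch below assumes a g-value normalisation delay tc of two days; the input age and g-value are illustrative only.

```python
import numpy as np

def fading_corrected_age(apparent_age_ka, g_percent_per_decade, tc_days=2.0):
    """Iterative Huntley & Lamothe (2001)-type correction, valid only in the
    linear range of the dose-response curve. tc is the delay to which the
    measured g-value is normalised."""
    kappa = g_percent_per_decade / 100.0 / np.log(10.0)
    tc_ka = tc_days / 365.25 / 1000.0
    t = apparent_age_ka
    for _ in range(50):  # simple fixed-point iteration
        t = apparent_age_ka / (1.0 - kappa * (np.log(t / tc_ka) - 1.0))
    return t

# Illustrative only: an IRSL50 g-value of 1.2 %/decade applied to a 180 ka age.
print(f"corrected age ≈ {fading_corrected_age(180.0, 1.2):.0f} ka")
```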
Conclusions
The samples from Lake El'gygytgyn turned out to be very challenging for luminescence dating, and the reasons for this are manifold. Although the general luminescence properties of the sample material, like signal intensity, recuperation, dose response, sensitivity, and others, are acceptable for the fine-grained quartz and polymineral fractions, respectively, the SAR protocols for the OSL and IRSL50 dating techniques failed to deliver satisfactory and reliable dating results. The quartz OSL results underestimate the palaeomagnetic time frame significantly, for example dating the sample from the 780 ka B/M boundary to only about 150 ka. A single aliquot TT-OSL protocol was inappropriate because insufficient sensitivity corrections prevented a reliable curve fit. A multiple aliquot TT-OSL protocol for fine grain quartz was tested, but failed for the older samples as well. The low-temperature IRSL at 50 °C, measured prior to the high-temperature IRSL at 290 °C during the pIRIR290 sequence, gave a maximum age of 180 ka, showing no age increase with depth. Only the post-IR IRSL measurements with 290 °C stimulation temperature (pIRIR290) passed all validation tests and result in ages that increase with depth down to the B/M boundary. Four out of eight samples are in fairly good agreement with the ages calculated from the magnetic polarity, though their errors are large. However, the results obtained for the uppermost sample and the samples from the MIS 7 sediment record seem to show systematic overestimations, but still an increase in age with depth.
It is not possible to verify whether this overestimation is partly due to insufficient bleaching. Polymineral fine grain measurements do not allow conclusions about the bleaching level prior to deposition. Thus, for verification of the dating results obtained in this study, it might be interesting for further projects to analyse the effective bleaching of a modern sample from the delta of one of the small inlet streams of Lake El'gygytgyn.
Fig. 1 .Fig. 2 .
Fig. 1.Location of Lake El'gygytgyn in Northeastern Russia (inserted map) and schematic cross-section of the El'gygytgyn basin stratigraphy showing the location of ICDP Sites 5011-1 and 5011-3.Lz1024 is a 16-m long percussion piston core taken in 2003 that fills the stratigraphic gap between the lake sediment surface and the top of drill cores 1A and 1B (from Melles et al., 2012).
tions, respectively, the SAR protocol for OSL and IRSL 50 dating techniques failed to deliver satisfactory and reliable dating results.Quartz OSL results underestimate the palaeomagnetic time frame significantly, for example date the sample from the 780 ka B/M boundary to only about 150 ka.A single aliquot TT-OSL protocol was inappropriate because insufficient sensitivity corrections prevented a reliable curve fit.A multiple aliquot TT-OSL protocol for fine grain quartz was tested, but failed for the older samples as well.The low temperature IRSL at 50 • C measured prior the high temperature IRSL at 290 • C during the pIRIR 290 sequence gave a maximum age of 180 ka, showing no age increase with depth.Only post-IR IRSL measurements with 290 • C stimulation temperature (pIRIR 290 )
Table 2 .
Compilation of measurement protocols used in this study.
Table 3 .
SAR-OSL dating results and multiple aliquot regenerative dose (MAR) TT-OSL dating results obtained on fine-grain quartz (4-11 µm). Ages and dose rates are given for different assumed water contents and for the measured water content (weight-%) to point out the significance of the water content for the age estimates. All analytical results are presented with their 1-sigma error.
Table 5 .
Equivalent doses calculated for the post-IR IRSL signal measured at 290 °C and the IRSL signal measured at 50 °C in the course of the pIRIR290 measurement sequence (see Table 2). All analytical results are presented with their 1-sigma error. a Number of discs taken for De determination and total number of discs measured. b Relating to the bleaching experiments, the residual signal equivalent to 20 Gy was subtracted from the De. | 9,479.4 | 2012-09-28T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Detection and mitigation of DDoS attacks based on multi-dimensional characteristics in SDN
Due to the large computational overhead, underutilization of features, and high bandwidth consumption in traditional SDN environments for DDoS attack detection and mitigation methods, this paper proposes a two-stage detection and mitigation method for DDoS attacks in SDN based on multi-dimensional characteristics. Firstly, an analysis of the traffic statistics from the SDN switch ports is performed, which aids in conducting a coarse-grained detection of DDoS attacks within the network. Subsequently, a Multi-Dimensional Deep Convolutional Classifier (MDDCC) is constructed using wavelet decomposition and convolutional neural networks to extract multi-dimensional characteristics from the traffic data passing through suspicious switches. Based on these extracted multi-dimensional characteristics, a simple classifier can be employed to accurately detect attack samples. Finally, by integrating graph theory with restrictive strategies, the source of attacks in SDN networks can be effectively traced and isolated. The experimental results indicate that the proposed method, which utilizes a minimal amount of statistical information, can quickly and accurately detect attacks within the SDN network. It demonstrates superior accuracy and generalization capabilities compared to traditional detection methods, especially when tested on both simulated and public datasets. Furthermore, by isolating the affected nodes, the method effectively mitigates the impact of the attacks, ensuring the normal transmission of legitimate traffic during network attacks. This approach not only enhances the detection capabilities but also provides a robust mechanism for containing the spread of cyber threats, thereby safeguarding the integrity and performance of the network.
Among various attacks targeting SDN, Distributed Denial of Service (DDoS) attacks are a common, easily organized, and highly impactful type of cyber attack. Attackers typically use forged IP addresses or control a large number of zombie hosts to continuously send attack packets from any terminal connected to the forwarding devices, causing the switch or controller to become overloaded and unable to respond to normal network service requests promptly. This can lead to a degradation or even paralysis of the SDN network's service quality 3 . Therefore, the detection of DDoS attacks and the mitigation of their effects are becoming a hot issue in SDN application research.
DDoS attack detection in an SDN environment refers to the use of certain technical means to inspect and analyze traffic data within the SDN to uncover potential attack behaviors within the network. Traditional detection methods include statistical-based methods 4 , information theory-based methods 5 , clustering-based methods 6 , and machine learning-based methods [7][8][9] . However, these methods generally face several issues:
• Redundancy in data features, which complicates the analysis process.
• High computational overhead, as the models require significant processing power and time to analyze data.
• Insufficient extraction of feature information, leading to suboptimal detection accuracy.
• The need for improved accuracy in detection methods.
Attack mitigation primarily involves using certain means or strategies to reduce the impact and damage of DDoS attacks on SDN networks.There are typically two methods of implementation: One is to reduce the entry of attack traffic into the network, mitigating the shock effect of DDoS attacks on the network.The other is to divert the traffic from the network devices under attack to devices with lighter loads, ensuring the overall service quality of the SDN does not significantly deteriorate through load balancing.However, neither of these methods can eliminate the impact of DDoS attacks on SDN networks.
Deep learning can leverage neural networks to extract high-order features from unstructured data 10 , enabling an end-to-end working model from raw data input to result output. It has a wide range of applications in fields such as natural language processing, medical image analysis, and financial data forecasting.
In response to the issues associated with traditional DDoS attack detection methods in SDN, we propose a two-stage attack detection and mitigation method based on deep learning by analyzing the organization form and traffic characteristics of DDoS attacks in SDN.In the attack detection phase, we first use the changes in statistical information from switch ports to make a preliminary judgment on the location of the attack source.Then, we conduct feature extraction based on the traffic data output from the suspicious switches, and further extract feature information in the "time, frequency, and spatial" domains of the input feature data using wavelet decomposition and convolutional neural network technology to classify the feature data with a classification function.In the mitigation phase, we utilize graph theory and dynamic deletion strategies to trace and isolate the attack source to mitigate the further adverse impact of the attack on the SDN.
Contributions
The main contributions of our work are:
• A two-stage attack detection mechanism was designed, which achieves a preliminary detection of attack behaviors in the network by collecting statistical feature information from switches without adding extra components; further detection of traffic features from suspicious switches is conducted to achieve fine-grained detection of attack traffic.
• A multi-scale anomaly detection module was designed, utilizing wavelet transform and convolutional neural networks to extract multi-dimensional feature information from traffic data and using a simple classifier to complete the identification and detection of anomalous traffic.
• Utilizing graph theory and a restriction-based mitigation strategy, the attack source host is traced and isolated, thereby preventing new attack traffic from entering the network, mitigating the impact of attacks on the SDN, and ensuring the normal operation of the network.
The rest of this paper is organized as follows."Related work" section introduces the main organizational forms of DDoS attacks targeting SDN and provides a review of current research on DDoS attack detection and mitigation in SDN; "Deep learning-based attack detection and mitigation" section provides a detailed introduction to the attack detection and mitigation method based on deep learning; "Experimental results and analysis" section conducts detection experiments and analysis on the proposed method; Finally, "Conclusion" section concludes the paper.
Related work
The SDN architecture consists of three main components: the application plane, the control plane, and the forwarding plane, as shown in Fig. 1. The application plane primarily serves users and typically includes network services and applications such as traffic control, load balancing, and intrusion detection. The control plane, which is composed of controllers, is responsible for establishing forwarding rules and managing the forwarding devices. It connects to the application plane via a northbound interface and responds to the application plane's requests. The forwarding plane, composed of network devices such as switches and routers, connects to the control plane via a southbound interface and executes the forwarding rules defined by the control plane. It also regularly reports network status information back to the control plane. An SDN network can have a single controller or multiple controllers, which are interconnected through an east-west interface. A single controller can manage multiple forwarding devices, and a single forwarding device can be controlled by multiple controllers. The open design of SDN offers broad application prospects, but it also faces the threat of emerging and ever-changing network attacks. Research teams, both domestic and international, have proposed targeted detection and mitigation methods in response to the characteristics of DDoS attacks on SDN. This section reviews and summarizes the common forms of DDoS attacks in current SDN environments, as well as some of the more popular methods for attack detection and mitigation.
Organizational forms of DDoS attacks in SDN
The OpenFlow protocol is currently the most widely used southbound interface technology in the field of SDN.Typically, an OpenFlow switch always sends the relevant information of any new flow it receives to the controller for instructions on how to handle it.As a result, most DDoS attacks in SDN exploit vulnerabilities in the OpenFlow protocol 11 .Firstly, since the control plane is situated between the application plane and the forwarding plane, providing a programming interface to the upper layer and controlling hardware devices to the lower layer, if the control plane is compromised, the entire SDN can be affected.Therefore, the controller is the preferred target for DDoS attacks.Attackers often send a large number of new flows with random headers to the switch.Because these flows lack matching rules in the switch's flow table, the switch continuously queries the controller for a handling method.This causes the controller's query queue to grow continuously, resulting in the controller remaining constantly busy and unable to provide services to legitimate users 12 .Secondly, SDN switches are also a primary target for network attacks.According to the protocol, the controller generates a matching rule for each new flow request sent by the switch, and this rule is appended as a flow table entry to the flow tables of all switches that the packets from the source host to the destination host pass through, facilitating subsequent forwarding operations.However, due to the limited storage space of the switch, an excess of forwarding rules can cause the switch's flow table to overflow, preventing the switch from providing forwarding services for new legitimate flows 13 .Additionally, when a large number of packets flood the switch, exceeding its processing capacity, "packet loss" can occur, which affects the normal transmission of traffic data in the network 4 .Finally, because OpenFlow lacks the security protection mechanisms of the traditional network transport layer, the controller and the switch can establish a connection merely through an address.Therefore, attackers can also paralyze the SDN by modifying rules to reconfigure downstream switches and carry out more granular malicious attacks 14 .
Detection of DDoS attacks in SDN
Current detection methods for DDoS attacks in SDN are largely adapted from those used in traditional networks, but they often perform unsatisfactorily in the SDN environment. For instance, although statistical-based detection methods do not require prior knowledge, they necessitate appropriate distribution assumptions for traffic data beforehand, which does not adapt well to the dynamic network model of SDN. Information theory-based methods, while not requiring distribution assumptions, demand a large number of stable and reliable samples to ensure detection accuracy, which contradicts the random, dynamic nature of SDN. Clustering-based detection methods are straightforward to implement but time-consuming, failing to meet SDN's demand for timely detection. Therefore, against the backdrop of big data, there is growing interest in machine learning-based detection methods, such as Random Forests, Bayesian Networks, Support Vector Machines, and Multilayer Perceptrons. Alduailij et al. 15 utilized a combination of information gain and the Random Forest method to select the main features of traffic data, enhancing the accuracy of the model in detecting DoS attacks within an SDN environment in the cloud. Luo Zhiyong et al. 16 proposed a Bayesian Attack Graph-based method for recognizing intrusion intentions in SDN. They first employed the PageRank algorithm to determine the criticality of devices, then combined attributes such as vulnerability value, attack cost, benefit, and preference to construct an attack intention table, using a risk assessment model to predict intrusion paths. Santos et al. 17 used the Mininet program to set up an SDN environment, simulated DDoS attacks with the Scapy tool and IP lists, and compared the effectiveness of four machine learning algorithms-Support Vector Machine, Decision Tree, Random Forest, and Multilayer Perceptron-in detecting DDoS attacks, concluding that the Decision Tree-based detection method was the most effective. However, when facing large-scale network traffic, the detection capabilities of machine learning-based methods are not always satisfactory. Elsayed et al. 18 , through comparative analysis of several machine learning-based detection methods, found that the lack of labeled samples and weak feature correlation were the main reasons for the poor detection performance. They believe that deep learning, capable of reconstructing the unknown distribution of input data using multi-layer neural networks, has good representational ability for large-scale network traffic. Therefore, an increasing number of scholars are beginning to focus on research into deep learning-based detection technologies.
Deep learning is a form of machine learning built on neural network algorithms and is also a quintessential representation learning technique. It has a strong capacity for representing raw data and has been extensively applied in fields such as natural language processing, machine vision, and financial data analysis. There are also numerous practical applications in the realm of attack detection for SDN. ElSayed and colleagues 19 improved regularization methods for Convolutional Neural Networks (CNN), developing a novel SDN intrusion detection system that effectively mitigates the overfitting that is common in deep learning models. Gadze and others 20 put forward an adversarial detection and defense approach for DDoS attacks within the SDN environment. This approach combines Generative Adversarial Networks, Deep Belief Networks, and Long Short-Term Memory networks (LSTM) to effectively reduce the sensitivity of the detection model to adversarial attacks and to expedite the feature extraction process. Kachavimat et al. 21 constructed a DDoS attack detection model that adapts to various deep learning architectures and conducted experiments on the InSDN 22 , the SDN-dataset, and DDoS attack data generated from the Mininet Ryu network. They concluded that the detection method based on LSTM outperforms CNN and Multilayer Perceptrons in terms of overall effectiveness. Interestingly, in the same year, Lee et al. 23 , in their designed attack detection framework, compared the effectiveness of four deep learning detection models: Multilayer Perceptrons, CNN, LSTM, and Stacked Autoencoders, and concluded that the Multilayer Perceptron performed best. However, current deep learning-based attack detection methods in SDN mostly inherit the detection ideas and methods of traditional networks. There is redundancy in the selection of feature data, which brings additional cost to the detection computation, because some features used in detection, such as the number and size of packets, can be obtained directly by the controller from the forwarding layer. Moreover, most current detection methods are based on a single architecture and do not fully exploit the higher-order information of the feature data, leading to suboptimal detection performance.
Mitigation of DDoS attacks in SDN
After detecting a DDoS attack, how to eliminate or mitigate the impact of the attack on network service quality is another issue of concern for cybersecurity professionals.Overall, there are currently two main approaches to solving this issue: restricting the transmission capabilities of the attacking host and load balancing on network devices.Specifically, restricting the transmission capabilities of the attacking host does not mean completely discarding the data sent by the host, but rather assigning a higher forwarding priority to legitimate normal traffic and a lower forwarding priority to illegitimate traffic.This approach reduces the intensity and volume of DDoS attack traffic entering the SDN network, thereby ensuring that normal network services are not severely affected.However, this method cannot completely prevent attack traffic from entering the SDN network.Yungaicela et al. 24 proposed an attack mitigation scheme based on deep reinforcement learning, which prioritizes data flows according to the controller's response time to users.This allows legitimate data flows to receive high-quality routing and forwarding, while malicious data flows are directed to special forwarding paths or are discarded outright.However, this method may inadvertently affect legitimate traffic with longer durations.Cao et al. 25 , on the other hand, combined the white list with the dropping strategy, discarding traffic that falls outside the white list directly.This approach reduces the load on the southbound interface and CPU overhead, but it may also inadvertently injure unknown normal traffic.
Additionally, load balancing on network devices involves dynamically adjusting the task distribution between controllers and switches, migrating network tasks from heavily loaded devices to lightly loaded ones to mitigate the impact of DDoS attacks on SDN service quality 26 .Filali et al. 27 utilized game theory concepts, transforming the controller and switch allocation problem into a many-to-one matching game problem.They dynamically assign switches to controllers, ensuring that each controller meets a specified minimum quota, thus achieving a balance in network load.Although load balancing methods can alleviate the impact of DDoS attacks by equalizing the load on controllers, these methods cannot prevent switches from continuing to be subjected to DDoS attacks.
System overview
The system we designed for DDoS attack detection and mitigation in SDN based on deep learning belongs to the application layer services and can be deployed on devices within the application plane or on the server where the SDN controller resides.The system consists of a traffic information collection module, a two-stage attack detection module, and an attack source tracing and mitigation module, as shown in Fig. 2.
The system monitors the traffic data in the SDN in real time and performs a preliminary detection of network attacks based on the statistical information from the switch ports.It then utilizes wavelet decomposition and convolutional neural network technology for depth analysis of the traffic data from suspicious switches, enabling fine-grained detection of attack traffic.Finally, by employing graph theory and dynamic deletion strategies, the system tracks and restricts the source of the attack, preventing the attack traffic from entering the network, and thereby ensuring the normal operation of the SDN.
The purpose of the information collection module is to periodically collect relevant information on the traffic data passing through the switch ports, transform it into the required format, and then send it to the two-stage detection module. The first stage of detection only requires extracting some rough count information about the data packets and flows passing through the switch. The second stage of detection, however, requires the use of specialized traffic analysis tools to extract traffic information that has been aggregated based on the five-tuple characteristics (source IP, source port number, destination IP, destination port number, protocol) of the flows.
The two-stage attack detection consists of attack detection based on switch statistics and attack detection based on multi-dimensional traffic features. In the first stage, which is the attack detection based on the statistical information of switch port traffic, the primary task is to perform a preliminary detection of DDoS attacks within the network segment controlled by the switch. We know that when a DDoS attack is launched, the switch connected to the attacking host will receive a large number of forwarding requests for new flows. Since there are no matching flow entries in the switch's flow table, the switch will send a large number of PacketIn messages to the controller to obtain disposition methods for these new flows. Therefore, the ratio of the number of flows received by the switch to the number of forwarded PacketIn messages within a unit of time will suddenly decrease compared to normal conditions, and the ratio of the number of normally forwarded flows to the number of received flows will also decrease. Additionally, under normal network conditions, the number of incoming and outgoing data packets is relatively balanced, with little difference between them. However, when a switch is under a DDoS attack, a large volume of packets arrives at the switch in a short period and cannot be forwarded promptly, leading to temporary storage in the buffer. If the buffer space is exhausted, a "packet loss" phenomenon occurs 28 , at which point the network exhibits a significant discrepancy between the number of incoming and outgoing packets. When several traffic characteristic indexes exceed their critical values, it can be judged that there is attack behavior in the SDN, and at the same time the attack source can be roughly located.
The second stage is an attack detection based on multi-dimensional traffic characteristics, which is initiated when the first stage detects certain switches exhibiting attack behaviors.Initially, a traffic collection program captures all the traffic data passing through the suspicious switch.Then, data analysis tools are used to extract the characteristics of the traffic.Subsequently, wavelet transform is utilized to extract the time-frequency characteristics of the traffic data at different scales, and a Convolutional Neural Network (CNN) is employed to extract the spatial characteristics of the data.Finally, a classifier is used to categorize these rich feature data, thereby achieving the detection of attack traffic.
The attack mitigation module is started after detecting the attack flow in the second stage.It locates the attack source according to the state information of the attack source host provided by the attack detection module, formulates the data packet filtering rules including IP address, port number, effective time, execution action, etc., and sends them to all switches under its control through the controller.The switch updates its flow table according to the new rules issued by the controller, deletes the attack flow entries in the flow table, and introduces all new flows without matching rules sent by the attack host to the default port for discarding within the set effective time, to achieve the goal of preventing the spread of network attacks.At the end of the set time, if no new restriction rule is received, the host is restored to send a new stream.In this process, the attack detection module continuously monitors the network state and transmits the detected new attack information to the attack mitigation module in time, and the attack mitigation module continuously generates new packet filtering rules and sends them to the switch; The switch updates its flow table according to the new rules, and handles the traffic data in the network according to the new flow table, thus realizing the uninterrupted detection and protection of SDN.
Two-stage attack detection
Two-stage attack detection is the basis of attack mitigation and the key to maintaining the safe operation of SDN.As the core content of this paper, before describing two-stage attack detection in detail, the symbols used in this section are explained, as listed in Table 1.
(1) Attack detection based on switch statistics. We periodically collect statistics at the switch's ports on the number of network flows and data packets entering and exiting the switch, as well as the number of PacketIn messages forwarded by the switch to the controller. Under normal circumstances, flows with corresponding matching rules in the switch's flow table can be processed normally, while flows without matching rules need to query the controller for handling methods; however, the number of such flows is generally not high. Therefore, NFI >> NPi holds in SDN switches, and their ratio, RPi, is a relatively large value. When a DDoS attack is launched, a large number of new flows arrive at the switch requesting forwarding within a short time. Because these flows are all new and have no corresponding matching rule in the switch's flow table, each incoming new flow generates a PacketIn message, so the switch sends a large number of PacketIn messages to the controller to ask how to deal with them. At this time, the ratio of the number of network flows flowing into the switch to the number of PacketIn messages forwarded from the switch to the controller, RPi, becomes much smaller than normal. In addition, even if there is a DDoS attack in the network, those normal flows with matching rules in the switch can still be forwarded correctly before the switch is completely blocked; their proportion among all network flows flowing into the switch defines the metric RFI. After the attack is launched, the RFI metric of the switch experiences a very noticeable decline. Furthermore, when a DDoS attack occurs, a large number of data packets arrive at the switch in a short time and cannot be forwarded promptly, so they can only be temporarily stored in the switch's cache; because the buffer space of the switch is limited, once it is filled a large number of packets will be lost, which leads to a significant increase in the difference between the number of packets flowing into and out of the switch compared with the normal network situation. To sum up, when both RPi and RFI of a switch drop below certain thresholds and NP increases beyond a certain extent, it can be preliminarily determined that there is a DDoS attack in the network; the switch is then a transmission node of the attack flow in the SDN, and the attack source must lie on a link to which the switch is connected.
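Written out with the symbols defined in Table 1, the two ratios and the packet difference take the following form (treating NP as an absolute difference is an assumption on our part):

```latex
R_{Pi} = \frac{N_{FI}}{N_{Pi}}, \qquad
R_{FI} = \frac{N_{FO}}{N_{FI}}, \qquad
N_{P}  = \left| N_{PI} - N_{PO} \right|
```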
(2) Attack detection based on multi-dimensional traffic characteristics The detection in this phase relies on the Multi-Dimensional Deep Convolution Classifier (MDDCC) to complete, which is initiated when an anomaly is detected in a certain switch during the first phase.Initially, the traffic capture program Wireshark is used to intercept all the traffic data passing through the suspicious switch.Subsequently, the CIC-FlowMeter analysis tool is utilized to extract the characteristics of the traffic.After the data is preprocessed, it is then sent to the MDDCC to achieve precise detection of the abnormal traffic.MDDCC is a traffic classification model that combines wavelet transform technology with deep learning.It is capable of using wavelet transform to extract the time-frequency characteristics of traffic data and using CNN to extract the spatial characteristics of the data.The model conducts a comprehensive analysis of the traffic data from three dimensions: "time, frequency, and space".Finally, the classification of the data type is completed by the SoftMax classification function, and its structure is shown in Fig. 3. Due to the adoption of parameter sharing, local perception, and pooling operations, the training parameters and training time of CNN are significantly reduced compared to traditional multi-layer perceptrons.
We know that the temporal correlations hidden within sequential data are closely related to frequency. Correlations on larger time scales, such as the long-term trends inherent in the data, are typically found in the low-frequency range. In contrast, correlations on smaller time scales, such as the characteristic information resulting from short-term disturbances or sudden random events, are usually located in the high-frequency range. To thoroughly explore the correlations within traffic sequences, we apply wavelet decomposition to the input sequence x = {x_1, x_2, ..., x_k}, which yields its low-frequency component x_l^(i) and its high-frequency component x_h^(i) at the ith level.
Figure 3. Attack detection process based on multi-dimensional traffic characteristics.
Because we only use the decomposition sequences of the original sequence and do not need wavelet reconstruction, no downsampling is applied when decomposing further. After n levels of decomposition, n + 1 subsequences with the same dimension as the original sequence are finally obtained. Each subsequence is converted into a two-dimensional graphic format and input into one of n + 1 independent CNNs for spatial feature extraction, and each subsequence is subjected to a series of convolution operations to obtain the results.
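As a concrete illustration of this front end, below is a minimal sketch assuming that the undecimated (stationary) wavelet transform from PyWavelets reproduces the "decompose again without downsampling" behaviour described above. The db4 wavelet order and the edge padding are assumptions; Table 3 only specifies a Daubechies wavelet with three decomposition levels.

```python
import numpy as np
import pywt

def decompose(x, wavelet="db4", level=3):
    """Undecimated wavelet decomposition of a 1-D feature sequence into
    `level` high-frequency detail subsequences plus one final low-frequency
    approximation, i.e. level + 1 subsequences, each with the same length
    as the input."""
    # swt requires the signal length to be a multiple of 2**level; pad if needed
    pad = (-len(x)) % (2 ** level)
    xp = np.pad(x, (0, pad), mode="edge")
    coeffs = pywt.swt(xp, wavelet, level=level)      # [(cA_level, cD_level), ..., (cA_1, cD_1)]
    details = [cD[: len(x)] for _, cD in coeffs]     # high-frequency components x_h^(i)
    approx = coeffs[0][0][: len(x)]                  # low-frequency component x_l^(level)
    return details + [approx]                        # level + 1 subsequences

subseqs = decompose(np.random.rand(48))
print(len(subseqs), [s.shape for s in subseqs])      # 4 subsequences of length 48
```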
Here, x_i represents the subsequence obtained after the ith level of wavelet decomposition, which is also the input to the ith CNN; z_i is the output subsequence after the CNN transformation; ω and b represent the weights and biases, respectively; g(·) is the nonlinear activation function; and ⊗ denotes the convolution operation. We use the mean squared error as the loss function. Additionally, since traditional L1 and L2 regularization methods only focus on individual feature weight values without considering the intrinsic connections between feature values, we employ a regularization method based on a standard deviation constraint operator to prevent overfitting.
Here, k represents the number of rows in the weight matrix, i denotes the ith row of the weight matrix, and n is the number of columns in the weight matrix, i.e., the size of each weight vector. The value of the weight matrix is controlled by a regularization coefficient, yielding the loss function L. We therefore minimize the loss function with respect to ω under the standard deviation constraint. Finally, the output subsequences are linearly superimposed and expanded, then input into a fully connected layer for computation. Subsequently, the SoftMax classification function is used to complete the classification of the input sample data.
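One plausible form of this regularized loss, reconstructed from the surrounding description rather than taken from the paper, is the following; λ is an assumed symbol for the regularization coefficient, m is the number of training samples, and σ(ω_i) is the standard deviation of the ith weight row:

```latex
L(\omega) = \frac{1}{m}\sum_{j=1}^{m}\left(y_j-\hat{y}_j\right)^{2}
          + \lambda \sum_{i=1}^{k} \sigma(\omega_i),
\qquad
\sigma(\omega_i) = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(\omega_{i,t}-\bar{\omega}_i\right)^{2}}
```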
Attack traceability mitigation module
Before mitigating the attack's effects, an attack path discovery strategy based on graph theory and on switch and port identifiers is used to locate the entrance switch through which the attacking host accesses the SDN network, according to the transmission path of the attack stream. The path of network traffic in SDN can be expressed in terms of edges E_i,j between switch ports; here, E_i,j represents the transmission path of a network flow, s_i and s_j are the node switches on the transmission path, and p_i and p_j are the port numbers of the respective switches. When attack traffic passes through the two switches s_i and s_j, the edge connecting s_i and s_j is considered part of the attack path, and p_i and p_j are the interfaces through which the attack traffic enters and exits. By combining this with the controller's view of the SDN's global topology, the attack can ultimately be located at the edge switch and access port through link tracing. Notably, this traceability method does not use information such as the IP address or MAC address inside the packet, so even if the attacker uses forged address information, the attack can still be accurately traced to the interface port.
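One plausible notation for this path representation, consistent with the symbols just introduced (an assumption on our part, not necessarily the paper's exact form), is:

```latex
E_{i,j} = \bigl\langle (s_i, p_i),\ (s_j, p_j) \bigr\rangle,
\qquad
\mathrm{Path} = \{\,E_{1,2},\, E_{2,3},\, \dots,\, E_{m-1,m}\,\}
```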
After tracing the switch and port through which the attacking host accesses the SDN, a restriction policy is implemented for the host connected to that switch port: within a certain time window, any packet with no matching rule sent from the attacking port is discarded, while the switch is prohibited from sending PacketIn messages for that port to the controller. In this way, new flow requests from the attack source are isolated, preventing new attack flows from entering the network. The implementation process is shown in Algorithm 1. After the restriction period exceeds the set time, the forwarding function of the switch port is restored. Additionally, although the attack traffic is intercepted, the previously installed flow table entries are still stored in the switch's flow table, which affects the normal forwarding process and continues to consume the resources of the controller and the switch. Therefore, for flows detected as attacks, the controller uses its host tracking function to obtain the relevant information of the attacking host, such as MAC address, IP address, TCP or UDP port, and switch port. At regular intervals, a dynamic deletion policy is generated and sent to the switch, and the switch deletes the corresponding entries from its flow table. This method can effectively restrict the attack flow on the attack link without affecting the forwarding of other normal flows.
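Since Algorithm 1 is not reproduced here, the following is a hedged Python sketch of the restriction and dynamic-deletion loop described above; `controller`, `attack_events`, and all of their methods (`install_drop_rule`, `suppress_packet_in`, `delete_flow_entries`, `poll`) are hypothetical placeholders rather than a real POX/OpenFlow API.

```python
import time

def mitigate(controller, attack_events, block_seconds=60, sweep_interval=5):
    """Sketch of the restriction strategy: drop unmatched packets from the
    attacking port, suppress PacketIn for it, and periodically delete the
    flow entries already installed for detected attack flows."""
    blocked = {}                                        # (switch, port) -> unblock time
    while True:
        for ev in attack_events.poll():                 # events from the detection module
            sw, port = ev.edge_switch, ev.access_port
            controller.install_drop_rule(sw, in_port=port, timeout=block_seconds)
            controller.suppress_packet_in(sw, in_port=port, timeout=block_seconds)
            controller.delete_flow_entries(match=ev.flow_match)   # dynamic deletion
            blocked[(sw, port)] = time.time() + block_seconds
        # restore forwarding for ports whose restriction period has expired
        for key, until in list(blocked.items()):
            if time.time() >= until:
                blocked.pop(key)                        # switch resumes normal handling
        time.sleep(sweep_interval)
```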
Experimental settings
The hardware configuration of the experimental platform is an Intel Core i9-12900F, 128 GB RAM, and an NVIDIA RTX 3090. The detection system is written in Python, adopts the PyTorch 1.8 deep learning framework, and runs on the Ubuntu 16.04 LTS operating system. In addition, the Mininet simulator and POX controller are used to build the SDN environment; the network topology is shown in Fig. 4.
Mininet simulates 4 switches, each with 4 hosts attached, connected to a POX controller to form a star-shaped network structure. The network delay is set to 2 ms, and the IP address information for the controller server and each host is shown in Table 2.
Model parameters setting
The MDDCC proposed in the "Two-stage attack detection" section employs a design that integrates wavelet transform with convolutional neural networks (CNN).The wavelet basis function utilizes the Daubechies wavelet (DB), with a decomposition level of 3. The CNN consists of 3 convolutional layers, using 3 × 3 convolutional kernels, followed by a 2 × 2 max pooling layer after each convolutional layer.Dropout is utilized to prevent overfitting.The loss function is the mean-square error (MSE), and the model parameters are updated using the mini-batch gradient descent method and the backpropagation (BP) algorithm.The specific hyperparameter settings are as shown in Table 3.
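The following is a minimal PyTorch sketch of a single CNN branch matching the description above (three 3 × 3 convolutional layers, each followed by 2 × 2 max pooling, with dropout). The channel widths, dropout rate, and the reshaping of each subsequence into a small 2-D image are assumptions not specified in the excerpt.

```python
import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    """One of the n+1 parallel CNN branches: three 3x3 conv layers, each
    followed by 2x2 max pooling, plus dropout. Channel widths (16/32/64)
    and the dropout rate are illustrative assumptions."""
    def __init__(self, in_channels=1, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Dropout(dropout),
        )

    def forward(self, x):               # x: (batch, 1, H, W) image built from a subsequence
        return torch.flatten(self.features(x), 1)

branch = CNNBranch()
out = branch(torch.randn(4, 1, 8, 8))   # e.g. a subsequence padded into an 8x8 image
print(out.shape)                        # flattened features of one branch
```

The flattened outputs of all branches would then be concatenated and fed to a fully connected layer with a SoftMax output, as described in the text.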
Evaluation metrics
To evaluate the performance of the detection method against network attacks, we use five metrics: Accuracy, Precision, Recall, F1 score, and False Positive Rate (FPR). Their calculation methods are as follows. The relationships between TP (True Positives), FN (False Negatives), FP (False Positives), and TN (True Negatives) can be represented by the confusion matrix in Table 4.
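In terms of TP, FP, FN, and TN, the standard definitions of these five metrics are:

```latex
\mathrm{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN},\qquad
\mathrm{Precision} = \frac{TP}{TP+FP},\qquad
\mathrm{Recall} = \frac{TP}{TP+FN},
```
```latex
F1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},\qquad
\mathrm{FPR} = \frac{FP}{FP+TN}
```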
Experimental results and analysis
The experiment is divided into two phases: The first phase mainly verifies the performance of the attack detection method based on switch statistics, while the second phase primarily evaluates the detection performance of MDDCC on attack traffic.
Attack detection based on switch statistical information
In this attack detection experiment, Host 1 is set as the attack host, and Hping3 is used to continuously send SYN pulses with an intensity of 20 Mb/s to Switch 1; each pulse lasts for 1 s and is followed by 5 s of silence, i.e., the period of the attack pulse is 6 s (a schematic diagram of the pulsed DDoS attack is shown in Fig. 5). The other hosts simulate normal users, each sending normal data packets according to the distribution pattern preset in the configuration strategy. The traffic information collection module continuously collects information from each switch port with a period of 1 s, transforms it into the data pattern required for detection, and inputs it into the anomaly detection module. The attack experiment lasts for 6 h, during which each switch collects 21,600 samples: Switch 1 has 18,000 normal samples and 3,600 attack samples, while Switch 2, Switch 3, and Switch 4 have normal samples only. The RFI, RPi, and NP feature values are calculated for each switch. Since the network latency is minimal and can be disregarded, attack behavior in the network is declared when all feature values exceed their thresholds. Here, the thresholds are derived from the mean and standard deviation of the features RFI, RPi, and NP calculated over 10,000 samples collected when only normal traffic exists in the network. Following the "three-sigma (3σ)" rule, the thresholds for RFI and RPi are set to the mean minus three times the standard deviation, while the threshold for NP is set to the mean plus three times the standard deviation.
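A minimal sketch of the threshold construction and flagging rule described above; the function and variable names are illustrative, and the input arrays are assumed to hold per-second samples of the three features for one switch.

```python
import numpy as np

def fit_thresholds(rfi, rpi, np_diff):
    """Per-switch thresholds from normal-traffic samples using the 3-sigma rule:
    lower bounds for RFI and RPi, an upper bound for NP."""
    return {
        "rfi_low": np.mean(rfi) - 3 * np.std(rfi),
        "rpi_low": np.mean(rpi) - 3 * np.std(rpi),
        "np_high": np.mean(np_diff) + 3 * np.std(np_diff),
    }

def is_suspicious(sample, th):
    """Flag a 1-second sample only when all three features cross their
    thresholds simultaneously, as in the first detection stage."""
    return (sample["rfi"] < th["rfi_low"]
            and sample["rpi"] < th["rpi_low"]
            and sample["np"] > th["np_high"])
```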
Figure 6 illustrates the attack detection situation for 4 switches.It can be observed that Switch 1 has a large number of abnormal samples, which allows us to determine that this switch is abnormal.
The detection method in this phase achieved an accuracy of 99.82%, a precision rate of 99.28%, a recall rate of 99.67%, an F1 score of 99.47%, and an FPR of only 0.14% for detecting abnormal samples in the abnormal switch.It can be said that the attack detection method based on switch statistics can accurately pinpoint the switch connected to the host initiating a DDoS attack in SDN.Additionally, the reason why normal samples are judged as abnormal for other switches during detection is that although the traffic data sent by normal hosts follows a certain distribution, there may still be a sudden change in communication traffic at a certain moment, leading the detection system to mistakenly believe that there is a network attack behavior in the network where the switch is located.
In terms of detection time, Table 5 counts the time consumption of detecting each attack sample, and it can be found that the detection time is mostly within 100 ms, that is to say, when an attacker launches a DDoS attack, the detection system can find the abnormal behavior of the network and locate the location of the problem switch within 0.1 s, which can provide support for the real-time security protection research of SDN in the future.
The detection method based on port statistics can quickly locate the switches through which the attack traffic passes. However, if protective measures are formulated solely based on such coarse detection results, they could inadvertently harm other normal hosts connected to the switch and, in severe cases, may lead to partial network paralysis. Therefore, a more refined detection approach is necessary to provide accurate information about the source of the attack, which is essential for accurately and efficiently protecting the SDN network. In this phase of the experiment, Host 1, Host 5, and Host 9 are set as attackers, running the Hping3 attack program to launch intermittent DDoS attacks on the SDN; the other hosts continue to send normal TCP or UDP packets into the network according to the predetermined distribution pattern. Using the WireShark (v4.2) packet capturing tool, raw traffic data in .pcap format is obtained from the established SDN experimental platform, and the CIC-FlowMeter (v4.0) traffic analysis tool aggregates the traffic data and converts it into .csv feature data based on the five-tuple information of the flows. The converted dataset contains a total of 77,328 traffic records, with 36,642 records for normal flows and 40,686 records for attack flows. From each record's more than 80 features, a subset of 48 features is selected to form the detection dataset. The training set and the test set are formed by randomly sampling from the normal flow samples and attack flow samples in a 7:3 ratio, respectively.
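A hedged sketch of how such a feature table could be split for training and testing; the file name, label column, and dropped metadata columns are assumptions, since the excerpt does not list the 48 selected features.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the paper keeps 48 of the ~80 CIC-FlowMeter features
df = pd.read_csv("sdn_flows.csv")
meta_cols = ["Flow ID", "Src IP", "Dst IP", "Src Port", "Dst Port", "Timestamp", "Label"]
feature_cols = [c for c in df.columns if c not in meta_cols][:48]

X = df[feature_cols].values
y = (df["Label"] != "BENIGN").astype(int).values        # 1 = attack flow, 0 = normal flow

# 7:3 split drawn per class, approximated here with a stratified split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=0)
```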
(a) Detection performance of MDDCC
Using the training set, we conduct supervised training on the MDDCC, stopping when the loss function no longer decreases significantly due to training, and then fix the model parameters.To eliminate random errors during detection calculations, we use the trained model to perform 5 independent detections on the test set data, with the test set samples being randomly shuffled before each detection.We calculate the five metrics for each detection and take the average values and deviations of the detection metrics as the final results of the MDDCC's detection of network attacks in SDN, as shown in Table 6.
As can be seen from the table, the final detection accuracy of MDDCC is 99.65%, the precision is 99.84%, the recall of attack samples is 99.72%, and the F1 value is 99.78%. These indicators are above 99% in each test, and the deviation across test runs is very small, which shows that our MDDCC detection model based on multi-dimensional traffic characteristics can accurately and stably distinguish normal traffic from attack traffic in the SDN environment. In addition, the average false positive rate is only 0.66%, which is acceptable for current network security protection tasks.
(b) Detection performance of MDDCC under different decomposition levels
In the previous experiment, MDDCC adopted a detection model with three-level wavelet decomposition.To explore the impact of wavelet decomposition on detection performance, the detection performance of MDDCC under different decomposition levels such as 0-level, 1-level, 2-level, 3-level, and 4-level wavelet decomposition was compared.The specific results are shown in Fig. 7.
It can be observed that as the decomposition level increases, the detection performance of MDDCC gradually improves.For instance, the precision metric of MDDCC at decomposition levels of 0, 1, 2, and 3, is 97.58%, 97.88%, 98.68%, and 99.84% respectively, showing a progressive increase; the Accuracy, Recall, and F1 score also follow this trend, and the FPR gradually decreases.This is because the higher the levels of wavelet decomposition, the richer the information that the feature sequence can provide.MDDCC can then discover more subtle differences between samples from features of different granularity, which helps to enhance the model's ability to identify attack samples.However, when the decomposition level is four, the precision metric of MDDCC is 99.62%, which is slightly lower than when the decomposition level is three.This is because the sequence data becomes overly decomposed, resulting in information redundancy.The ineffective features in the data, once amplified after being extracted by the deep neural network, reduce the performance of the classifier.It is evident that persistently increasing the level of wavelet decomposition does not provide additional effective feature information, and the improvement in model detection performance is limited, which is mainly determined by the amount of information contained within the original sample itself.Since the model achieves the best detection effect at a decomposition level of three, a three-level decomposition model is used for all following experiments.To objectively assess the detection performance and generalization capability of MDDCC, comparative experiments were conducted using the public SDN dataset InSDN 29 .The InSDN dataset features normal and abnormal samples stored in separate files, and there is an imbalance in the categories of samples.Therefore, 70% of each category of samples was selected to form the training set, with the remaining 30% designated as the test set.Additionally, due to the scarcity of U2R samples (only 17 in total), which makes them unsuitable for participation in training and testing, they were excluded.The distribution of the samples in the re-divided InSDN dataset is illustrated in Table 7.
We utilized the divided training sets and testing sets, forming a subset of feature data using the 48 features selected as described in the "Data details" section for the model's training and testing.Due to the class imbalance of the samples, to prevent the model from developing a "preference" during training, a tenfold cross-validation method was employed for training MDDCC.This involved randomly dividing the training data into 10 groups, using 9 groups as the training data and 1 group as the validation data for each iteration.After completing 10 cycles of training, the model was fully trained using the complete training set data to achieve its final state.Finally, the preprocessed InSDN test set data was input into the well-trained MDDCC model for 5 complete tests, and the classification results as shown in Fig. 8 can be obtained.
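A runnable sketch of the tenfold cross-validation loop described above, with synthetic placeholder data and a simple scikit-learn classifier standing in for MDDCC purely for illustration.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression    # stand-in for MDDCC, illustration only

rng = np.random.default_rng(0)
X_train = rng.random((1000, 48))                        # placeholder shaped like the 48-feature set
y_train = rng.integers(0, 2, 1000)

kf = KFold(n_splits=10, shuffle=True, random_state=0)  # "randomly dividing the training data into 10 groups"
for fold, (tr, va) in enumerate(kf.split(X_train)):
    clf = LogisticRegression(max_iter=1000).fit(X_train[tr], y_train[tr])
    print(f"fold {fold}: validation accuracy = {clf.score(X_train[va], y_train[va]):.3f}")
# after the 10 cycles, the model is retrained on the complete training set, as described above
```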
From the figure, it can be observed that the MDDCC model exhibits varying detection capabilities for different types of attack samples.For instance, the recall rate is highest for DoS attack samples at 99.38%, while it is lowest for BotNet at 95.92%.When calculating from the perspective of the binary classification task of distinguishing between attack and normal samples, the MDDCC model achieves an average accuracy of 99.23%, precision of 99.68%, recall of 99.36%, F1 score of 99.52%, and a false positive rate of 1.28% on the InSDN test set.Overall, the MDDCC model's performance on the InSDN dataset is still quite impressive.
To further verify the generalization capability of MDDCC, we conducted experiments on two commonly used traffic datasets: CIC-IDS2017 and CIC-DDoS2019.These datasets, published by the Canadian Institute for Cybersecurity, simulate real-world network traffic by constructing 25 abstract user behaviors using protocols such as HTTP, HTTPS, FTP, SSH, email, etc.The attack traffic is generated by various cyber attack programs.Specifically, the abnormal traffic in CIC-IDS2017 is produced by seven types of attack behaviors: DoS, DDoS, Web Attack, Botnet, Brute Force, Heartbleed, and internal network penetration.The attack traffic in CIC-DDoS2019, on the other hand, is generated by reflection attacks targeting TCP (MSSQL, SSDP) and UDP (CharGen, NTP, TFTP) protocols, as well as SYN and UDP flood attacks that exploit vulnerabilities in these protocols.Both datasets provide.pcap format raw files as well as flow files containing more than 80 features generated by the FlowMeter traffic analysis tool.The experimental data was prepared following the method described in the "Data details" section, selecting 48 features to form the training and testing sets.During the training phase, the models were fully trained with their respective training sets until reaching a steady state.Finally, the models were tested for attack detection using their respective test sets for 5 trials.Table 8 presents the detection results of MDDCC on the CIC-IDS2017 dataset.Table 9 displays the detection results of MDDCC on the CIC-DDoS2019 dataset.
The experimental results from Tables 8 and 9 indicate that MDDCC has achieved satisfactory outcomes in the detection tests on both the CIC-IDS2017 and CIC-DDoS2019 datasets.The detection accuracy for both datasets surpassed 99.5%, with the recall rates for anomaly samples reaching 99.65% and 98.71% respectively, and the precision rates were also notably high.Additionally, the model exhibited a false positive rate exceeding 8% on the CIC-DDoS2019 dataset.The primary cause of this phenomenon is attributed to the class imbalance within the CIC-DDoS2019 dataset.Due to the relatively smaller number of normal samples, even a small number of misclassifications of normal samples as attack samples can lead to a high FPR.
The detection experiment results of MDDCC on the InSDN, CIC-IDS2017, and CIC-DDoS2019 datasets demonstrate that MDDCC not only has good detection ability on simulated traffic data but also shows excellent detection performance on public network datasets, which indicates that the MDDCC model has strong generalization ability.
(d) Performance comparison with other detection methods
To objectively evaluate the performance of the MDDCC model, it was compared with other similar detection models, specifically CNN-Softmax 19 , CNN-LSTM 30 , DNN-LSTM 31 , GAN 32 , and 1D-CNN & 2D-CNN 33 , which are classic detection models. Since some of the literature does not report the false positive rate as a performance metric, only four detection indexes, namely accuracy, precision, recall, and F1 value, were selected for comparison, with the results shown in Table 10.
MDDCC achieved an accuracy of 99.24% on the InSDN dataset, which is an improvement of 0.75% and 3.03% over the accuracies of the CNN-Softmax and CNN-LSTM detection methods, respectively.The precision rate of MDDCC is 99.68%, marking an increase of 1.43% and 2.13% compared to the two methods.The recall rate stands at 99.37%, which is an enhancement of 1.13% and 2.2% respectively.Additionally, the F1 score for MDDCC is 99.53%, showing an improvement of 1.28% and 2.17% when compared to CNN-Softmax and CNN-LSTM.Additionally, on the CIC-IDS2017 dataset, MDDCC achieved an accuracy of 99.59%, which is a 0.27% improvement over the accuracy of DNN-LSTM.Although it is 0.14% lower than the 99.77% accuracy of 1D-CNN & 2D-CNN, MDDCC achieved higher precision and recall rates.Furthermore, MDDCC demonstrated a significant advantage over GAN on the CIC-DDoS2019 dataset.This indicates that the MDDCC model we designed has higher detection accuracy compared to traditional detection models.
DDoS attack mitigation test
The attack mitigation experiment uses the new-flow rate arriving at the SDN controller (Kf/s, kilo-flows per second) as the detection metric. Host 1 runs the Hping3 attack program to simulate the DDoS attack source host. At the initiation of the attack, Host 1 sends a large number of packets with random target IP addresses into the SDN network. Once the detection system identifies the attack, it activates the attack mitigation mechanism to restrict the ability of the attacking host to send new flows. Figure 9 illustrates the changes in new flows in the network before and after the DDoS attack, which can be roughly divided into three stages.
(a) Before the attack is initiated, in the first 30 s, Host 1 continuously sends traffic data containing some new flows into the network at a rate of 20 Kf/s. During this phase, the switch forwards the new-flow requests from Host 1 to the controller for processing instructions, while other legitimate flows are forwarded normally; hence, there is a certain gap between the number of flows sent by Host 1 and the number of new flows received by the controller.
(b) During the attack, Host 1 stops sending normal data packets and gradually increases the sending rate of new flows, reaching a maximum rate of 95 Kf/s at 55 s, then gradually decreases and stops the attack at 65 s. In the initial stage of the attack, the switch tries to accommodate the forwarding requests from Host 1, so the controller receives a large number of new request packets. However, once the system detects the attack, the mitigation mechanism is activated, and the controller restricts the new-flow requests from the attacking host, completely blocking them from reaching the controller to prevent the attack from further consuming SDN resources. During this period, the detection system continuously monitors the network status. Although the attack stops at 65 s, the restriction period is not yet over, so it is not until approximately 75 s that Host 1 regains the ability to send new flows and the controller gradually starts receiving new-flow request data again.
(c) After the attack concludes, Host 1 resumes its data transmission state 65 s later, and the flow rate received by the controller maintains a normal gap with the flow rate sent by Host 1, as it did before the attack.
Furthermore, during the period Switch 1 was under attack, Host 2 and Host 3 continued to send traffic data of random intensity as usual.Figure 10 shows the packet reception rate at the port of Switch 1 connected to Host 2 and Host 3, as well as the packet transmission rate from other ports except the controller, with the rates measured in Kilopackets per second (Kp/s).It can be observed that the switch maintains an overall relative balance between receiving and transmitting data packets when forwarding normal data packets, and this balance is consistently maintained even during the initiation and progression of the attack.This indicates that the attack detection and mitigation system we have designed is not only capable of effectively detecting attacks present within the network, but it can also autonomously mitigate the effects of these attacks, ensuring that other hosts in the network can continue to send data normally.
The aforementioned experiments demonstrate that the attack mitigation mechanism we designed can implement restriction strategies on the attack source within a short time after identifying the source of the network attack, thereby ensuring that the SDN has sufficient available resources to provide normal services for the network.
Conclusion
SDN, as a trend in the development of future networks, urgently requires a fast and efficient anomaly detection method to maintain its own security.We propose a two-stage attack detection approach.First, attack detection based on switch statistics can quickly achieve a coarse-grained detection of network attacks by calculating the statistical information of switch ports, without adding network components and communication volume.Second, attack detection based on multi-dimensional traffic features uses wavelet transform and deep learning technology to perform multi-dimensional and in-depth feature extraction on traffic feature data, which is conducive to accurately classifying traffic samples.Additionally, the traceability method based on graph theory and the identifiers of switches and ports, along with the forwarding restriction-based mitigation strategy, can prevent excessive consumption of SDN resources.The experimental results show that the detection method we proposed can fully utilize the statistical information of switches and the characteristic data of traffic to achieve rapid detection of DDoS attacks and accurate identification of attack samples, achieving higher detection accuracy than traditional methods.Finally, the mitigation mechanism can effectively prevent the SDN controller from being overloaded and maintain the normal operation of the network.Future research will focus on how to apply the
Figure 2. Overall architecture of the attack detection and mitigation system.
Switch statistical features: NFI, the number of network traffic flows entering the switch; NFO, the number of network traffic flows leaving the switch; NPi, the number of PacketIn packets forwarded from the switch to the controller; NPI, the number of packets entering the switch; NPO, the number of packets leaving the switch; RPi, the ratio of the number of network flows entering the switch to the number of PacketIn packets forwarded from the switch to the controller (inflow-forwarding ratio); RFI, the ratio of the number of network flows leaving the switch to the number of normal network flows entering the switch (normal forwarding ratio); NP, the difference between the number of packets entering and leaving the switch.
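A minimal sketch of how these per-switch statistics could be derived from raw port counters is given below; the counter field names and the example numbers are hypothetical and only illustrate the ratios defined above.

```python
from dataclasses import dataclass

@dataclass
class SwitchCounters:
    """Hypothetical raw counters collected per statistics interval."""
    flows_in: int        # NFI: flows entering the switch
    flows_out: int       # NFO: flows leaving the switch
    packet_in_msgs: int  # NPi: PacketIn messages sent to the controller
    packets_in: int      # NPI: packets entering the switch
    packets_out: int     # NPO: packets leaving the switch

def switch_features(c: SwitchCounters) -> dict:
    """Compute the statistical features used for coarse-grained detection."""
    rpi = c.flows_in / c.packet_in_msgs if c.packet_in_msgs else 0.0   # inflow-forwarding ratio
    rfi = c.flows_out / c.flows_in if c.flows_in else 0.0              # normal forwarding ratio (approximate)
    return {
        "NFI": c.flows_in,
        "NFO": c.flows_out,
        "NPi": c.packet_in_msgs,
        "NPI": c.packets_in,
        "NPO": c.packets_out,
        "RPi": rpi,
        "RFI": rfi,
        "NP":  c.packets_in - c.packets_out,
    }

# Example: a switch flooded with unmatched new flows shows a low RFI and a large NP
print(switch_features(SwitchCounters(5000, 800, 4800, 60000, 20000)))
```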
Figure 6. Results of abnormal detection for each switch.
(c) Detection performance in other data sets.
Figure 7. Detection performance of MDDCC under different decomposition levels.
Figure 8. Classification performance of MDDCC on the InSDN dataset.
Figure 9. Changes of new flows before and after DDoS attacks and during mitigation.
Figure 10. Rate statistics of normal traffic received and sent by Switch 1.
Table 2. Controller and host address information.
Schematic diagram of a pulsed DDoS attack.
Table 3. MDDCC hyperparameters.
Table 4. Matrix of the relationship between true values and predicted values.
Table 5. Detection time of attack samples.
Table 6. MDDCC's detection performance on the simulation dataset.
Table 7. Sample distribution of the InSDN data set after division.
Table 8. Detection performance of MDDCC on the CIC-IDS2017 dataset.
Table 9. Detection performance of MDDCC on the CIC-DDoS2019 dataset. MDDCC's detection performance on these open network data sets shows that the model has strong generalization ability.
Table 10. Performance comparison with different detection models. | 12,230.4 | 2024-07-16T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Modeling the spectrum and composition of ultrahigh-energy cosmic rays with two populations of extragalactic sources
We fit the ultrahigh-energy cosmic-ray (UHECR, $E\gtrsim0.1$ EeV) spectrum and composition data from the Pierre Auger Observatory at energies $E\gtrsim5\cdot10^{18}$ eV, i.e., beyond the ankle, using two populations of astrophysical sources. One population, accelerating dominantly protons ($^1$H), extends up to the highest observed energies with a maximum energy close to the GZK cutoff and an injection spectral index near that of the Fermi acceleration model; the other population accelerates light-to-heavy nuclei ($^4$He, $^{14}$N, $^{28}$Si, $^{56}$Fe) with a relatively low rigidity cutoff and a hard injection spectrum. A significant improvement in the combined fit is noted as we go from a one-population to a two-population model. For the latter, we constrain the maximum allowed proton fraction in the highest-energy bin at 3.5$\sigma$ statistical significance. In the single-population model, low-luminosity gamma-ray bursts turn out to match the best-fit evolution parameter. In the two-population model, active galactic nuclei are consistent with the best-fit redshift evolution parameter of the pure proton-emitting sources, while tidal disruption events could be responsible for emitting the heavier nuclei. We also compute the expected cosmogenic neutrino flux in such a hybrid source population scenario and discuss possibilities of detecting these neutrinos with upcoming detectors to shed light on the sources of UHECRs.
I. INTRODUCTION
Identifying the sources of ultrahigh-energy cosmic rays (UHECRs, E ≳ 0.1 EeV) is one of the outstanding problems in astroparticle physics [1,2]. Active Galactic Nuclei (AGNs) residing at the centers of nearby radio galaxies are considered to be a potential candidate source class for UHECR acceleration [3][4][5][6][7]. Studies involving the origin of TeV γ-rays assert blazars as ideal cosmic accelerators [8][9][10][11]. A recent analysis by the Pierre Auger Observatory has found a possible correlation between starburst galaxies and the observed intermediate-scale anisotropy in UHECR arrival directions, with a statistical significance of 4σ in contrast to isotropy [12][13][14]. There are also propositions of other transient high-energy phenomena like gamma-ray bursts (GRBs) [15][16][17][18][19][20], tidal disruption events (TDEs) of white dwarfs or neutron stars [21][22][23][24], as well as pulsar winds [25,26], which can reach the energy and flux required to explain the observed UHECR spectrum. Nevertheless, a direct correlation of these known source catalogs, derived from X-ray and γ-ray observations, with an observed UHECR event is yet to be made [27][28][29][30]. The different source classes allow an extensively wide range of UHECR parameters to be viable in the acceleration region. UHECRs produce neutrinos and γ-rays in interactions with cosmic background photons during their propagation over cosmological distances. The current multimessenger data can only constrain UHECR source models and provide hints towards plausible accelerator environments [31,32], rejecting the possibility of a pure proton composition at the highest energies [33][34][35][36][37]. Deflections in Galactic and extragalactic magnetic fields pose an additional challenge in UHECR source identification.
The Pierre Auger Observatory (PAO) in Malargüe, Argentina [38] and the Telescope Array (TA) experiment in Utah, United States [39] are attaining unprecedented precision in the measurement of UHECR flux, composition, and arrival directions from 0.3 EeV to beyond 100 EeV using their hybrid detection technique [40,41]. On incidence at the Earth's atmosphere, these energetic UHECR nuclei initiate hadronic cascades which are intercepted by the surface detector (SD), and the simultaneous fluorescence light emitted by the nitrogen molecules in the atmosphere is observed using the fluorescence detector (FD). This extensive air shower (EAS) triggered by the UHECRs is recorded to measure the maximum shower-depth distribution (X_max) [42]. However, even with the large event statistics observed by PAO, the mass composition is not as well constrained as the spectrum and anisotropy up to ∼ 100 EeV [43]. The first two moments of X_max, viz., the mean X_max and its shower-to-shower fluctuation σ(X_max), serve to deduce the mass composition. The standard shower propagation codes, e.g., corsika [44], conex [45], etc., depend on the choice of a hadronic interaction model and photodisintegration cross-section, which are extrapolations of hadronic physics to the ultrahigh-energy regime. Uncertainties in these models propagate to uncertainties in the reconstruction of the mass composition of observed events. Lifting the degeneracy in the mass composition will be essential to constrain the source models.
The current LHC-tuned hadronic interaction models, viz., sybill2.3c [46], epos-lhc [47], and qgsjet-II.04 [48], differ in their inherent assumptions and thus lead to different inferences of the mass composition from the same observed data. Current estimates from PAO predict that the relative fraction of protons decreases with increasing energy above 10^18.3 eV for all three models. For the first two models, N dominates at 10^19.6 eV, while for the third model, the entire contribution at the highest energy comes from He. The ankle at E ≈ 10^18.7 eV corresponds to a mixed composition with He dominance and lesser contributions from N and H, except for qgsjet-II.04, which suggests a zero N fraction [49]. The ankle is often inferred as a transition between two or more different populations of sources, leading to a tension between the preference for a Galactic or extragalactic nature of the sub-ankle spectrum. Based on the observed anisotropy and light composition, some UHECR models invoke increased photohadronic interactions of UHECRs in the environment surrounding the source. The magnetic field of the surrounding environment can confine the heavier nuclei with energies higher than that corresponding to the ankle, while they undergo photodisintegration/spallation to produce the light component in the sub-ankle region [50][51][52]. This requires only a single class of UHECR sources that accelerate protons and nuclei. However, it is also possible to add a distinct light-nuclei population of extragalactic origin that can explain the origin of the sub-ankle spectrum [53][54][55]. A purely protonic component, in addition to a Milky Way-like nuclear composition, has also been studied [55]. The proton fraction in the UHECR spectrum for various source models can be constrained through composition studies and compliance with multimessenger data [56,57].
In this work, first, we perform a combined fit of the spectrum and composition data at E ≳ 5·10^18 eV measured by PAO [58], to find the best-fit parameters for a single population of extragalactic UHECR sources injecting a mixed composition of representative elements (1H, 4He, 14N, 28Si, 56Fe). The best-fit 1H abundance fraction is found to be zero in this case, conceivable within our choice of the photon background model, photodisintegration cross-section, and hadronic interaction model. Next, we show that within the permissible limit of current multimessenger photon and neutrino flux upper limits [59,60], the addition of a purely protonic (1H) component extending up to the highest-energy bin can significantly improve the combined fit of spectrum and composition. We consider that this component originates from a separate source population than the one accelerating light-to-heavy nuclei, and we fit the region of the spectrum above the ankle, i.e., E ≳ 5·10^18 eV. The best-fit values of the UHECR parameters are calculated for both populations, allowing for a one-to-one comparison with the single-population case. We study the effect of variation of the proton injection spectral index, which was not done in earlier studies, and indicate the maximum allowed proton fraction at the highest-energy bin up to 3.5σ statistical significance. We calculate the fluxes of cosmogenic neutrinos that can be produced by these two populations. We also explore the prospects of their observation by upcoming detectors, and probe the proton fraction at the highest energy of the UHECR spectrum. Lastly, we take into account the redshift evolution of the two source populations, which is found to further improve the combined fit. We interpret the credibility of the best-fit redshift distributions in light of known candidate classes. We explain our model assumptions and simulation setup in Sec. II and present our results for both single-population and two-population models in Sec. III. We discuss our results and possible source classes in light of the two-population model in Sec. IV and draw our conclusions in Sec. V.
II. UHECR PROPAGATION AND SHOWER DEPTH DISTRIBUTION
UHECRs propagate over cosmological distances undergoing a variety of photohadronic interactions. These interactions lead to the production of secondary particles, viz., cosmogenic neutrinos and photons. The dominant photopion production of UHECR protons on the cosmic microwave background (CMB) via the delta resonance occurs at ≈ 6.8 × 10^19 eV, producing neutral and charged pions (π0, π+) with 2/3 and 1/3 probability, respectively. The neutral pions decay to produce γ-rays (π0 → γγ), while the charged pions decay to produce neutrinos (π+ → µ+ + ν_µ, followed by µ+ → e+ + ν_e + ν̄_µ). Neutrinos can also be produced through other pγ processes and neutron beta decay (n → p + e− + ν̄_e). Bethe-Heitler interactions of UHECR protons of energy ≈ 4.8 × 10^17 eV with CMB photons can produce e+e− pairs. The e+ and e− produced through various channels can iteratively produce high-energy photons by inverse-Compton scattering of cosmic background photons or synchrotron radiation in the extragalactic magnetic field (EGMF). The produced photons can undergo Breit-Wheeler pair production. All these interactions also hold for heavier nuclei (A_Z X, Z > 1), in addition to photodisintegration. The interactions may also occur with the extragalactic background light (EBL), which has a higher energy than the CMB, for cosmic rays of lower energy. Besides, all particles lose energy due to the adiabatic expansion of the universe. We consider ΛCDM cosmology with the parameter values H_0 = 67.3 km s^−1 Mpc^−1, Ω_m = 0.315, Ω_Λ = 1 − Ω_m [61]. While cosmic rays are deflected by the Galactic and extragalactic magnetic fields, the neutrinos travel unaffected by matter or radiation fields, and undeflected by magnetic fields.
The observed spectrum depends heavily on the choice of injection spectrum. We consider that all elements are injected by the source following the spectrum given by

J_i(E) = A_0 K_i (E/E_0)^{-α} f_cut(E, Z_i R_cut).     (1)

This represents an exponential cutoff power-law function, where K_i and α are the abundance fraction of element i and the spectral index at injection; A_0 and E_0 are an arbitrary normalization flux and reference energy, respectively. A similar spectrum has been considered in the combined fit analysis by the PAO [43]. The broken exponential cutoff function is written as

f_cut(E, Z_i R_cut) = 1 for E ≤ Z_i R_cut, and exp(1 − E/(Z_i R_cut)) for E > Z_i R_cut.     (2)

We use the CRPropa 3 simulation framework to find the particle yields obtained at Earth after propagation through extragalactic space from the source to the observer [62]. We find the best-fit values of the UHECR parameters α, the rigidity cutoff (R_cut), and K_i for both the one-population and two-population models. The normalization depends on the source model and the source population. The spectrum of EBL photons and its evolution with redshift is not as well known as that of the CMB. We use the latest, updated EBL model by Gilmore et al. [63] and the talys 1.8 photodisintegration cross-section [64].
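As a quick numerical illustration of this injection law, the sketch below evaluates the power law with the broken exponential cutoff; the normalization, abundance fraction, charge, and rigidity cutoff in the example are placeholder values, not the best-fit parameters of this work.

```python
import numpy as np

E0 = 1e18  # reference energy in eV (placeholder)

def f_cut(E, Z, R_cut):
    """Broken exponential cutoff: flat below the charge-scaled rigidity cutoff,
    exponentially suppressed above it."""
    E_cut = Z * R_cut
    return np.where(E <= E_cut, 1.0, np.exp(1.0 - E / E_cut))

def injection_spectrum(E, K, alpha, Z, R_cut, A0=1.0):
    """Injected flux of one element: power law times the cutoff function."""
    E = np.asarray(E, dtype=float)
    return A0 * K * (E / E0) ** (-alpha) * f_cut(E, Z, R_cut)

# Example: nitrogen-like injection (Z = 7) with a hard index and low rigidity cutoff
energies = np.logspace(18.0, 20.5, 6)  # eV
print(injection_spectrum(energies, K=0.3, alpha=1.5, Z=7, R_cut=10**18.3))
```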
We use the parametrizations given by PAO, based on the Heitler model of EAS, to calculate the mean depth of the cosmic-ray air shower maximum, ⟨X_max⟩, and its dispersion from the first two moments of ln A [65,66]:

⟨X_max⟩ = ⟨X_max⟩_p + f_E ⟨ln A⟩,     σ²(X_max) = ⟨σ²_sh⟩ + f_E² σ²_lnA,

where ⟨X_max⟩_p is the mean maximum depth of proton showers and f_E = ξ − D/ln 10 + δ log10(E/E_0) is a parameter which depends on the energy of the UHECR event; ξ, D, and δ depend on the specific hadronic interaction model. σ²_lnA is the variance of the ln A distribution and ⟨σ²_sh⟩ is the average shower-to-shower variance of X_max weighted according to the ln A distribution, where σ²_p is the X_max variance for proton showers, depending on energy and three model-dependent parameters. In this work, we use the updated parameter values obtained from the conex simulations [45], for one of the post-LHC hadronic interaction models, sybill2.3c.
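A minimal sketch of this parametrization is given below, assuming the standard form quoted above; all numerical coefficients in the example are placeholders rather than the sybill2.3c values taken from conex.

```python
import numpy as np

E0_REF = 1e19  # reference energy in eV for the energy-dependent term (assumption)

def f_E(E, xi, D, delta):
    """Energy-dependent slope relating <lnA> to the shift of <Xmax>."""
    return xi - D / np.log(10.0) + delta * np.log10(E / E0_REF)

def mean_xmax(E, lnA_mean, xmax_p, xi, D, delta):
    """<Xmax> = <Xmax>_p + f_E <lnA>."""
    return xmax_p + f_E(E, xi, D, delta) * lnA_mean

def sigma_xmax(E, lnA_mean, lnA_var, sigma2_p, a, b, xi, D, delta):
    """sigma(Xmax): shower-to-shower term averaged over the lnA distribution
    plus the mass-dispersion term f_E^2 * Var(lnA)."""
    lnA2_mean = lnA_var + lnA_mean**2                       # <(lnA)^2>
    sigma2_sh = sigma2_p * (1.0 + a * lnA_mean + b * lnA2_mean)
    return np.sqrt(sigma2_sh + f_E(E, xi, D, delta) ** 2 * lnA_var)

# Placeholder coefficients, for illustration only
print(mean_xmax(1e19, 2.0, xmax_p=760.0, xi=-1.5, D=25.0, delta=1.0))
print(sigma_xmax(1e19, 2.0, 0.8, sigma2_p=3500.0, a=-0.4, b=0.04, xi=-1.5, D=25.0, delta=1.0))
```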
III. RESULTS
We perform a combined fit of our UHECR source models to the spectrum and composition data measured by PAO [49,67], for the one-population and two-population models of the UHECR sources. The fit region corresponds to energies above the ankle, i.e., E ≳ 5·10^18 eV, in the spectrum as well as the composition. We calculate the goodness-of-fit using the standard χ² formalism,

χ²_j = Σ_i [ (y_i^obs(E) − y_i^mod(E; a_M)) / σ_i ]²,

where the subscript j corresponds to any of the three observables, viz., spectrum, X_max, or σ(X_max). To find the best-fit cases, we minimize the sum of all the χ²_j values. Here y_i^obs(E) is the measured value of an observable in the i-th energy bin corresponding to a mean energy E, and y_i^mod(E; a_M) is the value obtained numerically; a_M are the best-fit values of the M parameters varied in the simulations, and σ_i are the errors provided by PAO. We denote the spectral fit as χ²_spec and the composition fit as χ²_comp. The latter represents the goodness-of-fit considering X_max and σ(X_max) simultaneously. In the following subsections, we demonstrate the one-population model in Subsec. III A, the transition due to the addition of an exclusive proton-injecting class in Subsec. III B, and finally the effects of redshift distribution in Subsec. III C.
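The combined objective can be organized as in the short sketch below; the dictionary layout of the data and model arrays is an illustrative assumption standing in for the PAO spectrum and composition points.

```python
import numpy as np

def chi2_single(y_obs, y_mod, sigma):
    """Goodness of fit for one observable: sum over energy bins of the
    squared, error-weighted residuals between data and model."""
    y_obs, y_mod, sigma = map(np.asarray, (y_obs, y_mod, sigma))
    return np.sum(((y_obs - y_mod) / sigma) ** 2)

def chi2_total(data, model):
    """Combined statistic: spectrum + mean Xmax + sigma(Xmax)."""
    return sum(
        chi2_single(data[key]["y"], model[key], data[key]["sigma"])
        for key in ("spectrum", "xmax_mean", "xmax_sigma")
    )

# Toy example with three bins per observable
data = {k: {"y": np.ones(3), "sigma": np.full(3, 0.1)}
        for k in ("spectrum", "xmax_mean", "xmax_sigma")}
model = {k: np.full(3, 1.05) for k in data}
print(chi2_total(data, model))  # 0.25 per bin per observable -> 2.25
```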
A. One-population model
We start by considering a single population of extragalactic sources up to a redshift z = 1, injecting a mixed composition of the representative elements 1H, 4He, 14N, 28Si, and 56Fe following the injection spectrum given by Eq. 1. The elements are injected with energy between 0.1 − 1000 EeV. The combined fit analysis done by PAO argues that only particles originating from z ≲ 0.5 are able to reach Earth with E > 5·10^18 eV [43,68,69]. Indeed, in our case, the contribution at the spectral cutoff comes from 56Fe. Hence, sources located farther than z_max = 1 are unable to contribute to the spectrum above the ankle (≈ 10^18.7 eV) [see, e.g., Appendix C of 31]. This is because, as the distance of such heavy-nuclei-injecting sources increases, the rate of photodisintegration also gradually increases, thus decreasing their survival rate at the highest energies. Moreover, it was found that increasing z_max has no effect on the best-fit parameters found with z_max = 1 [32]. The source distribution is assumed to be uniform over comoving distance. In a later subsection, we check the effects of a non-trivial redshift evolution for the one-population model.
We scan the parameter space by varying the rigidity cutoff log10(R_cut/V), the injection spectral index α, and the composition fractions K_i over a grid of points. All the parameter values for the best-fit case of the single-population model are listed in Table I. We see that the best-fit 1H fraction turns out to be zero, and a non-zero 56Fe component is unavoidable in this case. Indeed, from the best-fit spectrum, shown in the upper left panel of Fig. 1, the contribution from the Z = 1 component above 5·10^18 eV is infinitesimal. Since the heavier nuclei must come from nearby sources for them to survive at the highest energies, the maximum rigidity in this case suggests that the cutoff in the spectrum originates from the maximum acceleration energy at the sources. The fit, however, corresponds to a negative injection spectral index, which is difficult to explain by either the existing particle acceleration models or by sufficient hardening due to photohadronic interactions in the environment surrounding the source. The slope of the simulated X_max plot (cf. Fig. 1), in comparison to data, suggests that the addition of a light element above 10^19 eV can improve the fit. Motivated by these characteristics of the combined fit, it is natural to add the contribution from another source population and check the effects on the spectrum and composition.
B. Two-population model
We consider a discrete extragalactic source population injecting 1H following the spectrum of Eq. 1. We refer to this as source-class I (abbv. Cls-I). This pure-proton component has a distinct rigidity cutoff R_cut,1 and injection spectral index α_1 ≳ 2, such that the spectrum extends up to the highest-energy bin of the observed UHECR spectrum. The normalization A_1 is fixed by requiring that the proton flux equals f_H J(E_h) at the highest-energy bin, where J(E) = dN/dE is the observed spectrum and E_h is the mean energy of the highest-energy bin. f_H is an additional parameter that sets the proton fraction in the highest-energy bin of the UHECR spectrum.
Another population (source-class II, abbv. Cls-II) injects light-to-heavy nuclei, viz., 4He, 14N, 28Si, and 56Fe, as we have already seen that for a mixed composition at injection, the contribution of the 1H abundance tends to be zero above the ankle energy. Cls-II also follows the spectrum in Eq. 1 with rigidity cutoff R_cut,2 and injection spectral index α_2, and the abundance fractions at injection given by K_i (Σ_i K_i = 100%). The normalization A_2 in this case is a free parameter which is adjusted to fit the spectrum and composition. As in the single-population model, here too, we set the maximum redshift of the sources to z_max = 1. Although the anisotropy of UHECR arrival directions suggests that the observed spectrum depends on the position distribution of their sources, a definitive source evolution model is difficult to find. The rigidity cutoff and the injection spectral index will vary widely with the variation of the evolution function and its exponent. We first consider that both source populations are devoid of redshift evolution, i.e., m = 0 in the (1 + z)^m type of source evolution models. Afterwards, in the next subsection, we present the m ≠ 0 cases for the one- and two-population models. The cumulative contribution of Cls-I and Cls-II is used to fit the UHECR spectrum and composition for fixed values of f_H. We vary f_H from 1.0 − 20.0%, at intervals of 0.5% between 1.0 − 2.5% and at intervals of 2.5% between 2.5 − 20.0%, to save computation time. α_1 is varied through the values 2.2, 2.4, and 2.6, inspired by previous analyses with light elements fitting the UHECR spectrum [70,71]. We vary log10(R_cut,1/V) in the interval [19.5, 20.2] at grid spacings of 0.1, and log10(R_cut,2/V) in [18.22, 18.36] at grid spacings of 0.02. For each combination of {α_1, f_H}, we find the best-fit values of log10(R_cut,1/V), log10(R_cut,2/V), α_2, and the composition K_i at injection of Cls-II that minimize the χ²_tot of the combined fit; a sketch of this scan is given below. Due to the increased number of parameters, we set the precision of the composition K_i to 0.25%. These parameter sets are listed in Table II. For α_1 = 2.2 and 2.4, the χ²_tot value monotonically increases with f_H beyond the best-fit value, while for α_1 = 2.6, an alternating behaviour is obtained. The best fits are found at f_H = 1.5%, 2.5%, and 2.0%, respectively, for α_1 = 2.2, 2.4, and 2.6. For all the cases, a significant improvement in the combined fit is evident compared to the one-population model. It is worth pointing out that the minima of χ²_comp and χ²_spec do not occur simultaneously, and the variation in the best-fit value of log10(R_cut,2/V) is insignificant. In the top right and bottom panels of Fig. 1, we show the best-fit cases II, XIV, and XXV, corresponding to α_1 = 2.2, 2.4, and 2.6, respectively. The minimum χ² values for all three cases are comparable and very close to each other, indicating the best fits are equally good for all the α_1 values considered. The pure-proton component favors a higher value of cutoff rigidity than Cls-II and a steeper injection spectral index.
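The grid scan referred to above can be organized as in the following sketch; `simulate_flux` is an assumed stand-in for the CRPropa-propagated model prediction and `chi2_total` refers to the combined statistic sketched earlier, so none of these names come from the original analysis code.

```python
import itertools
import numpy as np

alpha1_values = [2.2, 2.4, 2.6]
fH_values = [1.0, 1.5, 2.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0]  # percent
log_rcut1_values = np.round(np.arange(19.5, 20.2001, 0.1), 2)
log_rcut2_values = np.round(np.arange(18.22, 18.3601, 0.02), 2)

def scan(data, simulate_flux, alpha2_values, compositions, chi2_total):
    """Brute-force search for the parameter set minimizing the combined chi-square.
    `simulate_flux` maps a parameter dict to model predictions for the spectrum,
    <Xmax>, and sigma(Xmax); `compositions` is an iterable of K_i dictionaries."""
    best_chi2, best_params = np.inf, None
    for a1, fH, r1, r2, a2, K in itertools.product(
            alpha1_values, fH_values, log_rcut1_values,
            log_rcut2_values, alpha2_values, compositions):
        params = dict(alpha1=a1, fH=fH, log_rcut1=r1,
                      log_rcut2=r2, alpha2=a2, fractions=K)
        chi2 = chi2_total(data, simulate_flux(params))
        if chi2 < best_chi2:
            best_chi2, best_params = chi2, params
    return best_chi2, best_params
```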
It is instructive to compare the all-flavor neutrino fluxes resulting from the two-population model with the current 90% C.L. differential flux upper limits imposed by 9 years of IceCube data [60]. The hard spectral index and lower maximum rigidity in the case of the one-population model lead to a neutrino spectrum much lower than the current and upcoming neutrino detector sensitivities. This is shown in the top left panel of Fig. 2 along with the current sensitivity of PAO [72,73] and that predicted for 3 years of observation by GRAND [74,75] and POEMMA [76,77]. We also present the allowed range of neutrino flux from Cls-I and Cls-II in the two-population model for f_H = 1.0 − 20.0%. The cosmogenic neutrino flux from Cls-I is within the reach of the proposed GRAND sensitivity. The all-flavor integral limit for GRAND implies an expected detection of ∼ 100 neutrino events within 3 years of observation for a flux of ∼ 10^−8 GeV cm^−2 s^−1 sr^−1. This implies that with a further increase in exposure time, GRAND should be able to constrain our two-population model parameters if f_H ≳ 10%. As we find the best-fit H fraction to be zero in Table I, K_H is a redundant parameter in this case. Scanning the parameter space excluding the latter will result in the same values of the remaining 6 parameters, and thus the resulting model coincides with that of Cls-II in Table II. Thus, for a ∆χ² calculation between the one-population and two-population models, we consider the number of parameters in the former to be 6 and not 7. The difference in the number of parameters varied between the one-population and two-population models is one, i.e., R_cut,1. A smooth transition from the two-population model to the one-population model can be made by setting R_cut,1 = 0. This necessarily implies that f_H = 0 and there remains no α_1. Based on the |∆χ²| values obtained, we estimate the maximum allowed proton fraction at 3.5σ confidence level (C.L.) in the highest-energy bin. For α_1 = 2.2 this corresponds to ≈ 12.5%, for α_1 = 2.4 it corresponds to ≈ 15.0%, and for α_1 = 2.6 it turns out to be ≈ 17.5%. However, the maximum |∆χ²|, which also indicates the most significant improvement in contrast to the one-population model, is found for α_1 = 2.2, as shown in Fig. 3. The 2.6σ and 3.5σ C.L. are also indicated.
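For one additional degree of freedom, a chi-square difference between nested models can be translated into an approximate Gaussian significance; the sketch below uses one common convention for that conversion, and the numerical example is illustrative.

```python
from scipy.stats import chi2, norm

def delta_chi2_to_sigma(delta_chi2, dof=1):
    """Convert the chi-square improvement of a nested model comparison into an
    equivalent two-sided Gaussian significance (one common convention)."""
    p_value = chi2.sf(delta_chi2, dof)
    return norm.isf(p_value / 2.0)

# Example: with 1 extra parameter, |delta chi2| of about 12.25 corresponds to ~3.5 sigma
print(delta_chi2_to_sigma(12.25, dof=1))
```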
C. Redshift evolution of sources
In the preceding study with flat redshift evolution of the two populations of extragalactic sources, we see that the contribution of 1H from the light-to-heavy nuclei injecting sources to the combined fit of the energy spectrum and mass composition beyond the ankle is infinitesimal, whereas the pure-proton spectrum from Cls-I maintains a steady contribution up to the GZK energies, superposed on the Peters cycle pattern [78] resulting from Cls-II. We carry out a systematic analysis over plausible strengths of redshift evolution of the source classes. We assume the source distribution evolves with redshift according to (1+z)^m, where m is a free parameter. First, we find the best-fit value of m in the one-population model assuming a mixed composition at injection comprising 4He, 14N, 28Si, and 56Fe. The combined fit improves in comparison to the flat evolution case, but not substantially. The resulting spectrum and composition fits are shown in the top left panel of Fig. 4. The composition fit is found to be more significant than in the flat evolution case. In this case too, we see that the contribution from protons, resulting from the photodisintegration of heavier elements, is sub-dominant at E ≳ 10^18.7 eV. The redshift evolution index m is varied in the range −6 ≤ m ≤ +6 at intervals of 1.0, and the corresponding best-fit values of the cutoff rigidity, injection spectral index, and composition are calculated. The best-fit parameters and the fit statistics are indicated in Table III. The number of d.o.f is 25. The minimum χ² is obtained for the fit corresponding to m = +2. This indicates a wide range of candidate classes, e.g., low-luminosity GRBs (LL GRBs), where UHECR nuclear survival is possible inside the source/jet [20]. However, their redshift evolution is not well known but is expected to follow that of long GRBs, given by ψ(z) ∝ (1 + z)^2.1 for 0 < z < 3 [79].
For the two-population case, we need to take into account two values of the redshift evolution index, m_1 and m_2, for Cls-I and Cls-II, respectively. In the case of high-luminosity γ-ray sources, the dynamical timescales are larger than the nuclear interaction timescales inside the acceleration region. The relativistic jet provides a suitable environment for heavier nuclei to dissociate via interactions with ambient matter and radiation. Hence, they are ideal candidates for 1H injection. We identify our Cls-I with AGNs, injecting predominantly protons. The redshift evolution of AGNs follows the function ψ(z) ∝ (1 + z)^3.4 for z < 1.2 and X-ray luminosity in the range L_X ∼ 10^43 − 10^44 erg/s [80]. An even higher luminosity might be required to accelerate UHECR protons up to 10^20 eV [81]. In principle, one can consider even higher luminosity AGNs, but the number density decreases sharply with luminosity. The redshift evolution of medium-high luminosity AGNs (L_X ∼ 10^44 − 10^45 erg/s) is given by ψ(z) ∝ (1+z)^5.0 for z < 1.7. Radio-loud quasars with bolometric γ-ray luminosity ≳ 10^47 erg/s and a number density of 10^−5 − 10^−4 Mpc^−3 can meet the energy requirements for UHECR acceleration [82]. Hence, we vary m_1 through 3, 4, and 5. For m_2, we consider a wide range of values spanning from positive to negative, viz., +2, 0, −3, and −6, to find the best-fit region. Once again, we fix the injection spectral index of the proton-injecting sources to α_1 = 2.2, 2.4, and 2.6, which is now a more physically motivated choice for AGNs. As before, we vary the proton fraction (f_H) at the highest-energy bin from 1.0 − 2.0% at intervals of 0.5%, and from 2.5% − 10.0% at intervals of 2.5%. We find the best-fit values of R_cut,1, R_cut,2, α_2, and the fractional abundance of elements at injection (K_i) for Cls-II. For this case, we vary log10(R_cut,2/V) with a precision of 0.1. We represent the goodness-of-fit for the best-fit case corresponding to each set of {α_1, f_H, m_1, m_2} values in Fig. 5, separately for the combined fit, the spectrum fit, and the composition fit, from top to bottom, respectively. The combined fit improves as we go to more and more negative values of m_2 and lower values of α_1. We see the best fit occurs for m_1 = +3, m_2 = −6, α_1 = 2.2, and f_H = 1.5%. The details for this set are given in Table IV. The best-fit spectrum and composition are displayed in the top right panel of Fig. 4. However, it is interesting to note from Fig. 5 that the best-fit composition and best-fit spectrum cases are not coincident. The best-fit composition (χ²_comp = 10.29) is obtained for m_1 = +5, m_2 = 0, α_1 = 2.2, and f_H = 1.5%, whereas the best-fit spectrum (χ²_spec) occurs at m_1 = +5, m_2 = −6, α_1 = 2.6, and f_H = 2.5%. We also calculate the neutrino fluxes originating from the best-fit one-population and two-population models after considering the redshift evolution of the source classes. In the bottom left panel of Fig. 4, the neutrino flux increases in the case of the one-population model, owing to the positive redshift evolution (m = 2), while in the case of the two-population model, the flux from heavy-nuclei-injecting sources is greatly reduced (m_2 = −6), and the cumulative neutrino flux distribution is dominated by that from protons (m_1 = 3). The shaded region indicates the flux range enclosed by f_H = 1.0 − 20.0%, α_1 = 2.2, and is within the flux upper limit imposed by 9 years of IceCube data.
Even a small fraction of protons can yield a neutrino flux that is within the reach of the 3-yr extrapolated sensitivity of the proposed GRAND detector.
The preference for large negative values of m_2 can be attributed to specific source classes, such as tidal disruption events (TDEs) [21,83]. The event rate of TDEs depends on the number density of SMBHs as a function of redshift. The best-fit empirical model indicates a negative redshift evolution [84]. TDEs forming relativistic jets can be the powerhouse of UHECR acceleration, but their event rate severely constrains the UHECR flux [85], thus requiring a mixed or heavy composition at injection. A metal-rich composition consisting of a significant Si and Fe fraction is required to explain the spectrum with a population of TDEs [24]. In our case too, a high fraction of Fe is required to explain the spectrum at the highest energies, as indicated in Table IV. However, the survival of UHECR nuclei depends on the specific outflow model. For luminous jetted TDEs like Swift J1644+57 [86,87], which reaches a bolometric luminosity L_bol ≳ 10^48 erg/s in the high state, UHECR acceleration becomes difficult via the internal shock model, but is allowed for TDEs with lower luminosities. Forward/reverse shock models were also found to be in accordance with heavy nuclei injection [88]. The best fit obtained in the two-population model with non-trivial redshift evolution (m_1, m_2 ≠ 0) is better than the flat evolution case and also better than the one-population model with redshift evolution. A smooth transition can be made from the two-population model to the one-population model by setting R_cut,1 = 0 and m_1 = 0, so the difference in the number of parameters varied is two. Fig. 6 shows the |∆χ²| values for two d.o.f. between the one-population and two-population cases, as a function of the proton fraction at the highest-energy bin. As before, we constrain the maximum allowed proton fraction at 3.5σ confidence level, which turns out to be between 7.5% and 15% for the different values of the proton injection spectral index considered. We also calculate the correlation between the fit parameters for the specific case of m_1 = 3, m_2 = −6, α_1 = 2.2, and f_H = 1.5%, i.e., for the best-fit case SRE-1 listed in Table IV. We vary the cutoff rigidity log10(R_cut,1/V) in the range 19. Fig. 7 shows the 1σ, 2σ, and 3σ C.L. contours for 25 d.o.f. The variation of the cutoff rigidity is not shown, since it was found to be insensitive to the variation of other parameters. We see that a hard injection spectral index α_2 ≈ 1.6 is preferred, which conforms with the recent predictions by the PAO [43] and analyses done by other works [31,32]. The composition is also in accordance with that predicted from the latest measurements, implying a progressively heavier composition at higher energies. The 1σ region in composition space corresponds to a high value of the Fe fraction, which is indeed needed for TDEs to have a significant contribution to the UHECR spectrum. Another candidate class which can represent our Cls-II is the low-luminosity (L_γ < 10^44 erg/s) and high synchrotron peaked BL Lacertae objects. They possess a negative redshift evolution and are predicted to be more numerous than their high-luminosity counterparts. However, a direct detection of these low-luminosity objects is difficult, and the current 4LAC catalog contains ∼ 20 such sources.
IV. DISCUSSIONS
The composition fit corresponding to the one-population model, especially the departure of the simulated X_max and σ(X_max) values from the data, leaves a substantial window for improvement. The addition of a light-nuclei component extending up to the highest observed energies should alleviate the mismatch. We exploit this possibility in our work by adding a distinct source population injecting 1H that extends up to the highest observed energies. Earlier works have considered a pure-protonic component with an assumed steep injection spectral index [54] to explain the region of the spectrum below the ankle. We do not fit the UHECR spectrum below the ankle, and the proton spectrum considered in our work contributes directly to the improvement in the composition fit at the highest energies. A relatively hard spectrum (α_1 = −1), in addition to a Milky Way-like nuclear composition, is considered in Ref. [55], extending up to the highest energies. We do not assume any fixed abundance fractions for the light-to-heavy nuclei injecting sources and calculate the best-fit values within the resolution adopted.
Ref. [57] has proposed an interaction-model-independent method to probe the allowed proton fraction for E_p ≳ 30 EeV, constrained by the cosmogenic neutrino flux upper limits at 1 EeV. Thus, they do not take into account the composition of primary cosmic rays inferred from air shower data. They have considered a generalized redshift evolution function of the proton-injecting sources, parametrized by the evolution index m. In our work, we fit the composition data X_max and σ(X_max) and the energy spectrum simultaneously to infer the proton fraction in a two-population scenario.
Here, we find a significant improvement in the combined fit to spectrum and composition data when adding an extragalactic source population emitting UHECRs as protons. For our choice of steep proton injection indices (α_1), the goodness-of-fit values are found to be comparable to each other. We also consider the injection index (α_2), maximum rigidity (R_cut,2), and composition fractions (K_i) of the second population injecting light-to-heavy nuclei to be variables and find the corresponding best-fit values. The corresponding improvement in the combined fit is found to be ≳ 3σ in some cases. We have also surveyed our results for a wide range of source redshift evolution. Such an analysis has already been done earlier for a single source population with a mixed composition of injected elements [31,32,89]. In our analysis, we find that, although a positive evolution index is preferred in the one-population model, the best-fit value changes sign on going to the two-population model. However, with increasing values of z_max, the variation of m can significantly affect the neutrino spectrum. We have kept the contributing sources within z ≤ 1 in view of the fact that particles originating at higher redshifts will contribute below the ankle, which we do not fit here. Thus, within the minimal requirements of this model, our neutrino spectrum can be considered a conservative lower bound in the two-population scenario.
Figure 7. For the best-fit case SRE-1 listed in Table IV: it can be seen that a high fraction of Fe is required at injection along with a hard injection spectral index. The diagonal plots represent the posterior probability distribution and red dots in the others indicate the central values. The 1σ, 2σ, and 3σ standard deviations are shown by dark to light-colored shading.
The resultant neutrino spectrum in the two-population model at E ≳ 0.1 EeV is dominated by that from pure protons. Even a small fraction of protons at the highest energy is capable of producing a significant flux of neutrinos. This is expected because of the maximum energy considered for the proton-injecting sources. Even for low f_H, the values of E_max are very close to the GZK cutoff energy, where resonant photopion production occurs, leading to pion-decay neutrinos. The double-humped feature of the neutrino spectrum is a signature of interactions on the CMB and EBL by cosmic rays of different energies. The higher-energy peak, produced from protons, possesses the highest flux, and the detection of these neutrinos at ∼ 3·10^18 eV will be a robust test of the presence of a light component at the highest energies, thus also constraining the proton fraction. For E < 0.1 EeV, the neutrinos from Cls-II become important, with peaks at ∼ 1 PeV and ∼ 40 PeV. Hence, the cumulative neutrino spectrum (Cls-I + Cls-II) exhibits three bumps for α_1 = 2.2 (see Fig. 2). But gradually, with increasing values of α_1, the lower-energy peak of Cls-I becomes significant, diminishing the "three-peak" feature until neutrinos from protons dominate down to ∼ 1 PeV for α_1 = 2.6. We present the upper limit on the maximum allowed proton fraction in the two-population model at ≈ 1.4 × 10^20 eV. This is based on the improvement in the combined fit compared to the one-population model, up to 3.5σ statistical significance. For a higher C.L., the proton fraction is even lower at the highest-energy bin. However, a nonzero proton fraction is inevitable. It has been shown earlier that the flux of secondary photons increases with an increasing value of α_1 [33]. If a single population injecting protons is used to fit the UHECR spectrum, the resulting cosmogenic photon spectrum saturates the diffuse gamma-ray background at ∼ 1 TeV for α_1 = 2.6, m = 0 [71]. In our two-population model, the proton fraction at the highest energies is much lower than the total observed flux. This ensures the resulting photon spectrum from Cls-I is well within the upper bound imposed by Fermi-LAT [59]. For Cls-II injecting heavier nuclei, the main energy-loss process is photodisintegration, which contributes only weakly to the cosmogenic photon flux. Hence, the two-population model which we invoke in our study is in accordance with the current multimessenger data.
The choice of the hadronic interaction model for our analysis is based on the interpretation of air shower data by the PAO [58,66]. It is found that qgsjet-II.04 is unsuitable compared to the other two models and leads to inconsistent interpretation of observed data [49]. Also, for our choice of photodisintegration cross-section, i.e. talys 1.8, the hadronic model sybill2.3c yields superior fits [32]. In general, the sybill2.3c model allows for the addition of a higher fraction of heavy nuclei, compared to others, at the highest energies. Indeed in Table II, it is seen that the lowest-χ 2 cases correspond to high K Fe , which increases monotonically with α 1 . The requirement of Fe abundance in one-population model is much lower than in the case of two-population model. For the latter, the cutoff in the cosmic ray spectrum cannot be solely explained by the maximum acceleration energy of iron nuclei at the sources, but also, must be attributed to photopion production of UHECR protons on the CMB to some extent.
In going from the one-population to the two-population model, the injection spectral index of the population injecting heavier elements changes sign from negative to positive, making it easier to accept in the context of various astrophysical source classes. Young neutron stars, e.g., can accelerate UHECR nuclei with a flat spectrum, α_2 ∼ 1 [90]. Particle acceleration in magnetic reconnection sites can also result in such hard spectral indices [see, e.g., 91]. Luminous AGNs and/or GRBs are probable candidates for Cls-I, accelerating protons to ultrahigh energies [15]. Cls-II, injecting light-to-heavy nuclei, suggests the sources to be compact objects or massive stars with a prolonged evolution history, leading to a rich, heavy nuclei abundance in them. In particular, the strongly negative redshift evolution and substantial Fe fraction allow us to identify Cls-II with TDEs. The problem in the case of a highly luminous object is that, although heavier nuclei may be accelerated in the jet, they interact with the ambient matter and radiation density in the environment near the sources [92]. To increase the survivability of UHECR nuclei, less luminous objects such as LL GRBs [93] are preferred.
V. CONCLUSIONS
Based on the spectrum and composition data measured by PAO, a combined fit analysis with a single population of extragalactic sources suggests that the composition fit at the highest energies deserves improvement. The slope of the simulated X_max curve implies that fitting the highest-energy data points with a contribution from only 56Fe will diminish the abundance of the lighter components 28Si, 14N, and 4He. This will in turn decrease the flux near the ankle region, thus resulting in a bad fit. The addition of another light component of extragalactic origin, preferably pure protons, extending up to the highest-energy bin can resolve this problem. From a critical point of view, this solution is not unique, but it is definitely a rectifying one. The combined fit improves significantly, and we present the maximum allowed proton fraction at the highest-energy bin of the spectrum data corresponding to > 3σ statistical significance. An additional population of extragalactic protons has also been suggested in Ref. [94], in the context of fitting the UHECR spectrum.
There are observational indications that different astrophysical source populations likely contribute to the UHECR data. A plausible hot spot around the nearby starburst galaxy M82 [95,96] in the TA data and an intermediate-scale anisotropy around the nearest radio galaxy Cen-A in the Auger data already suggest the possibility of two types of source populations. The Auger data, however, do not show any small-scale anisotropy, suggesting that the majority of UHECR sources are distributed uniformly in the sky. The recent 3σ correlation of an observed high-energy muon neutrino event detected by IceCube with the flaring blazar TXS 0506+056 at a moderate redshift of 0.34 is consistent with this scenario [97,98].
The two generic source classes of UHECRs studied here are also representative of the scenario described above. High-luminosity AGNs or GRBs could contribute a pure proton component that is significant at the highest UHECR energies. The resulting cosmogenic neutrino spectrum can be detected by future experiments with sufficient exposure, and the proton fraction in the highest-energy UHECR data can be tested. | 9,706.2 | 2020-04-16T00:00:00.000 | [
"Physics"
] |
Faith in the modern Reformed church: Calvin and Barth
Calvin and Barth are arguably the main exponents of two notable soteriological camps in the Reformed world nowadays and their soteriology has wide and sometimes unarticulated impacts on Reformed doctrine and praxis. By exploring the systematic theologies of Calvin and Barth, we articulate the similarities and differences between their views of faith. Both theologians emphasise that an individual’s faith must be in Christ and not in one’s own works; neither is one justified because of one’s faith, but because of Christ’s redemptive work. The locus of faith is the main point of difference: Calvin locates an individual’s faith in the Christ revealed in the Bible, whereas Barth locates it in Christ’s immanent revelation of himself at a time of crisis. Behind this difference are divergent views of the Bible and its relationships with theology and praxis.
INTRODUCTION
"Justification by faith alone" was a central cry of the Reformation since Luther understood Romans 1:17, "the just shall live by faith", to mean that believers receive Christ's righteousness by faith (Bingham 2019). This understanding is codified in the comprehensive Lutheran scholastic theology of Quenstedt (1685), and in Article 11 of the Anglican Thirty-Nine Articles, but it is arguably in the writings of Calvin's Institutes that the doctrine receives its most prominent treatment in the Reformation era. Whilst justification by faith is less prominent in modern Lutheranism and Anglicanism, it remains a doctrine whereby the mainline Reformed church is identified. It is, therefore, important that the wider 2. DEFINITIONS OF FAITH
Calvin
Calvin was trained as a lawyer in the art of rhetoric. This includes not only developing an effective argument, but also delivering it at appropriate locations in a wider communicative context. As Calhoun (1998) notes in his lectures, Calvin's ordering of the chapters in the Institutes instructs us, along with their contents, about the typical order of doctrinal growth in an individual who is coming to understand the systematic theological truths underpinning an individual's soteriological experiences. In the first edition (1536), faith is contained in Chapter Two of a total of six chapters. By Calvin's much-expanded final edition of the Institutes (1559/1560), faith is discussed mainly in Book 3, Chapters 2 ("Of faith", Vol. 1:467-507) and 11 ("Justification by faith", Vol. 2:36-59).
Although the Institutes is a body of systematic theology, Calvin writes pre-eminently as a biblical theologian who regarded his main work as the orderly preaching of Scripture (Schulze 1998:50). The content of his chapters represents a departure from the medieval scholastic pattern used in Peter Lombard's Sentences or in Thomas Aquinas' Summa Theologica, which had been the standard theological textbooks prior to the Reformation (Gonzalez 1971:258-279). A major Reformation theme is the sufficiency of Scripture, and Calvin places greater emphasis on Scripture, and less emphasis on philosophical method than the medieval schoolmen who preceded him. Some church historians suggest that Calvin's high view of the sovereignty of God dominates his theology; as one of God's attributes, Calvin indeed views God's sovereignty as overarching everything. However, to suggest that Calvin's high view of God's sovereignty is disproportionate to Calvin's high views of God's other attributes would be a mischaracterisation of his theology (Bouma 1947:34;Winterdink 1976:9;Pillay 2015). If anything may be said to loom large in Calvin's writings, it is his commitment to the divine inspiration and authority of the Bible (Murray 1959). Accordingly, to rightly understand Calvin's doctrine of faith, we must first make a brief excursion into Calvin's doctrine of revelation.
In Calvin (1949), God's created world affords men some understanding of the existence and character of the creator (1.5.1-11). However, because of man's blindness in his fallen state, this "light of nature" is inadequate to reveal enough of who God is or what is required to restore a sinner to fellowship with God (1.5.14). Therefore, God sent his Word, the Old and New Testament Scriptures, to "make himself known unto salvation" (1.6). A person cannot be firmly persuaded of a doctrine unless s/he is firmly persuaded that God is the author of the Scripture that contains it (1.7.4). This conviction of the divine inspiration of Scripture requires the internal teaching of the Holy Spirit (1.7.4-5) but is not contrary to arguments from human reason, which can be used as subordinate proofs (1.8.13). Hence, Calvin teaches that God's special revelation to mankind across the ages is the Bible.
From this basis, Calvin develops his doctrine of faith. It is one of the fruits of election (3.2.11): a gift of the Holy Spirit (3.1.4) that enables man to receive Christ as he is revealed in the Scriptures (3.2.6).
Whether God uses the agency of man, or works immediately by his own power, it is always by his word that he manifests himself to those whom he designs to draw to himself. Hence Paul designates faith as the obedience which is given to the Gospel (Rom. 1:5) (3.2.6).
Faith is more than a simple knowledge of the divine will (3.2.7); there are many kinds of knowledge, or understanding, that do not amount to faith (Jas. 2:19). Neither is faith a simple knowledge of divine benevolence, which "cannot be of much importance unless it leads us to confide in it" (3.2.7). It is not mere comprehension but "so much superior, that the human mind must far surpass and go beyond itself in order to reach it (Eph. 3:18-19)" (3.2.14). It is not even a cognitive agreement or concord with the teachings of Scripture (3.2.8). It is, rather, a firm and sure knowledge of the divine favour toward us, founded on the truth of a free promise in Christ, and revealed to our minds, and sealed on our hearts, by the Holy Spirit (3.2.7).
The principal act of faith is the inward embrace of God's promises; this is a peace or security that quiets and calms the conscience in the view of the judgement of God, and without which it is necessarily vexed and almost torn with tumultuous dread (3.2.16).
[Faith] seeks life in God, life which is not found in commands or the denunciations of punishment, but in the promise of mercy (3.2.29).
These are not conditional but "gratuitous" promises.
Believers, indeed, ought to recognise God as the judge and avenger of wickedness; and yet mercy is the object to which they properly look, since he is exhibited to their contemplation as 'good and ready to forgive,' 'plenteous in mercy,' 'slow to anger' [etc.] (3.2.29).
Faith centres on the person and work of Christ. Calvin observes that all the promises of Scripture point towards and receive their commensurate fulfilment in Christ: they are "yea and amen in Christ Jesus" (3.2.32; see Rom. 1:3; 1 Col. 2:2; 2 Col. 1:20). Both our faith and the Father's love rest on Christ who is "the bond by which the Father is united to us in paternal affection" (3.2.32). God views his children in Christ and only because of their relationship to Christ does he love them (3.2.32).
Calvin stops just short of asserting that full assurance is of the essence of faith but nonetheless sets a high bar of confidence in the promises of Scripture and their fulfilment in Christ, and a slightly lower, but still considerable, bar of confidence that Christ's redemptive work has been applied to the individual. Faith is not a "wavering" knowledge, "fluctuating with perpetual doubt", but "a firm and sure knowledge of the divine favour toward us" (3.2.7), bold and confident in Christ (3.2.15). In this instance, Calvin regards faith as a confidence, not only that God is generally benevolent or that Christ has the ability to save, but that God has favoured "us". It is not altogether clear whether this "us" is the majestic plural or the regular first-person plural and, by extension, whether Calvin means that the faithful individual knows that God is favourable to him/her specifically or to the church in general - but the former appears more plausible. It is a peace or security that quiets and calms the conscience in the view of the judgement of God, and without which it is necessarily vexed and almost torn with tumultuous dread (3.2.16).
Our knowledge that God is favourable to us does not arise from direct revelation, dreams, or ecstasies (1.9), but from confidence in the absoluteness and gratuity (freeness) of God's promises in Scripture as sealed by the Spirit operating in the believing soul (3.2.29). However, it is wrong to suggest that Calvin's doctrine of assurance leaves no room for the reality of unbelief in the life of the Christian.
Believers have a perpetual struggle with their own distrust and are thus far from thinking that their consciences possess a placid quiet, uninterrupted by perturbation. On the other hand, whatever be the mode in which they are assailed, we deny that they fall off and abandon that sure confidence which they have formed in the mercy of God (3.2.17).
David is given as an example of this, and contrasted with Ahaz, who heard God's promise, but his heart was shaken, and he ceased not to tremble (3.2.17). Lane helpfully summarises that, in Calvin's writings, full assurance (in degree) is not of the essence of faith, but assurance (in some measure) must necessarily exist if a person is extending faith in another (Lane 1979:32). On assurance, Calvin concludes: [When] faith is instilled into our minds, we begin to behold the face of God placid, serene, and propitious; far off, indeed, but still so distinctly as to assure us that there is no delusion in it (3.2.19).
Having considered Calvin's views on the nature of faith, our attention now turns to consider Calvin's doctrine of justification by faith. His order of unfolding the doctrine is noteworthy: he first explains what justification by faith is, and then its place in the ordo salutis (order of salvation).
Calvin explains what justification by faith is in terms that are familiar to any Reformed theologian nowadays: A man will be justified by faith when, excluded from the righteousness of works, he by faith lays hold of the righteousness of Christ, and clothed in it appears in the sight of God not as a sinner, but as righteous […] This justification consists in the forgiveness of sins and the imputation of the righteousness of Christ (3.11.2).
Let us analyse this statement. In Calvin's theology, there are two parts to the justification of sinners. First, there is the forgiveness of their sins, as Christ has paid the judicial penalty on their behalf. Secondly, there is the imputation, by the Spirit, of Christ's obedience to God's law, which he perfected as a man. These two parts mean that the justified sinner is now no longer merely guiltless but has possession of the full and perfect righteousness, required by the law, and accomplished by Christ.
Calvin repudiates Osiander's doctrine that a person's faith itself justifies him/her. Calvin states that "the power of justifying exists not in faith, considered in itself, but only as receiving Christ" (3.11.7). Osiander's position was broadly repeated later in church history by J.N. Darby and the Plymouth Brethren and refuted contemporaneously by Dabney (1890:169-228) in two long essays. The argument is too lengthy to enter into in detail here, but what both Calvin and Dabney argue is that belief in the truth and inspiration of Scripture, believing that Jesus is the Son of God, or indeed believing that there is forgiveness of sin in Christ - whilst all parts of faith - are not themselves the cause or locus of salvation. Rather, the locus of salvation is in the person and work of Christ, and one's faith, which is enabled by the activity of the Spirit in the soul, instrumentally receives Christ's justifying work. The "efficient cause" of salvation is the mercy and free love of the Father; the "material cause" is Christ and his obedience unto righteousness, and the "instrumental cause" is faith (3.14.17). We might summarise this by saying that, for Calvin and Dabney, we are not justified because of our faith; it is the instrument, not the grounds, of justification.
Unlike that of his Genevan successor Beza (2016), Calvin's ordo salutis was embryonic (McGowan 2004). He speaks of sanctification and regeneration as the same concept and, along with justification, as consequences of faith.
Christ given to us by the kindness of God is apprehended and possessed by faith, by means of which we obtain in particular a twofold benefit: first, being reconciled by the righteousness of Christ, God becomes instead of a judge, an indulgent Father; and, secondly, being sanctified by his Spirit, we aspire to integrity and purity of life. This second benefit -viz. regeneration" (3.11.1).
He nonetheless admits that they may be arranged in a better order than that in which they are here presented. But it is of little consequence, provided they are so connected with each other as to give us a full exposition and solid confirmation of the whole subject (3.11.16).
New readers of Calvin may be surprised to note how little he writes on the ordo salutis.
Barth
Karl Barth seemed destined to become an academic theologian. After studying at the Universities of Bern, Berlin (under Adolf von Harnack), Tübingen, and Marburg (under Wilhelm Herrmann), and rejecting the liberal school then dominating Germanic theological life, he became the minister of the village of Safenwil in the Swiss Aargau between 1911 and 1921. Academic life called, however, and he taught theology at the Universities of Göttingen, Münster, and Bonn, before taking a Professorship at Basel in 1935, in which Chair he was to remain for the rest of his academic life (Busch 1977). Among Barth's early theological influences is the existentialist Kierkegaard, whose idea of conversion as a surrender to God in a time of crisis was further developed by Barth: God is the infinite "wholly other", whereas man is finite, needy, and sinful (Paas 2016:267). Barth's view of the Bible is that it is the record of God's revelation - which is, essentially, Christ; the Bible is not intrinsically revelation but becomes revelation when it is applied to a person at a time of crisis (Paas 2016:267). However, when reading Barth's writings on faith, this view of Scripture does not immediately come to the fore. The theological relationship between Barth and the liberal 19th-century theologian Schleiermacher is debated. Heron (2000:394-395) shows that, although Barth was resolutely critical of Schleiermacher's theological method, he quotes him extensively and respected the paradigm-changing nature of his conclusions. Heron (2000:403) further observes that their understanding of God's revelation is radically different: Schleiermacher (1928:123) argues that God reveals himself immediately in the "common Spirit that animates the corporate life of believers", whereas Barth argues that God reveals himself transcendently through Jesus Christ as the Word of God with the Bible as a record or "witness" of that revelation (De Moor 1937; Ramm 1993:92-93). Hence, Schleiermacher's doctrine of revelation is anthropocentric, whereas Barth's is Christocentric; both differ from Calvin's doctrine of the divine inspiration of authoritative Scripture.
This excursion into Barth's background is necessary for our discussion of faith in Barthian theology, because his strongly objective Christocentric view pervades not only his doctrine of revelation - which, in turn, affects his view of faith - but also directly contributes to his conceptualisation of faith in his theological system. Barth's magnum opus is his multi-volume Church Dogmatics (1956); his writings on faith are concentrated in Volume IV.1: in Sections 61.4 ("Justification by faith alone", pp. 609-642); 63.1 ("Faith and its object", pp. 741-757), and 63.2 ("The act of faith", pp. 757-779). Note that Barth inverts Calvin's order by first discussing justification by faith, and then considering faith independently; we shall consider his treatments of these subjects in the order he lays them out.
Barth begins with the two dialectic poles that faith is both a real human action but also entirely received from outside the self, which accords with faith as the working of God and Barth's theological observation of God as "wholly other" (Barth 1960:72). Faith "is a free decision, but made with the genuine necessity of obedience" (Barth 1956:IV.1:620).
The ability required is a genuinely and concretely human ability, but … when a man does make use of it it is shown not to be an ability which he himself has contributed and exchanged as a presupposition, in the form of a capacity of his own (613).
Faith cannot be observed directly but only reflectively as it affects the life and actions of the possessor (613). The humanity and super-humanity of faith is expressed in the following dense quote, which we must then unpack: It is faith which can do what has to be done and what cannot be done by anyone naturally. It is in faith that a man surmounts the great difficulty which consists in the fact that he is not adapted of himself to do justice to the sovereign self-demonstration of the justified man -not to speak of the lesser difficulty caused by the historical questioning of this man, the anxiety whether he is not after all a myth or an illusion (614).
At this point, the reader may wonder whether Barth is conflating faith with assurance. The key, in this instance, is that the "justified man" of whom Barth speaks is not the justified sinner but Christ, and his main point is not questioning the reality of the individual person's justification but that faith rests assured of the historical reality of Christ as The Justified Man. It is the absolutely positive answer to the question of the reality and existence of the man justified by God … in his own person in its solidarity with all other men, and therefore virtually and prospectively in their persons too (614).
The distinctively Barthian doctrine of hypothetical prospective universal corporate election in Christ as The Elect Man emerges, in this instance, and we must pause a moment to outline it. Barth affirms double predestination, that, in the eternal decree, God elected man to life and death, but then restricts that elective decree to Christ who would undergo life and death in potentiality for all men (II.2) and every man (IV.1:750); this understanding of election is, for Barth, the basis of the gospel message (IV.3.2). Hence, Barth's theological system is highly Christocentric.
Barth makes negations similar to those of Calvin and Dabney when he observes that a person is not justified because of his/her faith, which would be Pharisaic self-righteousness (IV.1:617). It appears, but may simply be a matter of semantics, that Barth does not regard faith as a "means" of grace: For there is as little justification of man 'by' -that is to say, by means of -the faith produced in him … as there is a justification 'by' any other works (616-617).
The one who is righteous by faith can only live in an atmosphere which is purified completely from the noxious fumes of the dream of other justifications (621).
Rather, it is "wholly and utter humility" (618) as regards one's self, without descending into the Colossian error (Col. 2:23) of pessimism or defeatism (619). The humility of faith stems from its monergistic nature (627). This humility essentially involves obedience to Christ: "the free decision of faith [is] made with the genuine necessity of obedience" (620). Faith and the obedience of performing good works are inseparable, as justification and sanctification appear together. The outward activity reflects the inward reality. Faith has "nothing to do with indifferentism, quietism or libertinism" (627). Believers cannot either keep their faith to themselves or behave in a manner disobedient to Christ.
Positively, justifying faith [is] a faith which knows and grasps and realises the justification of man as the decision and act and word of God (630); a monergistic salvation and revelation. A bare acceptance of the historicity of Christ does not pass muster in Barth's definition of faith; reliance on Christ must accompany it. "[Faith] lets itself be told and accepts the fact and trusts in it that Jesus Christ is man's justification" (631).
Barth's discussion of faith itself comes several chapters after his discussion of justification by faith. Whereas, on justification by faith, Barth walks largely (but not altogether) down established Reformed paths, many of his remarks on the nature of faith are fresh and insightful. Faith "is a subjective realisation … a subjectivisation of an objective res" (742). The fall corrupted mankind so that mankind is incapable of faith (745); "in the rivalry between a possible faith and actual sin, faith will always come off second best" (746). Objectively, man is justified in Christ, which reality s/he subjectivises to him-/herself by faith. The believer closes the object circle, whilst the object circle of the unbeliever remains open (742). Faith rests on Christ's restorative work, but the faithless person is also the subject of Christ's work of restoring the broken covenant between God and man -not in a mere theoretical way -but as a present person and for all humanity and all men (irrespective of their attitude to him) (743). Barth views faith itself in the same monergistic terms as he views justification. "In faith, man ceases to be in control", because it comes from Christ (743). The life of faith may be viewed as revolving around an external point, namely Christ, "finding in Him the true centre of himself which is outside himself" and from which point source it grows (744). Because of this absolute monergism, faith is a free choice: there are no other plausible, evident choices for the believer to make; the action of faith is the doing of the self-evident -just because it takes place in the free choice beside which man has no other choice (748). Like Calvin, Barth associates faith with a very high degree of assurance, characterised by an "adamantine, unquestioning and joyful certainty" that is superlatively beyond "the certainty of any other human action" (747). This assurance is obtained when one is regenerated by Christ 2 and one's eyes are now opened to see the new state into which one has entered: that the night has passed and the day dawned; that there is peace between God and sinful man, revealed truth, full and present salvation (748-749).
Indeed, Barth regards the very possession of faith as itself an assurance for the believer, as the evidence of a change that came upon all mankind at the resurrection, which assurance is further confirmed by the testimony of the church (751). Barth's doctrine of assurance, therefore, requires introspection and extrospection from man: man must look at himself to find all humanity (including himself) in a new state, and he must look to Christ as God's revelation and as confirmed by the church through history. Because Barth is not drawn into the agency of the Holy Spirit, and because he views Christ's redemptive work as for all men, it is unclear whether anyone can or should find him-/herself in an original fallen state, the reality of which Barth readily admits.
Faith pervades the whole life of a Christian and all that s/he does (757-758) and should be expressed in three main ways, namely acknowledgement, recognition, and confession (758). In acknowledgement, the believer should be drawn to the church with such compulsion that he must not only accept and respect it but submit to its law and desire to associate himself with it and join it (759).
Barth then introduces his novel doctrine of revelation into the equation.
The active acknowledgement of the Christian faith, in which recognition and confession are included and from which they result, does not have reference to any doctrine, theory, or theology represented by or in the community. … Nor does it have reference -as the Reformers so sharply emphasised -to the histories of the Old and New Testaments, to the prophetic and apostolic theology as such on whose witness the proclamation of the community is founded (760).
Rather, Barth argues that the acknowledging act of faith is acknowledging the community of Christ, in which community one encounters Christ himself (759). This point at which soteriology and ecclesiology meet is clearly a departure from Calvin and one to which we will return in Section 3. Next, the recognising act of faith is the recognition of the sinfulness of self, and also recognising Christ as Lord and beginning to see and understand him (762). This involves contrition for sin in very orthodox Reformed language: [From] the recognition that my pride and fall are vanquished in the death of Jesus Christ … it follows that I am seriously alarmed at myself, that I am radically and heartily sorry for my condition, that I can no longer boast of myself and my thoughts and words and works and especially my heart, but can only be ashamed of them, that I can think of myself and my acts only with remorse and penitence (771).
Recognising one's self thus, faith also has knowledge of Christ as the Word of God revealed in history (774). In this light, the believer begins to recognise positive changes in his/her life: little renovations and provisional sanctifications and reassurances and elucidations will necessarily penetrate the whole man, who in the knowledge of faith has undoubtedly become a new subject (775).
Finally, confession is Barth's third main act of faith. It involves a public declaration of one's faith in Christ whom he has come to acknowledge and recognise. Helpfully, Barth observes: Confessing is not a special action of the Christian. All that is demanded is that he should be what he is. … If he stands out in some respect … if, as a confessor, he has to suffer in some way, it is not because he intends all this. It proceeds from his action, but he himself does not impart to it the quality to provoke it (777).
These words are somewhat profound when one considers that Barth was writing against the backdrop of the Second World War, in which the Confessing Church in Germany suffered heavily for standing out on the side of Christ.
POINTS OF DEBATE
This section outlines the main points of debate between Calvin and Barth that have emerged from our study in Section 2 as touching on the doctrine of faith. In the interests of brevity, we do not effect an extensive comparison with other Reformed theologians, in this instance, as to do so judiciously would require a chapter devoted to each. Instead, in the debated areas, we summarise the positions of Calvin and Barth and attempt to articulate the basic points of disagreement.
We noted that Barth inverts Calvin's order by first discussing justification by faith, and then considering faith independently. The scholastic method first requires a definition of the individual elements before a discussion can proceed to the relationships between the elements: scientifically working "toward the ultimate unknowableness of the totally transcendent God" (Vanderhaar 2012:73). One can only tentatively suggest possible reasons for Barth's inversion of the expected order. Certainly, Barth unfolds an external, monergistic, and intensely Christocentric doctrine of justification by faith. Perhaps, Barth places his discussion of justification by faith before his discussion of faith itself because he wishes to emphasise that the external justification by Christ, on which faith trusts, is necessary before one can either possess that faith or be aware of one's possession. However, this is merely conjecture. It is altogether clear that Barth emphasises justification on the basis of Christ's faith (Rom. 3:22) more than the believer's faith as the instrument (or means) of appropriating the justification accomplished by Christ. Both are correct statements but, when discussing justification, Barth stresses the former whereas Calvin stresses the latter. Furthermore, Yu (2019:312-313) observes an ontological divergence between Calvin and Barth. Whilst Calvin emphasises a theocentric faith on the mediatorial work of Christ as linking man with God, Barth's object of faith is Christocentric -"primarily and only Jesus Christ" (Yu 2019:313).
Calvin asserts that faith rests on Christ as he is revealed in the Bible, whereas Barth argues that faith rests on Christ but not in the biblical record. We have observed (see p. 3) that Calvin's doctrine of revelation is that the Protestant Canon is the inspired special revelation of God; the Bible alone is the ultimate authority and the standard to which we must refer our thoughts, activities, or experiences. Kuiper (1993) shows how Calvin's doctrine of revelation is critical of the authority of extra-biblical special revelations and experiences as subversive to doctrine and piety. Barth emphasises that faith rests not principally on the record of Christ in the Bible, but on a personal experience of Christ after a point of spiritual crisis or conversion; the Bible and the church help bolster faith, by providing historical rationale and continuity, but they are not themselves authoritative. He views the Bible as a "meaningful history" that is a helpful but occasionally erroneous witness of Christ the Word (Sonderegger 2019), and holds that the Bible is not itself God's revelation but contains the substance of God's revelation, who is Christ (Bosman 2019). Whilst Calvin also rejects the primary authority of the church, he and Barth disagree on the relationship between faith and Scripture. Calvin views Christ as found in Scripture, and extra-Scriptural experiences of Christ as valid only in so far as they agree with Scripture. Barth holds to faith in an imminent Christ, and Scripture as valid only so far as it agrees with Christ. Calvin locates faith in the Christ of the Scriptures, whilst Barth locates faith in Christ as contained in the Scriptures. Both these positions contrast with Schleiermacher, who locates faith in Christ in one's personal and shared experience of him (1928:123), and yet there is a sense in which Barth, in his intense Christomony, returns the question to experience: if Christ but not the Bible is my divine authority, how may I be authoritatively convinced of Christ except by my immediate experience of him? Calvin's solution is that Christ as God is indeed our divine authority, has revealed all that we need to know about himself in the Bible, and communicates that knowledge to us especially by his Spirit. But it is hard to pinpoint whether the difference between Calvin and Barth on the locus of faith amounts to a fundamental difference in their definitions of faith, or a difference in emphasis. Might Calvin say of Barth that his faith rests on his own understanding of meeting Christ, rather than on Christ? And might Barth say of Calvin that his faith rests on a book that purports to teach of Christ, rather than on Christ? We suggest that the potential Barthian critique, in this instance, is valid only if the Bible is not the inspired Word of God. If, however, it is, then the revelation of the Word in Christ's incarnation and his writings are equally authoritative, each flowing from the same divine source. John uses λόγος to mean both Christ and the written word: in John 1:1-4, the λόγος is evidently Christ, but in most other Johannine uses, the evident meaning is the written word. 3 The words of the incarnate Christ instruct that, for the New Testament church, faith (πίστις) in himself will be on account of the written word: "[I pray] for them also which shall believe on me through my word" (John 17:20).
Barth's doctrine of corporate justification in Christ flows from his views of hypothetical prospective universal corporate election and reprobation in Christ. This theological system is internally consistent but the original premise is flawed, as all men are not depicted by Scripture as prospectively universally elect in Christ; rather, the eternal decree concerns individuals (Rom. 9:13). Some individuals are elected to make up Christ's body (Col. 1:24) and others are reprobate to make up those upon whom Christ would show his justice (2 Thess. 1:3-10). Although Barth denied that he was a universalist (see Jüngel 1986:44-45), several theologians have argued that this understanding of election amounts to universalism (Crisp 2004). Von Balthasar (1971:163) argues that Barth's grounding of election in God's absolute grace must lead to universal salvation. Berkouwer (1960:229) contrasts Barth with classical universalists in his resolving election and the Gospel offer into Christ who is elected and reprobated for the universality of mankind, but he observes that Barth's system logically leads to universalism.
CONCLUSION
There is much on the doctrine of faith that the neo-orthodox Barth shares with the Reformation Father Calvin. Both are quick to assert a monergistic doctrine of faith: an instrument bestowed by God, whereby a person is enabled to perform a real human action of resting on Christ for his/her salvation, and which cannot be earned or otherwise obtained by his/her own good works. Both repudiate the idea that a person is justified on the mere basis of his/her faith; rather, Christ justifies, and faith receives that justification. Both Calvin and Barth contend that faith is accompanied by a high degree of assurance, although Calvin also acknowledges the perturbations of conscience from the (re)discovery of personal sin and an imperfect trust that admits unbelief.
The theologian's doctrine of revelation has a major impact on his systematics. Comparing Calvin's and Barth's doctrines of faith, the main difference lies in the locus of faith, and that stems from their variant doctrines of revelation. Calvin holds that Christ and the Bible are both God's inspired Word. Barth accepts the former but rejects the latter; for him, the Bible is a historical record containing God's Word - Christ. Calvin is foremost a biblical theologian and his doctrine of faith accords better with the plain teachings of the Bible. Barth is foremost a systematic theologian and his Christomonic system resolves some of the hard questions regarding, for example, the eternal decree, but sometimes at the expense of strict biblical fidelity. We might summarise Calvin's faith in words such as "the instrument whereby I hold to Christ as he reveals himself in Scripture"; Barth's faith is something like "the solid trust that Christ is the centre of my world". The role of the Bible is the principal point of divergence in their definitions of faith. As Clark (2008) has commented, the Reformed church would do well to reflect on the relationships between the Bible, experience, and faith, if it is to recover a Reformed identity for the 21st century.
If one were to imagine how these two theologians do their work, one imagines Calvin at his desk, flicking backwards and forwards through the Bible, and through Augustine and Bernard, taking copious notes and arranging and rearranging these into publishable form. One imagines Barth sitting in an armchair, with a coffee and perhaps a cigar, thinking, then discussing with his amanuensis Lollo, 4 then dictating a paragraph, then moving on to the next sphere of thought. When working with Calvin's writings nowadays, one generally finds a subject where one expects it, dealt with summarily and then left; when working with Barth's writings, one finds him leaving a subject only to return to it several pages or chapters later. If Calvin is precise and methodical so that the reader receives a honed and unambiguous statement of truth, Barth's thoughts flow onto the page as a stream of consciousness in which the loose ends are sometimes left for the reader to complete and develop. | 8,213.8 | 2022-01-01T00:00:00.000 | [
"Philosophy"
] |
Immunomodulating Profile of Dental Mesenchymal Stromal Cells: A Comprehensive Overview
Dental mesenchymal stromal cells (MSCs) are multipotent cells present in dental tissues, characterized by plastic adherence in culture and specific surface markers (CD105, CD73, CD90, STRO-1, CD106, and CD146), common to all other MSC subtypes. Dental pulp, periodontal ligament, apical papilla, human exfoliated deciduous teeth, alveolar bone, dental follicle, tooth germ, and gingiva are all different sources for isolation and expansion of MSCs. Dental MSCs have regenerative and immunomodulatory properties; they are scarcely immunogenic but actively modulate T cell reactivity. In vitro studies and animal models of autoimmune diseases have provided evidence for the suppressive effects of dental MSCs on peripheral blood mononuclear cell proliferation, clearance of apoptotic cells, and promotion of a shift in the Treg/Th17 cell ratio. Appropriately stimulated MSCs produce anti-inflammatory mediators, such as transforming growth factor-β (TGF-β), prostaglandin E2, and interleukin (IL)-10. A particular mechanism through which MSCs exert their immunomodulatory action is via the production of extracellular vesicles containing such anti-inflammatory mediators. Recent studies demonstrated MSC-mediated inhibitory effects both on monocytes and activated macrophages, promoting their polarization to an anti-inflammatory M2 phenotype. A growing number of trials focusing on MSCs to treat autoimmune and inflammatory conditions are ongoing, but very few use dental tissue as a cellular source. Recent results suggest that dental MSCs are a promising therapeutic tool for immune-mediated disorders. However, the exact mechanisms responsible for dental MSC-mediated immunosuppression remain to be clarified, and impairment of dental MSCs' immunosuppressive function in inflammatory conditions and aging must be assessed before considering autologous MSCs or their secreted vesicles for therapeutic purposes.
INTRODUCTION
Mesenchymal stromal cells (MSCs) are a subset of multipotent cells present in tissues of mesenchymal origin, mainly responsible for their regeneration. MSCs were first identified as a specific subset of spindle-shaped cells in the bone marrow, characterized by adherence to plastic under standard culture conditions, with the potential for clonogenic proliferation. In 2006, the Mesenchymal and Tissue Stem Cell Committee of the International Society for Cellular Therapy defined three minimal criteria for MSCs: plastic adherence, ability to differentiate into chondroblasts, osteoblasts, and adipocytes in vitro, and the presence of several specific surface markers, such as CD105, CD73, and CD90 [1]. More recently, the nomenclature has been revised, and novel specific surface molecules have been identified: MSCs are now defined also as STRO-1, CD106, and CD146 positive cells [2][3][4][5].
All MSC subpopulations not only share self-renewal capabilities and multipotency but also display immunomodulatory properties [16,17].
As for other types of MSCs, dental MSCs are currently widely studied for their immune properties [23]. Here, we briefly describe the immunomodulating properties typical for each subset of MSCs (see Figure 1).
DENTAL PULP MESENCHYMAL STROMAL CELLS
DPMSCs were the first human dental MSCs identified in 2000 by Gronthos et al. [24]. DPMSCs are today widely used in clinical trials for regenerative purposes. Many groups already demonstrated that DPMSCs are capable of T cell inhibition and therefore have the potential for modulating T cell reactivity associated with both autoimmune diseases and allogeneic tissue transplantation [25]. Inhibition of peripheral blood mononuclear cell proliferation in vitro is thought to occur via the production of soluble factors secreted by DPMSCs, induced by interferon (IFN)-γ. The immunosuppressive effect of DPMSCs was alternatively shown to be triggered by activation of Toll-like receptors (TLRs) through the upregulation of specific cytokines and growth factors, such as IL-6 and TGF-β [26]. In addition, DPMSCs can induce apoptosis of activated T cells via direct cell-to-cell interactions, mediated by the Fas ligand [27]. DPMSCs also interact with activated neutrophils: a recent article demonstrated enhanced IFN-γ and IL-6 production after coculturing. Moreover, rapid and significant commitment toward the osteogenic lineage is achieved by neutrophil-exposed DPMSCs [28]. DPMSCs' immunomodulatory ability was deeply investigated by Martinez and co-authors in in vitro-induced hypoxic conditions. DPMSCs were not only shown to dampen dendritic cell (DC) differentiation from monocytes but also efficiently recruited monocytes with immunosuppressive potential, as demonstrated by the M2 phenotype of macrophages and high levels of IL-10. Moreover, DPMSCs were demonstrated both to determine impairment in natural killer (NK) degranulation and to have enhanced resistance to NK cell-mediated lysis. Lastly, DPMSCs' proangiogenic properties were also described [29]. Several authors have hypothesized the presence of different subpopulations with different activity among DPMSCs [30]; whether the immunosuppressive phenotype strictly correlates with the presence of specific surface markers still needs to be determined.
MESENCHYMAL STROMAL CELLS FROM HUMAN EXFOLIATED DECIDUOUS TEETH
MSCHEDTs are the DPMSC's counterpart in deciduous teeth, discovered in 2003 by Miura et al. [31]. MSCHEDTs significantly inhibit the differentiation of the pro-inflammatory subset of T helper 17 (Th17) cells and promote the induction of regulatory T cells (Tregs) ex vivo, being even more efficient than BMSCs for Th17 inhibition [32]. Their immunomodulatory effect has already been demonstrated in canine models of muscular dystrophy [33].
In murine models, systemic infusion of MSCHEDTs was able to effectively reverse systemic lupus erythematosus (SLE)-associated manifestations, probably because of a shift in the Treg/Th17 cell ratio. Potentially, their efficacy in SLE models could also be due to clearance of apoptotic cells by MSCHEDTs, as already demonstrated for other types of MSCs [34].
PERIODONTAL LIGAMENT MESENCHYMAL STROMAL CELLS
PDLMSCs were isolated and described in detail for the first time by Seo et al. [35] and Trubiani et al. [36]. PDLMSCs, similar to other MSCs of different origins, are sensitive to specific stimuli. One such stimulus for the expression of immunomodulatory properties is a coculture with peripheral blood mononuclear cells and specific cytokines such as IFN-γ [37,38]. After in vitro exposure to IFN-γ, the expression of hepatocyte growth factor, indoleamine 2,3-dioxygenase (IDO), and TGF-β was upregulated, leading to immunosuppression [39]. PDLMSCs were also shown to induce T cell anergy through the secretion of prostaglandin E2 (PGE2) [40].
In animal models of experimental autoimmune encephalomyelitis, decreased signs of inflammation and demyelination in the spinal cord are observed after injection of PDLMSCs, both through the increased production of neurotrophic factors and the suppression of inflammatory mediators [41]. The cell-conditioned medium reduced inflammatory damage in the same model, and purified extracellular vesicles from PDLMSCs mediated similar effects [42]. The vesicles were found to contain the anti-inflammatory cytokines IL-10 and TGF-β. However, PDLMSCs from inflamed periodontium were shown recently to have significantly diminished inhibitory effects on T cell proliferation, compared with cells from healthy tissue, mainly due to a reduced induction of Tregs [43]. These findings may be relevant to the pathogenesis of periodontitis and should direct the efforts toward developing therapeutics for periodontitis by exploiting immunomodulation.
MESENCHYMAL STROMAL CELLS OF APICAL PAPILLA
The apical papilla is the part of the soft tissue found at the apex of developing teeth. MSCAPs were discovered in human immature permanent teeth in 2006 by Sonoyama et al. [44,45].
Relative to DPMSCs, MSCAPs show higher proliferation rates and mediate more efficient regeneration of the dentin matrix. Thus, developing dental tissues are probably a better source of immature stromal cells. MSCAPs are scarcely immunogenic and inhibit mixed lymphocyte reactions mainly through the secretion of soluble factors [46]. Conveniently, cryopreservation does not seem to alter MSCAPs' immune properties [47].
ALVEOLAR BONE MESENCHYMAL STROMAL CELLS
Recently, a unique population of MSCs referred to as ABMSCs has been isolated from the alveolar bone [48]. The isolation procedure is considered particularly easy and feasible when performed during implant positioning. These cells morphologically and functionally resemble the other types of dental MSCs described. Very recent studies confirmed in vitro ABMSCs' immunosuppressive effects both on monocyte and T cell activation. Moreover, ABMSCs were found to induce polarization of macrophages toward an anti-inflammatory phenotype (M2) and were able to secrete IL-6 and MCP-1 [49].
DENTAL FOLLICLE PRECURSOR CELLS
The dental follicle (DF) is a vascular fibrous sac containing the developing tooth and its odontogenic organ before eruption [50]. [51]. Recent studies have shown that DFPCs can inhibit the mixed lymphocyte reactions and elicit macrophage M2 polarization, mainly through TGF-β production [52]. Moreover, treatment with TLR3 and TLR4 agonists potentiates TGF-β and IL-6 secretion [53]. These characteristics make DFPCs promising candidates for the treatment of chronic inflammatory conditions.
TOOTH GERM PROGENITOR CELLS
TGPCs were identified by Ikeda et al. in the dental mesenchyme of the third molar tooth germ during the late bell stage. TGPCs have been successfully transplanted in rat models of chronic hepatitis, preventing the progression of liver fibrosis and contributing to the normalization of liver function [54]. Thus far, little is known about TGPCs' immunomodulatory mechanisms.
GINGIVA-DERIVED MESENCHYMAL STROMAL CELLS
Zhang et al. [55] identified human gingiva-derived MSCs in 2009. GMSCs are easily isolated and rapidly expanded ex vivo, thus potentially representing an optimal source of MSCs in the clinical setting. GMSCs have been shown to efficiently inhibit T cell proliferation in response to mitogen stimulation and to induce IDO, IL-10, cyclooxygenase 2, and inducible nitric oxide synthase through IFN-γ secretion, thereby exerting a wide-ranging anti-inflammatory and immunomodulating action [56].
In animal models of contact hyper-reactivity and in an autoimmune arthritis model, systemic infusion of GMSCs attenuated pathological damage and suppressed Th17 activity, with a significant increase both in Tregs differentiation and IL-10 production [57]. Moreover, GMSCs elicited M2 polarization of macrophages and decreased Th17 cell expansion [58]. In addition, murine models of chemotherapy-induced oral mucositis showed significant clinical improvement after GMSC administration: in this setting, increased levels of manganese superoxide dismutase and hypoxia-inducible factors 1 and 2α were associated with lower rates of oxidative stress-induced apoptosis of epithelial cells [29].
ADDITIONAL DATA ON DENTAL MESENCHYMAL STROMAL CELLS AND OTHER CELL SOURCES
Our paper focused on MSC interaction mainly with T cells and macrophages because most of the existing data on dental MSCs are restricted to these cell subtypes. However, MSCs in general have been deeply investigated for their interactions with other immune cells, with the majority of data coming from bone marrow MSCs [59].
MSCs notably inhibit the maturation of DCs and can promote plasmacytoid DC differentiation, with subsequent Th2 polarization of the immune response [60,61]. Both PGE2 and IL-6 secretions have been postulated as possible mechanisms for DC modulation by MSCs [62,63]. NK cells also interact with MSCs and are sometimes responsible for their death through cell lysis [64]. However, although only partially interfering with activated-NK activity, MSCs can block the proliferation of resting NKs. Moreover, MSCs prevent DC-mediated induction of T-cell effector functions, IDO and PGE2 being key mediators in this setting [65]. MSCs are also capable of B cell inhibition and can block antibody production [66]. Programmed-death-1 pathway, CCL2 production, and Blimp-1 inhibition seem to be responsible for this action [67,68]. Finally, MSCs are demonstrated to promote the proliferation of CD4+ CD25+ FOXP3+ Tregs both in vitro and in vivo [69][70][71].
Although those data have not entirely been confirmed for dental MSCs, it is possible to hypothesize similar mechanisms underlying their immunomodulatory action. In fact, several comparative studies between dental MSCs and MSCs from other sources have been performed in recent years, but although significant differences were found in proliferative potential and both regenerative and differentiating properties, none of them focused on immunomodulatory capabilities [72,73].
DISCUSSION
MSCs have already been used as cell-based immunosuppressive therapies for various disorders, including neurologic, ocular, oral, cutaneous, cardiovascular, and autoimmune diseases [74,75]. A growing number of clinical trials are using MSCs for therapeutic interventions in severe degenerative and inflammatory disorders. At the time of writing, more than 1,000 clinical trials were registered worldwide at ClinicalTrials.gov [4,76], with MSCs becoming a powerful new tool for effective immunosuppression that avoids many unwanted adverse effects of conventional drugs [77]. Some types of dental MSCs have been shown to share both regenerative and immunoregulatory potentials, which are becoming extremely relevant for tissue engineering and regenerative medicine [78]. However, very few studies have explored their interactions with immune cells in any depth, and much less is known about the possible mechanisms of their activity. Taking into account the different types of MSCs isolated from teeth, including DPMSCs, MSCHEDTs, GMSCs, PDLMSCs, ABMSCs, DFPCs, and TGPCs, representing an easily accessible source of multipotent cells for clinical applications [79,80], we are still far from a systematic investigation and comparative appreciation of their immunomodulatory properties [81].
One difficulty in such studies resides in the complexity of the stroma, whose tissue-resident cells interact in many ways with immune cells. The characterization of stromal subsets, which are often identified by combinations of markers that are not cell type-specific, has not been extensively carried out [82]. These subsets of mature stromal cells and MSCs from different tissues (bone marrow, adipose tissue, and umbilical cord, better studied so far) both promote active responses and suppress immune effector cells through regulatory circuits. Tissue stromal cells under inflammatory conditions drive the formation of immune cell aggregates, termed tertiary lymphoid structures [83], which disappear on a resolution of inflammation [84]. These structures actively drive inflammation, autoimmune responses, and autoantibody production, as well as promoting cancer progression, as prominently described in the lymph node stroma [85], but they also harbor tolerogenic potential, which depends on the inflammatory environment to be licensed [86].
Several mechanisms and molecules have been proposed for the immunoregulatory activity of MSCs in general, involving both cell contact and soluble mediators [87]. These have a protective role and stimulate growth and survival through paracrine secretion of bioactive molecules, collectively defined as the secretome. In many instances, the secretome has been shown to account for the effects of MSCs, so its exploitation may avoid the limitations associated with stem cell therapy [88,89]. The secretome also contains extracellular vesicles (EVs) [90]. These released membrane vesicles, including exosomes, microparticles, microvesicles, and apoptotic bodies, can be regarded as a dynamic extracellular vesicular compartment, strategic for their paracrine or autocrine biological effects. They can contribute to tissue regeneration, but with their content rich in cytokines, chemokines, enzymes, growth factors, microRNAs, and other molecules, they may also be responsible for controlling interactions with immune cells, ensuring prevention of excessive tissue fibrosis, stimulation of angiogenesis, and immunomodulatory effects [91]. However, as with MSCs from different sources and different culture and passage conditions, so also for the secretome: differences in production protocols, cell source, and cellular age all impact its composition and anti-inflammatory action [92]. There is a need for focused mechanistic studies and standardized functional assays in the area of immunomodulation by MSCs, because they are usually assessed by in vitro tests of inhibition of T lymphocyte proliferation, and only a few studies compare MSCs from different tissue sources, none at present with dental MSCs [93]. There is general agreement that pro-inflammatory environments are not permissive for endogenous stem and progenitor cells to initiate regenerative processes, because stem and progenitor cells require a tolerogenic niche to survive and to promote repair and regeneration. MSCs from teeth have a central role in dampening inflammation locally, as in periodontitis [94], and they achieve this effect through their secretome [93] and EVs. The cytokine content and immunoregulatory effects of the latter are variable across different diseases, so that the MSC-EV fraction should be carefully evaluated in the context of the condition studied for the best therapeutic potential.
Although there are no direct studies of MSC interactions with neutrophils in tissues, MSCs inhibit neutrophil apoptosis, although they have no inhibitory effect on their phagocytic and chemotactic activity [95]. MSCs generally reduce the activation of innate immunity [92], and many of their effects are due to the secretion of IL-6, PGE2, and IL-17. Stromal cells also play a role in the induction of myeloid-derived suppressor cells, which can be a pathological differentiated type of neutrophil, in several conditions, including cancer, sepsis, and viral infections [96]. MSCs and their EVs have been shown to induce conversion of pro-inflammatory M1 into M2 macrophages, and EVs released by M2 macrophages can subsequently promote Treg formation [97][98][99]. MSCs also modulate immune cell function through inhibition of dendritic cell maturation and suppress the functions of T lymphocytes, B lymphocytes, and NK cells. Many reviews have appeared on this issue of immunosuppression related to clinical uses, for example [100][101][102].
Pulp-derived MSCs have been proposed for treating systemic disorders [87], as have other types of MSCs, particularly in the area of neuroinflammatory and neurodegenerative diseases [103,104], whereas EVs have been advocated for the control and therapy of autoimmunity [105]. This promising outlook is certainly reinforced by progress in transcriptomics and single-cell analysis of MSCs [106,107], revealing different subsets and mechanisms of action. It is therefore not surprising that MSCs or their exosomes have recently been suggested as a treatment for severe COVID-19 [108][109][110][111]. This potential therapeutic strategy has been successfully used in a few reported cases [112] and is mainly based on the known immunomodulating actions of MSCs in acute respiratory infections, through induction of Tregs [113] and their ability to counteract proinflammatory cytokines [114].
Recent studies have highlighted that MSC aging may limit their function and therapeutic potential, with some evidence for reduced immunosuppressive activity [115][116][117]. Senescent MSCs show decreased proliferative activity, smaller MSC-EV size, and lower production of cytokines and chemokines; their ability to inhibit T cell proliferation is impaired, while they fail to suppress NK cells, B lymphocytes, and macrophages [76,118]. To address this issue, changes in the expression profiles (including transcriptomic, proteomic, epigenetic, and noncoding RNAs) of senescent MSCs have been explored, and some rejuvenation strategies devised, starting from the modulation of the microenvironment under hypoxic conditions [119,120]. Data mining of several genetic datasets, coupled with powerful bioinformatics applications, has revealed that upregulation of HLA class II antigen expression is central to the changes of aged MSCs, causing a pro-inflammatory phenotype and a decreased immunosuppressive function [121]. Again, little is known about replicative senescence (and its markers) and other effects of aging on dental MSCs.
The immunomodulatory properties of MSCs, at variance with other stem cells, contribute greatly to their therapeutic effects not only in immune-mediated diseases but also for the repair of tissue damage. This has been chiefly verified in several neurodegenerative disorders, such as Alzheimer's disease, amyotrophic lateral sclerosis, and Parkinson's disease, as well as in cerebrovascular damage and autoimmune diseases such as multiple sclerosis. Glial cells, activated in these conditions, constitute the main targets of the immunosuppressive action of MSCs [103]. Other cell types, predominantly macrophages, but also dendritic cells, induce a different inflammatory environment, in response to which MSCs display regulatory mechanisms tailored to the local situation and, thus, have been used for the treatment of various conditions such as graft-vs.-host disease [122], systemic lupus erythematosus [123], liver cirrhosis [124], and inflammatory bowel disease [125]. Their potential use in treating fibrotic and inflammatory diseases such as systemic sclerosis, chronic obstructive pulmonary disease, pulmonary fibrosis, and also severe asthma and COVID-19 should now be the logical next step forward.
These immunoregulatory properties and long-term stability of dental MSCs are of paramount importance for developing their application to autoimmune and other inflammatory conditions, as well as for continued renewal in regenerative medicine, as dampening inflammatory reactions promotes proliferation and differentiation of MSCs. Present data suggest that dental MSCs may be a useful source of MSCs for treating immunemediated diseases.
However, the exact mechanisms responsible for dental MSC-mediated immunosuppression remain to be clarified. Moreover, it is not known whether dental MSCs' immunosuppressive function is impaired under local as well as systemic inflammatory conditions. This point is crucial to understanding whether autologous PDLC could be a reasonable source of MSCs for the treatment of autoimmune and other disorders. Despite the promising results achieved with dental MSCs and immunomodulation, this area of research needs to be methodically investigated.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
RP and APa: conceptualization. FD and APi: literature revision and data curation. RP: supervision. APa, OT, and RP: writing - original draft preparation and writing - review and editing. All authors have read and agreed to the published version of the manuscript. | 4,618.6 | 2021-03-31T00:00:00.000 | [
"Biology",
"Medicine",
"Materials Science"
] |
ON THE BIFURCATION OF LIMIT CYCLES DUE TO POLYNOMIAL PERTURBATIONS OF HAMILTONIAN CENTERS
We study the number of limit cycles bifurcating from the period annulus of a real planar polynomial Hamiltonian ordinary differential system with a center at the origin when it is perturbed in the class of polynomial vector fields of a given degree.
Introduction and statement of the main results
In the qualitative theory of real planar polynomial differential systems one of the main problems is the determination of limit cycles of a given vector field. The notion of limit cycle goes back to Poincaré, see [12]. He defined a limit cycle for a vector field in the plane as a periodic orbit of the differential system isolated in the set of all periodic orbits. The first works in determining the number of limit cycles of a given vector field can be traced back to Liénard [9] and Andronov [1]. After these works, the detection of the number of limit cycles of a polynomial differential system, intrinsically related with the so-called 16th Hilbert problem [7], has been extensively studied in the mathematical community, see for instance the books [3,14] and the papers [5,6,10,11].
One of the main tools for producing limit cycles is perturbing a system having a center. The notion of center goes back to Poincaré, see [12], who defined a center for a vector field on the real plane as a singular point having a neighborhood filled with periodic orbits with the exception of the singular point. If a system has a center, then when we perturb it we may have a limit cycle that bifurcates in the perturbed system from some of the periodic orbits forming the center. This tool is one of the most effective ways of producing limit cycles, but it requires the knowledge of the first integral of the unperturbed system (the one having a center). It is well known that the determination of first integrals is also a very hard problem. This is why in this paper we will focus on an unperturbed planar differential system for which we know a first integral. More precisely, in this paper we consider the planar polynomial Hamiltonian system where H(x, y) = (x^2 + y^2)/2 and the first nonzero coefficient c_i is positive. We assume c_n ≠ 0 for convenience, so that the degree of system (1) is 2n − 1. Note that H is a first integral of system (1), and since H has a local minimum at the origin, system (1) has a center at the origin.
In order to simplify the notation we will write system (1) as We note that the circles x^2 + y^2 = constant on which G(x, y) ≠ 0 are periodic orbits, and that the ones on which G(x, y) = 0 are filled with singular points of the differential system (2), and consequently of the differential system (1). Let η > 0 be the smallest real number such that the circle x^2 + y^2 = η^2 is filled with singular points, if such a circle exists; otherwise η = +∞. We also note that all the singular points of the differential system (2) except the origin lie on circles x^2 + y^2 = constant where G(x, y) = 0. So the period annulus of the center at the origin (i.e. the connected set formed by the union of all the periodic orbits surrounding the origin and having the origin in its inner boundary) is the annulus {(x, y) ∈ R^2 : x^2 + y^2 < η^2}.
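For concreteness, the following is a plausible sketch of systems (1) and (2), inferred from the stated first integral H(x, y) = (x^2 + y^2)/2, the degree 2n − 1, and the special cases G = 1 studied in [8] and G = (x^2 + y^2)^{m−1} studied in [2]; it is an assumption, not the paper's verbatim equations.

```latex
% Sketch of systems (1) and (2) (assumed form; the exact printed notation may differ).
% System (1):
\dot{x} = -y \sum_{i=1}^{n} c_i \,(x^2+y^2)^{i-1}, \qquad
\dot{y} =  x \sum_{i=1}^{n} c_i \,(x^2+y^2)^{i-1}.
% System (2), where G(x,y) = \sum_{i=1}^{n} c_i (x^2+y^2)^{i-1}:
\dot{x} = -y\, G(x,y), \qquad \dot{y} = x\, G(x,y).
```

Under this assumed form, x\dot{x} + y\dot{y} = 0, so H is indeed a first integral; the degree is 2(n − 1) + 1 = 2n − 1 when c_n ≠ 0; and the circles on which G vanishes consist entirely of singular points, matching the description above.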
First we will study the number of limit cycles that appear when system (1) (or (2)) is perturbed in the class of all polynomial differential systems in the form where A and B are arbitrary real polynomials such that Second, we will study the number of limit cycles that appear when system (1) (or (2)) is perturbed in the class of all polynomial differential systems in where A_i and B_i are arbitrary real polynomials such that Let η_0 = η if η < +∞, and let η_0 < +∞ if η = +∞. Then we can parameterize the set of periodic orbits surrounding the origin and intersecting the that converges for sufficiently small ε. The function M_i(h), defined for h ≥ 0, is called the i-th Melnikov function, and each positive simple zero of the first non-vanishing Melnikov function corresponds to a limit cycle of system (4).
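The expansion referred to here is, in the usual treatment, the power-series expansion in ε of the displacement (first-return) function along a transversal section parameterized by the level h of H; the following standard form is a sketch under that assumption, not the paper's verbatim equation.

```latex
% Standard expansion of the displacement function in the small parameter
% (assumed notation; h parameterizes the periodic orbits via the level of H):
d(h,\varepsilon) = \sum_{i \ge 1} \varepsilon^{i} M_i(h)
                 = \varepsilon M_1(h) + \varepsilon^{2} M_2(h) + \cdots
```

In this setting, for ε small each positive simple zero h^* of the first non-vanishing M_i corresponds to a limit cycle of the perturbed system bifurcating from the periodic orbit H = h^*, as stated above.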
In order to study the limit cycles that bifurcate from an unperturbed system when we perturb it, the vast majority of the papers study the simple zeros of M_1(h), assuming that it is the first non-vanishing Melnikov function. There are far fewer papers studying the simple zeros of M_2(h) under the assumption that it is the first non-vanishing Melnikov function, and very few papers which study the simple zeros of Melnikov functions of higher order. In this paper we will study the simple zeros of every Melnikov function M_k, for an arbitrary k, assuming that it is the first non-vanishing Melnikov function.
As far as we know there are only two papers that provide a similar result, working with Melnikov functions at any order and perturbing the linear center ẋ = −y, ẏ = x. The first one goes back to Iliev [8], where he was the first to do so. Due to the fact that this is an extremely hard problem involving very difficult computations, he started with the linear center, i.e., system (2) with G = 1. Following the ideas of Iliev, in [2] the authors study the number of zeros of the Melnikov function at any order for system (2) in the case in which G = (x^2 + y^2)^{m−1}. Our system (2) generalizes the systems studied in [8] and [2] because the unperturbed part is taken to be more general, and we extend their results to this more general situation.
Here [p] denotes the integer part of the real number p.
The second main result of the paper is the following. When G = 1, the bounds obtained in Theorem 2 coincide with the upper bounds obtained in [8]. When n = m − 1 and c_i = 0 for i = 1, . . . , m − 1, system (2) coincides with the one studied in [2]. However, the upper bounds provided by Theorem 2 are larger than those obtained in [2], due to the fact that our G is full. Since a particular case of our system is the system studied in [2], where the authors obtain a better bound and prove that the bounds are not reached, and since the bounds in Theorem 1 are the same as those in statement (ii) of Theorem 2, we believe that the bounds in Theorem 2 are not going to be reached.
The paper is divided as follows. In section 2 we introduce three lemmas that will be used in the proofs of Theorems 1 and 2. The proof of Theorem 1 is given in section 3 and the proof of Theorem 2 is given in section 4.
Preliminary results
We first present a lemma, proved in [8], which will be a key factor in calculating the Melnikov functions.
Proof. The proof follows directly from Lemma 3.
Corollary 5. Let τ be a polynomial one-form of degree s. Then the one-form τ/G^l can be expressed as such that Q(x, y), q(x, y) and α(h) are polynomials of degree s + 1, s − 1 and [(s − 1)/2], respectively.
Proof. Choose the polynomials Q, q and α as in Lemma 3. Then the proof follows by substitution.
We rewrite system (4) as where ω = A(x, y)dy − B(x, y)dx is a polynomial one-form of degree m. We will calculate the Melnikov functions for system (4) using the following well-known result due to Françoise [4] and Roussarie [13].
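The well-known result of Françoise [4] and Roussarie [13] referred to here is stated, in its generic form, for a perturbation of the level curves of H written as dH + εϖ = 0, where ϖ is a one-form; for system (4) one would expect ϖ to be ω/G up to sign, but that identification is an assumption. The sketch below therefore gives only the generic statement, and its notation may not match Lemma 6 exactly.

```latex
% Generic Francoise-Roussarie recursion (sketch; hypotheses: ovals
% \gamma_h \subset \{H = h\} and M_1(h) = ... = M_{k-1}(h) \equiv 0):
M_k(h) = \oint_{\gamma_h} \varpi_k, \qquad
\varpi_1 = \varpi, \qquad
\varpi_{i+1} = r_i \,\varpi
\quad \text{with} \quad \varpi_i = dS_i + r_i \, dH, \quad i = 1,\dots,k-1.
```

In particular, M_1(h) = ∮_{γ_h} ϖ is the classical first-order (Abelian-integral) Melnikov function, and the functions r_i = p_i/G^{2i} of Lemma 8 below would play the role of the decomposition terms r_i in a recursion of this kind.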
We rewrite system (5) as where ω_i = A_i(x, y)dy − B_i(x, y)dx is a polynomial one-form of degree m for each i = 1, 2, . . . We will calculate the Melnikov functions for system (5) using the following well-known result due to Françoise [4] and Roussarie [13].
Lemma 7. For system (5) we have where where and r_{k−i} is determined successively by Ω_i = dS_i + r_i dH for i = 1, . . . , k − 1.
Proof of Theorem 1
In order to prove Theorem 1 we first prove the following lemma, where we look at the r_i of Lemma 6 in more detail.
Lemma 8. Assume M_1(h) = . . . = M_{k−1}(h) ≡ 0 for some k ≥ 2, and define p_0 = 1. For i = 1, . . . , k − 1 let p_i be the polynomial such that the function r_i of Lemma 6 is r_i = p_i/G^{2i}, and let Q_i and q_i be defined satisfying p_{i−1}ω = dQ_i + q_i dH, see Lemma 3. Then p_i is a polynomial of degree i(2n − 3) + im given by
Proof. We know by Lemma 6 that and by Lemma 3 that where Q_1, q_1 and α are polynomials of degree m + 1, m − 1 and [(m − 1)/2], respectively. Using this information and induction on k we shall prove that p_i is a polynomial of degree i(2n − 3) + im given by p_i = q_i G + (2i − 1)Q_i J for i = 1, . . . , k − 1.
"Mathematics"
] |
IOTFlood: hardware and software platform using internet of things to monitor floods in real time
Floods are responsible for a high number of human and material losses every year. Monitoring of river levels is usually performed with radar and pre-configured sensors. However, a major flood can occur quickly, which justifies the implementation of a real-time monitoring system. This work presents a hardware and software platform based on the Internet of Things (IOTFlood) that generates flood alerts for the agencies responsible for monitoring by sending automatic messages about the situation of rivers. The research design involved laboratory and field scenarios, simulating floods using mockups, and the system was later tested on the Mundaú River, in the state of Alagoas, Brazil, where flooding episodes have already occurred. As a result, a low-cost, modular, and scalable IoT platform was achieved, in which sensor data can be accessed through a web interface or smartphone without the need for existing infrastructure at the installation site; the IOTFlood solution was built using affordable hardware, open-source software, and free online services for viewing the collected data.
INTRODUCTION
Floods are responsible for a high number of human and material losses every year. It is estimated that, worldwide, 20 million people suffer annually from the effects of this environmental phenomenon, including 270 thousand in Brazil (Christofidis et al., 2019). The adverse effects of seasonal floods in riverside communities and in cities bordering river basins, such as deaths, destruction of houses and crops, loss of material goods, and long periods of homelessness, are the main social consequences caused by the alteration of the hydrological regime (Cristaldo et al., 2018; Franco et al., 2018).
However, it is possible to reduce these impacts through the adoption of preventive measures, which involve the constant monitoring of river levels very close to inhabited areas, using equipment that performs the measurement of data in a continuous and reliable way. Monitoring can be performed manually, with daily records from observers using linimetric rulers (staff gauges) available at hydrometric stations to measure rainfall volume and river level and flow, or automatically through float, ultrasonic, and water-column pressure sensors, which remotely transmit data to software (ANA, 2011; Pereira et al., 2020).
Recently, the Internet of Things (IoT) has emerged as a global computing infrastructure through the adoption of technologies that transmit and convert data obtained from different devices, including digital sensors, allowing the autonomous exchange of useful information through technologies such as radio frequency identification (RFID), wireless sensor networks (WSN) and cloud computing (Farooq et al., 2015).
The prediction of natural disasters, such as floods, is one of the IoT applications, as it allows adequate monitoring of variables inherent to the environment. However, there are two significant disadvantages of using smart sensors across isolated areas such as riverbeds: component cost and battery life, since they commonly require periodic supervision to check their status and replace batteries (Farooq et al., 2015). Fortunately, the evolution of sensor engineering has allowed the creation of projects with low implementation and maintenance costs.
In Brazil, the northeastern region is considered the most economically vulnerable area and also one of the most susceptible to flooding episodes during the rainy season (Pereira et al., 2018). In the states of Alagoas and Pernambuco, the hydrographic basin of the Mundaú River stands out in the region. With a surface area of 4,126 km², 52.2% of which is located in the state of Pernambuco and 47.8% in the state of Alagoas, it is the water system most used in irrigation, fishing, supply and transportation activities (Freire and Natenzon, 2019).
Approximately half of the municipalities that make up this hydrographic basin are located in the state of Alagoas. Most of the urban areas of these municipalities are located on plains, which have repeatedly suffered from floods (1914, 1941, 1969, 1988, 1989, 2000, 2010), revealing the vulnerability of the region to events of this nature (Monte et al., 2016). According to Fragoso Júnior et al. (2010) and Freire and Natenzon (2019), in the last extreme episode, which occurred in 2010, after just 3 days of heavy rain, 26 municipalities in Alagoas were declared in a state of public calamity and another 34 in a state of emergency, causing 55 deaths and leaving almost 150 thousand people homeless. Such a scenario indicates the need for constant and effective monitoring, with accurate and real-time hydrological measurements along the river.
According to Alagoas (2020), the hydro-meteorological stations present in the state include twenty-six river monitoring points using radar and level sensors, which send hourly information on level and flow via GPRS and satellite for assessing the intensity of rising rivers. However, a major flood can occur in a few minutes, justifying the implementation of a real-time monitoring system. In view of this gap, it is understood that environmental technologies and innovations using digital sensors, integrating hardware and software, can be inserted into the river monitoring scenario in order to generate instant data, helping decision-making by competent bodies.
For these reasons, the Mundaú River served as a test scenario for this research, which features a flood monitoring system, called IOTFlood, composed of surface sensors that collect data on atmospheric pressure, river elevation level, rain occurrence, temperature and geolocation, represented in Figure 1. In the proposal, a low energy consumption microcontroller was used to read data from the sensors and transmit them via LoRaWAN® (an open-standard communication protocol that governs LoRa® networks and uses radio frequency for long-range transmissions with minimum energy consumption) to a remote gateway, connected to the Internet, which sends the received data to a server in the cloud. This station, in addition to having an open implementation architecture with an integrated low-cost software and hardware solution, has the following advantages: (i) possibility of installation in remote locations without access to the Internet or cellular signal; (ii) long transmission range (reaching tens of kilometers); (iii) access to data over the Internet; (iv) creation of graphs showing data variation, with the possibility of generating alarms to notify situations requiring attention, making it possible to take preventive actions; (v) easy installation and use; and (vi) long battery life, eliminating the need for power banks to achieve this longevity.
Before the development of the IOTFlood solution, a literature review was carried out in search of related works in order to support the appropriate choice of its hardware and software components.
In the work of Tose (2012), a prototype was developed for monitoring sewage stations with an Arduino connected to temperature, humidity and gas sensors, using ZigBee wireless communication technology for data transmission. However, this is a high-cost approach with low reach (due to the operating frequency), very sensitive to interference from other devices, with low battery autonomy and dependence on a computer located near the prototype's installation site. IOTFlood differentiates itself by having a significantly longer transmission range, running on battery and being able to operate without the need for a nearby computer. Helal (2018) described the construction of a prototype, called EstAcqua, of an environmental and oceanographic station with low-cost surface and submerged sensors and low-cost hardware, using wireless data transmission via LoRaWAN and open source software, with low power consumption and long transmission range. EstAcqua has illuminance, atmospheric pressure, humidity and external and underwater temperature sensors. IOTFlood differentiates itself by being applied to the monitoring of floods in lagoons and rivers, using location (GPS), rainfall, ultrasonic level, temperature and pressure sensors as references.
Oliveira (2017) reported the creation of a geomagnetic station using Internet of Things concepts with a web interface for viewing data online. The magnetometer has temperature sensors to help with the measurements made and stores them in a database on a micro SD card. IOTFlood differentiates itself by using LoRaWAN for data transmission, having low energy consumption and being focused on environmental monitoring. Tzortzakis et al. (2017) used ATmega328 microcontrollers with LoRa wireless transmission cards to collect environmental data in cities and transmit them via LoRa to a gateway, which in turn forwards the data to an IoT server using GPRS. The devices that collected data were powered by a battery and a solar panel. IOTFlood is differentiated by the use of nodes exposed to the environment, transmission of data close to the ground and the use of LoRaWAN for sending the collected data.
As observed in the aforementioned works, existing solutions usually have short range and depend either on a nearby computer to obtain data from the sensors or on an external power source to energize the equipment. Thus, a differential of the architecture proposed in this work is its adherence to IoT, where data from sensors can be accessed through a web interface or cell phone, without the need for existing infrastructure at the IOTFlood installation site, using low-cost hardware, open source software and free online services for viewing the collected data.
Section 2 presents the research methodological procedure, which consisted of laboratory experiments and field tests, as well as hardware and software components used. Section 3 presents the IOTFlood platform and discussions on collected data. Finally, conclusions and suggestions for future works are presented.
MATERIALS AND METHODS
This applied research adopted the Design Science Research Methodology (DSRM) as its methodological procedure (Peffers et al., 2007), which consists of the production of a technological artifact in six stages: problem identification and motivation; definition of the objectives for the solution; design and development; demonstration; evaluation; and communication. The problem identification stage referred to the need for real-time flood detection. The objective was to implement a hardware and software architecture to adequately support the services provided for in the project. In design and development, a test mockup called the "aquarium" was used, whose graphical representation is shown in Figure 2, which contextualizes all ten elements of the laboratory environment scenario. Demonstration and evaluation of the proposal were carried out in laboratory and field environments. The aquarium, built for the laboratory test environment, measures 100 centimeters in length by 50 centimeters in height and 70 centimeters in width, with a capacity to store 350 liters of water, and was intended to represent, not to scale, a riverbed and riverbank situation with the usual slope, simulating the field environment, with variables read by means of sensors. The experiment was carried out with manipulation of variables such as water flow and rain occurrence and quantity, simulated with a shower installed in the corner of the mockup. It is also important to note that the tests were carried out in order to demonstrate that the modularity of the IOTFlood architecture allows different sensor configurations to be coupled to the proposed solution.
IOTFlood solution hardware
The first challenge in designing the IOTFlood platform hardware was deciding which sensors are best suited to monitor water in the event of flooding. The most common level sensor technologies include ultrasound, guided wave radar and pressure transducers. The microcontroller most suitable to the proposal was then chosen; a microcontroller consists of a small and independent computer housed in a single integrated circuit. The option for IOTFlood was the LoPy4, due to its low cost, memory capacity, integrated LoRa® chip and embedded MicroPython language. The FiPy, a native MicroPython development board, was chosen as the gateway to enable the option of connecting via Wi-Fi or a carrier SIM card, thus allowing an extra standard connection mode with the possibility of operating in places that are difficult to access.
Printed circuit boards (PCBs) were designed with the implementation of two independent auxiliary circuits to integrate the microcontroller connected to the sensors. The field environment board was more compact, since it only needed to perform level (analog and digital), temperature, pressure, and rain indication and intensity readings. The laboratory board carried out the same measurements as the field board; however, it also displays visual signaling through LEDs that need to be activated by relays. Figure 3 describes the materials used in the IOTFlood hardware. The characteristics of the different sensors used in the research are detailed below: • HC-SR04 ultrasonic distance sensor: capable of measuring distances from 2 cm to 4 m with great accuracy. In IOTFlood, it was used to measure the distance from the sensor to the water surface.
• Neo6m GPS module: compatible with several flight controller modules, making it ideal for drones. The serial interface operates at 3.3 V. It was used in the research to indicate position (latitude and longitude) and the date and time of displacement in relation to the object.
• BMP280 pressure and temperature sensor: a barometer, used to measure atmospheric pressure and temperature. In addition, this sensor can also indicate the approximate altitude of the location where it is installed. The BMP280 module works with I2C or SPI interfaces and a 3 V supply, and its low energy consumption allows it to work for long periods on battery power, making it suitable for projects such as drones, weather stations, devices with GPS, clocks, etc.
• 9SS18 rain sensor: monitors a variety of climatic conditions, such as rain or snow, and was used in IOTFlood to capture rainfall incidence. When the weather is dry, the sensor output is in the high state, and it is in the low state in the presence of rain.
• LA16M-40 level sensor: indicates via an ON/OFF signal when the water level has been reached, being installed on the internal side of the test tank. For each water level to be detected, it was necessary to install a sensor at the desired point in the test tank and on the bridge in the field; four sensors were used in both. These water level data, when captured and presented in dashboards, signal to the agency the risk of flooding; in addition, warning messages are sent by e-mail to those responsible for environmental monitoring (a sketch of this level-to-alert mapping is given after the sensor list below).
• Harmony XB7 beacons: the Harmony XB7 monoblock line is composed of buttons, switches and control unit beacons, which simplifies acquisition and installation in power distribution machines and applications. In IOTFlood they were used to visually indicate, by lighting up, each water level detected by the sensors installed in the test tank.
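Because each LA16M-40 switch only reports an ON/OFF state, the mapping from the number of submerged switches to a flood-alert category is straightforward. The fragment below is an illustrative Python sketch of that mapping, not the authors' firmware; the pin names and alert labels are hypothetical.

```python
# Illustrative sketch (not the authors' firmware): map the four ON/OFF level
# switches to a flood-alert category. Pin names and labels are hypothetical.
LEVEL_PINS = ["P9", "P10", "P11", "P12"]    # one digital input per LA16M-40 switch
ALERT_LABELS = ["normal", "attention", "alert", "critical", "flood"]

def count_wet_switches(read_pin):
    """read_pin(pin) -> 0/1; returns how many switches are under water."""
    return sum(read_pin(pin) for pin in LEVEL_PINS)

def alert_label(wet_switches):
    # 0 wet switches -> "normal", all 4 wet -> "flood"
    return ALERT_LABELS[min(wet_switches, len(ALERT_LABELS) - 1)]

if __name__ == "__main__":
    # Fake reader for a dry run on a PC: pretend the two lowest switches are wet.
    fake_state = {"P9": 1, "P10": 1, "P11": 0, "P12": 0}
    wet = count_wet_switches(lambda pin: fake_state[pin])
    print(wet, alert_label(wet))
```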
IOTFlood solution software
To promote the integration and real-time visualization of data generated by sensors through an application for users, The Things Network (TTN) and Cayenne platforms were adopted because they are free and worldwide references in IoT projects.
The TTN platform is an open cloud-based network (network server) that globally connects LoRaWAN gateways and provides a set of tools to create low-cost, safe and scalable IoT applications. In turn, Cayenne is a free software platform that allows building, prototyping and sharing connected devices from IoT projects. Cayenne also provides an online dashboard and an app version for smartphones so that users can remotely monitor and control their IoT projects. The platform has a database with hundreds of LoRaWAN devices from different manufacturers, which automatically allows identifying a new sensor added to the application, whose data is captured and loaded on the user's dashboard.
The software embedded in IOTFlood was programmed in the MicroPython language in two modules, one for reading information from sensors and the other for transmission to the gateway through the LoRa network. The gateway handles data received from devices and forwards them to the TTN, where integration with the Cayenne server is performed to allow users to view information collected from sensors.
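As a rough illustration of those two modules (sensor reading and LoRa transmission), the sketch below follows the Pycom-style MicroPython LoRaWAN API commonly used on the LoPy4. It is not the authors' firmware: the OTAA keys, the AU915 region, the payload layout and the dummy sensor values are all placeholders that would have to match the real hardware and TTN application.

```python
# Sketch of the two-module firmware idea (Pycom-style MicroPython for the LoPy4).
# Keys, region, payload format and sensor values below are placeholders.
import time
import socket
import ustruct
import ubinascii
from network import LoRa

def read_sensors():
    # Module 1: read the sensors. Dummy values stand in for the real HC-SR04
    # distance (cm), BMP280 temperature (C) and pressure (hPa) readings.
    return {"distance_cm": 123.0, "temp_c": 27.5, "press_hpa": 1011.2}

# Module 2: join TTN over LoRaWAN (OTAA) and transmit periodically.
app_eui = ubinascii.unhexlify("0000000000000000")                  # placeholder
app_key = ubinascii.unhexlify("00000000000000000000000000000000")  # placeholder

lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.AU915)   # region is an assumption
lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)
while not lora.has_joined():
    time.sleep(2)

s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setblocking(True)

while True:
    r = read_sensors()
    # Pack three signed 16-bit integers, scaled by 10 (placeholder format).
    payload = ustruct.pack(">hhh",
                           int(r["distance_cm"] * 10),
                           int(r["temp_c"] * 10),
                           int(r["press_hpa"]))
    s.send(payload)
    time.sleep(10)   # the field test sent readings roughly every 10 seconds
```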
Solution architecture
The IOTFlood solution was designed to be used in two scenarios: laboratory and field. The laboratory environment simulated a flood scenario through the designed mockup called the "aquarium". In the field environment, IOTFlood could be tested in a natural environment, that is, in a river where several flooding episodes have already occurred.
In both environments, the IOTFlood architecture was divided into four modules formed by: data transmitting devices, data receiver, Cloud IoT (network) and the visualization interface for the user, as shown in Figure 4. It should be noted that the difference between the IOTFlood architectures developed for the laboratory and for the field was that, in the first environment, LEDs (light-emitting diodes) were used, which only work with a 220 V alternating voltage supply, while in the second environment the hardware intended for the field was designed to be powered with a 5 V continuous voltage from a battery, without the use of LEDs. In turn, the data read from the sensors are transmitted to the gateway, which forwards them to the TTN cloud for integration with Cayenne, and they are then made available to the user.
RESULTS AND DISCUSSION
Results were obtained in the two environments designed for the research. To evaluate the functionalities and the fulfillment of requirements for collection, transmission and processing of the prototype data, the following tests were performed on the solution architecture: (i) laboratory simulation, aiming to analyze the behavior of the prototype in place with controlled variables for simulations; (ii) installation on the Mundaú River, aiming to analyze the behavior of the prototype in a natural environment.
The laboratory results involved the analysis of simulated data on monitoring pressure, humidity, temperature and flood levels. Sensor data were sent to the gateway located 10 meters away from the sensing control device (endpoint), which was connected to the laboratory's Wi-Fi network. The endpoint was housed in an electrical box following the pattern of the two-phase and three-phase model, as shown in Figure 5, which also shows how data are made available online to the user in web and mobile versions. When the water reached the predetermined levels, the LED lights turned on, signaling relevant simulation events. Figure 6 presents the graphs of data captured by the sensors during the laboratory tests, whose analyses and discussions are described below:
• Barometer sensor: on Day 16, it was possible to notice a slight increase in atmospheric pressure (Point A). On the other days, tests confirmed a drop in pressure levels in the environment;
• Analog Distance Sensor (level): simulations of raising and lowering of the river level were carried out by adding water through a shower and removing it with the aid of a hose (Points B, C, D, E);
• Temperature Sensor: in the first days, tests were carried out at night and then during the day, which explains the variation in ambient temperature (Point F);
• Ultrasonic Distance Sensor: as in the field environment, there were routine simulations of river elevation through the addition of water, reaching different elevations, and of the corresponding removal, representing level reductions (Points G, H, I, K, L and M);
• Rain Intensity Sensor: using the sensor to identify rain occurrence and intensity, this phenomenon was simulated with the use of the shower (Points M, N, O, P, Q and R);
• Rainy Weather Sensor: with a sensor installed just below the shower, it was possible to simulate in real time the variation of weather without rain and with light and intense rain (Points S, T, U, V and X).
Since all IOTFlood hardware and software were adequately tested in the laboratory simulations, we proceeded to monitoring and communication tests for validation in the real environment. The chosen stretch of the Mundaú River is located in the city of União dos Palmares, state of Alagoas, which has already undergone major flood events (Freire and Natenzon, 2019). In addition to monitoring pressure, humidity, temperature and flood levels, data indicating the location of the equipment on the map were also collected. Data captured by the sensors were transmitted to the gateway, located 15 meters away from the endpoint and connected to a cell phone's Wi-Fi network. The electrical box with the devices was fixed at the base of the main river bridge, as shown in Figure 7. In the field environment, tests took place between 10:00 am and 06:30 pm. The results are discussed below:
• Barometer sensor: from Point A to Point B, there was a decrease in atmospheric pressure due to the milder temperature in the late afternoon, followed by a slight increase at Point C;
• Temperature sensor: there was an increase in temperature (Points D and E) from morning until late afternoon, with a decrease (Point F) in the early evening.
• Ultrasonic Distance sensor: no rise in the Mundaú River level was observed, but a slight reduction (Points H and I) of a few centimeters in the river level was observed at dusk;
• Rainy Weather sensor: there was no incidence of rain on the day of the tests;
• Analog Distance sensor (level): as observed with the ultrasonic sensor, there was no rise in the Mundaú River level; however, there was a slight reduction (Points O and P) of a few centimeters at dusk. The level sensors consisted of four sensors, separated by a distance of fifty centimeters, assembled on a structure two meters and fifty centimeters above the river level;
• Rain Intensity sensor: there was no incidence of rain on the day of the tests.
Data captured from a natural environment met expectations for validating the proposed architecture. The functioning of sensors in procedures of capturing and sending remote data occurred properly in the connection established in the field, with the endpoint sending data to the gateway every 10 seconds. Figure 9 presents a comparison of the performance of materials used in IOTFlood, especially sensors, during laboratory and field experiments.
CONCLUSIONS
The IOTFlood platform was built to capture data from a set of sensors for the environmental monitoring of river levels in real time, with high battery autonomy and long transmission range. Data collected by IOTFlood are transmitted to an Internet of Things server in the cloud and made available for user viewing. Tests carried out in the laboratory and in the field demonstrated that the LoRa technology met the architecture purpose, which was to transmit data to a gateway located up to 1.5 km away.
The IoT server in the cloud allows the creation of graphs for viewing the collected data, making them available both via web and mobile, in addition to allowing the creation of alarms to warn of anomalous situations. In addition, it is possible to configure the Cayenne application to send specific alert messages by e-mail or by the Short Message Service (SMS) according to the monitoring of the sensors.
Regarding project costs, which included a list of twelve materials, including the gateway and endpoint, after several tests to select the most suitable sensors, LEDs and other components, an architecture with an affordable cost was reached, as proposed in this research, totaling approximately US$ 300.00.
This study presented limitations regarding the capture of data in the real environment, as the sensors were exposed for only one day. Although the meteorological service indicated that it would be a rainy day, rain did not occur, making it impossible to accurately check the rain sensors. In addition, the time taken to install and remove the materials on the bridge, and the monitoring required to avoid theft problems, also prevented the procedure from being repeated on other days.
Further studies should explore the IOTFlood platform in different seasons and other stretches of the Mundaú River, with more variations of wind intensity, current and water level, submitting the solution architecture to other uncontrolled environmental variables. | 5,322.8 | 2021-07-27T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
BREAST DENSE CELL COUNT AND ANALYSIS IN DIGITAL MAMMOGRAM BY AUTO THRESHOLD METHOD.
Dr. (Mrs). D. Pugazhenthi 1 and Mrs. N. M. Sangeetha 2. 1. Assistant Professor, PG & Research Dept. of Computer Science, Quaid-E-Millath Govt. College for Women (A), Chennai, India. IEEE Member. 2. Research Scholar, PG & Research Dept. of Computer Science, Quaid-E-Millath Govt. College for Women (A), Chennai, India. IEEE Member.
Breast cancer is one of the leading diseases among women. In this research paper, we count the number of abnormal breast cells and find their positions with image processing techniques. The proposed work contains four stages: i) smoothing, ii) thresholding, iii) basic morphology, and iv) particle analysis. The segregation of the particular cells is the most important process of this work. The present work applies various filters used in image processing to detect the highly dense cells responsible for breast abnormalities. Two different types of filter operators in image processing, gradient and Laplacian operators, have been used and implemented. The LabVIEW and MATLAB software gave practical usability to this image processing system because they can interconnect with the other tools used in this system and control them to form an automatic system. The experimental results indicate the potential effectiveness of such a system on diagnostic tasks that require the classification of individual cells. The digital mammogram images and data samples have been taken from an online database for the detection of dense cells responsible for breast abnormalities. The main aim of this paper is to help the radiologist detect dense breast cells.
Introduction:-
To evaluate the results of this method, an approach for the detection of breast abnormalities in digital mammograms was also proposed. This research article presents an automated procedure for breast cell counting and its performance. The procedure for analyzing a dense breast cancer cell image consists of four steps: i) smoothing, ii) thresholding, iii) basic morphology, and iv) particle analysis. The goal of detection and segmentation is to locate and extract highly dense cells from the image. In digital mammogram images, this detection and segmentation play important roles in breast cancer classification between dense and non-dense tissue [1]. Computing technology has often proven useful in performing tedious or complex tasks quickly and accurately. In extensive previous work by this author and others, computational intelligence was used to identify breast lesions based on radiologists' impressions of various features visible in digital mammogram images [2]. In other areas of medicine, computational intelligence has been used to segment an image into its constituent parts, permitting automated assessments of tissue dimensions [3]. Methods based on thresholding and edge detection [4]-[5][7] have also been applied to the problem of cell separation. If the intensity rises in the image are large enough, then the cells can be easily separated from the background image [8].
The methodology used in this research paper is based on image processing tools such as image improvement (filtering, noise removal, particle analysis, etc.), image segmentation, the Vision Assistant and the LabVIEW software [9]. The most important property of this method is its accuracy while keeping the complexity low. Also, the noise removal algorithm that is used in this system for improving the counting ability of the breast cells is one of the most significant qualities of this research work.
Cancer Cell Image Analysis:-
In the image preprocessing, Gaussian smoothing filtering is performed. Global thresholding and morphological operations are performed in the segmentation. The segmentation results show that the average performance is 85% when a square-shaped structuring element is used. The experimental results show that the average performance is more than 90%.
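The original preprocessing and segmentation chain was built in LabVIEW/MATLAB; the Python/scikit-image fragment below is only an illustrative equivalent of the same chain (Gaussian smoothing, automatic Otsu global threshold, square-structuring-element morphology), applied here to a synthetic stand-in image since the actual mammogram data are not reproduced.

```python
# Illustrative Python/scikit-image equivalent of the preprocessing/segmentation
# chain (the original used LabVIEW/MATLAB); the input image is synthetic.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology

# Synthetic stand-in for a mammogram ROI: two bright blobs on a noisy background.
rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
image += 0.6 * (((xx - 40) ** 2 + (yy - 50) ** 2) < 12 ** 2)
image += 0.6 * (((xx - 90) ** 2 + (yy - 80) ** 2) < 8 ** 2)

smoothed = ndi.gaussian_filter(image, sigma=2)      # i) Gaussian smoothing
threshold = filters.threshold_otsu(smoothed)        # ii) automatic (Otsu) threshold
binary = smoothed > threshold

selem = morphology.square(3)                        # iii) basic morphology
binary = morphology.binary_opening(binary, selem)
binary = ndi.binary_fill_holes(binary)

print("dense-pixel fraction:", binary.mean())
```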
DATASET DESCRIPTION:-
The digital mammogram is a low-energy X-ray of the breast. It is one of the most efficient screening tools compared to other screening tools. The digital mammogram images were taken from an online database.
Proposed Methodology:-
All the digital mammogram images have to be transferred into the LabVIEW software. After that, the image processing is continued by using LabVIEW and MATLAB together. The image enhancement, segmentation, etc. are done with these two software packages. The system for this process is shown in Fig. 1. In Fig. 2, the original digital mammogram image that is used for the image processing system is shown. In this picture, particular cells are shown beside the normal cells. Fig. 3 shows that some of the cells are marked. These are the defective cells, and the goal is to mark them among the other cells in the digital mammogram image with image processing.
Filters:-
Smoothing (Gaussian): smooths the image using a Gaussian filter. The effect of Gaussian smoothing is to blur a picture, in a similar way to the mean filter. The degree of smoothing is determined by the standard deviation of the Gaussian. The Gaussian smoothing process yields a "weighted average" of each pixel's neighborhood, with the average weighted more towards the value of the central pixels. This is in contrast to the mean filter's uniformly weighted average. Because of this, a Gaussian provides moderate smoothing and preserves edges better than a similarly sized mean filter. As can be seen, the dense cells are extracted and highlighted among the other cells very well. This technique can be used in large-scale cell checks and can improve the accuracy and speed of the examination and counting process of the highly dense affected cells.
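To make the "weighted average" point concrete, the short NumPy fragment below contrasts a Gaussian kernel with a uniform mean-filter kernel of the same size; the 5 x 5 size and sigma = 1 are arbitrary illustrative choices, not values taken from the paper.

```python
# Contrast a Gaussian kernel (centre-weighted average) with a mean-filter kernel
# (uniform average) of the same 5x5 size; sigma = 1 is an arbitrary choice.
import numpy as np

size, sigma = 5, 1.0
ax = np.arange(size) - size // 2
xx, yy = np.meshgrid(ax, ax)
gaussian = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
gaussian /= gaussian.sum()                     # weights sum to 1, peak at the centre
mean_kernel = np.full((size, size), 1.0 / size ** 2)   # all weights equal

print(np.round(gaussian, 3))
print("centre weights:", gaussian[2, 2], "vs", mean_kernel[2, 2])
```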
By counting the detected cells, changes in the effect of medical treatment on the growth of the cancer cells can be followed.
Filters: Edge Detection-Laplacian:-
Highlights the edges in the image using a Laplacian filter. This operation finds edges in the filtered image and adds them to the original image. This process helps increase the separation between the cells and the background.
The filters are used in the course of analyzing the image by locating the sharp edges, which are discontinuities. These breaks bring changes in pixel intensities which define the boundaries of the object. The object here is the breast, and a new methodology is applied to identify the breast type using its morphological features. Different two-dimensional filters are applied, comparative studies are made and the results are displayed. In this edge detection method, the assumption is that edges are the pixels with a high gradient. A fast rate of change of intensity in the direction given by the angle of the gradient vector is observed at edge pixels. In Fig. 3, an ideal edge pixel and the corresponding gradient vector are shown. At the pixel, the intensity changes from 0 to 255 in the direction of the gradient. The magnitude of the gradient indicates the strength of the edge. Calculating the gradient in uniform regions ends up with a zero vector, which means there is no edge pixel. Natural images usually do not have the ideal discontinuities or uniform regions as in Fig. 1, so the magnitude of the gradient is processed to make the decision to detect the edge pixels. The fundamental processing applies a threshold: if the gradient magnitude is larger than the threshold, the method decides that the pixel is an edge pixel. An edge pixel is described by two important features: primarily the edge strength, which is equal to the magnitude of the gradient, and secondarily the edge direction, which is equal to the angle of the gradient. A gradient is not defined at all for a discrete function; instead, the gradient, which can be defined for the ideal continuous image, is estimated using some operators. Among these operators, Roberts, Sobel and Prewitt are gradient-based edge detectors. Adv. Morphology: Fill Holes. Advanced morphology is one of the essential parts; it fills holes of any size in a particle. This process helps to differentiate between the sizes of the breast cells.
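The edge-decision rule described above (estimate the gradient with a Sobel-type operator, keep pixels whose gradient magnitude exceeds a threshold, then fill holes) can be sketched in Python/SciPy as follows. This is an illustration of the rule on a synthetic disc image, not the authors' LabVIEW implementation, and the threshold value is arbitrary.

```python
# Sketch of the gradient-threshold edge decision plus hole filling (illustrative;
# the original used LabVIEW/MATLAB). The threshold value is arbitrary.
import numpy as np
from scipy import ndimage as ndi

def edge_map(image, threshold=50.0):
    gx = ndi.sobel(image, axis=1, output=float)   # horizontal gradient
    gy = ndi.sobel(image, axis=0, output=float)   # vertical gradient
    magnitude = np.hypot(gx, gy)                  # edge strength
    direction = np.arctan2(gy, gx)                # edge direction (radians)
    return magnitude > threshold, magnitude, direction

if __name__ == "__main__":
    # Synthetic test image: a bright disc (255) on a dark background (0).
    yy, xx = np.mgrid[:64, :64]
    disc = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(float) * 255
    edges, mag, _ = edge_map(disc)
    filled = ndi.binary_fill_holes(edges)         # the "fill holes" morphology step
    print("edge pixels:", int(edges.sum()), "filled pixels:", int(filled.sum()))
```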
Particle Filter:
Removes unwanted particles from the binary image using the particle filter. This application only analyzes particles that are mostly circular and larger than 10 pixels. Particles that have a Heywood circularity factor falling outside the 0 to 1.40 range, or that have a pixel area of less than ten, are removed.
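A plain-Python sketch of that particle filter is given below: it labels the connected particles, computes an approximate Heywood circularity factor (perimeter divided by the circumference of the equal-area circle) and keeps particles with a factor of at most 1.40 and an area of at least 10 pixels. Using scikit-image's area and perimeter measures is an assumption about how the NI Vision measurements would be reproduced.

```python
# Illustrative reimplementation of the particle filter (the original used the
# NI Vision particle tools); area/perimeter come from scikit-image regionprops.
import numpy as np
from skimage import measure

def filter_particles(binary, max_heywood=1.40, min_area=10):
    labels = measure.label(binary)
    kept = np.zeros_like(binary, dtype=bool)
    for region in measure.regionprops(labels):
        area, perimeter = region.area, region.perimeter
        if area < min_area or perimeter == 0:
            continue
        heywood = perimeter / (2.0 * np.sqrt(np.pi * area))  # 1.0 for a perfect circle
        if heywood <= max_heywood:
            kept[labels == region.label] = True
    return kept

if __name__ == "__main__":
    img = np.zeros((40, 40), dtype=bool)
    img[5:15, 5:15] = True        # compact square blob -> expected to be kept
    img[20:22, 20:35] = True      # thin elongated blob -> expected to be rejected
    print("kept pixels:", int(filter_particles(img).sum()))
```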
Representing particle filters
Particle Analysis:-Analyzes the properties of the remaining particles (cells) in the image. The particle measurement function can analyze up to 50 different properties of a particle.
Conclusion:-
The digital mammogram is a fundamental and efficient screening tool to count and analyze breast cells. The highly dense cells are extracted and highlighted among the other cells. This method was used on an online database for the evaluation and counting of the highly dense cells in the breast. By counting and analyzing the dense breast cells, the experimental results show the effect of medical treatment.
"Computer Science"
] |
Security Enhancement of 2-D DIM Codes using 4 x 1 NOR Logic based on FISO
Security enhancement and bandwidth increase are requirements of ultrafast future optical code division networks, which must also minimize the latency at nodes without degrading the traffic performance. In OCDMA systems, an eavesdropper tries to breach the optical code and the encoded/encrypted data in order to get the correct code word of the legitimate user. For security enhancement, an ultra-high-speed 4×100 Gbps two-dimensional diagonal identity matrix (2D-DIM) code and a four-input single-output (FISO) all-optical logic gate (AOLG) with private key encryption is proposed, and its performance in terms of security is analyzed. Expressions for synchronous and asynchronous transmissions are derived for the proposed work, and it is observed that asynchronous transmission has better security. Further performance/security analysis is carried out for different scenarios, such as with/without FISO, with 1D-DIM/2D-DIM codes, and using a 4×1 AND gate/NOR gate, in terms of correct code detection rate (CCDR), maximum eye amplitude (MEA), received Q factor and signal-to-noise ratio (SNR) at different launched power levels for the legitimate user and the eavesdropper. The eavesdropper attacked the proposed system but, due to the FISO layering, private key encryption and 2D-DIM codes, cannot obtain an acceptable level of CCDR. To the best of the authors' knowledge, the proposed system is ultra-high-speed with enhanced security, and no reported studies have investigated security enhancement at such an ultra-high speed.
Introduction
OCDMA systems have been deployed in military and commercial communication systems for data transmission [1][2]. Network security is a prime requirement in these systems due to highly sensitive information [3]. Three different security schemes are prominent for boosting the security of networks: (i) algorithmic cryptography, (ii) quantum key distribution (QKD) and (iii) physical layer cryptography [4]. Algorithmic cryptography is used in the upper layers (layer 3 and above) and is constantly prone to security breaching. QKD provides high security, but with a trade-off between distance and data rate. Nowadays, physical layer cryptography has received attention because of enhanced security at intermediate data rates and performance. Further, the different strategies which an eavesdropper may use to get the signal are energy detection, code interception and differential detection. An eavesdropper can easily detect the "1" and "0" in the case of non-return-to-zero modulated transmission with a simple photodetector [5]. Multicode keying is demonstrated in [6][7] and differential detection is used in [8]. Multicode keying is the most widely used technique for ciphertexting to randomize data in OCDMA for security purposes. Multicode keying assigns multiple binary bits for each one and zero of the authentic user's bits, but it wastes the bandwidth of the medium. Cryptanalysis is a method to decrypt signals when the key is not known to the eavesdropper [9], and the different attacks are the known-plaintext attack, the chosen-plaintext attack and the ciphertext-only attack. In the case of a known-plaintext attack, the plaintext and ciphertext are known but the key is not available. In a chosen-plaintext attack, there is freedom to select the plaintext, while the ciphertext and the key are not known. In a ciphertext-only attack, the plaintext is not available. For security enhancement, many techniques are incorporated into OCDMA systems, such as AOLGs [10][11], multi-code keying [12], steganography [13], the virtual user scheme [14], etc. Logic gates are prominently deployed in OCDMA systems due to simple encryption and decryption. Semiconductor optical amplifier (SOA) based XOR logic has been proposed at 120 Gbps [15], as well as code swapping using an XOR gate [16], a quantum logic gate code with a NAND gate [17] and a multi-diagonal coded NAND gate [18]. Optical logic gates with SOA deployment are studied in the literature, but the SOA's data rate capability is perceived to be limited [19]. Fiber nonlinearities are a popular technique for generating high-speed optical logic gates, owing to their picosecond operation [20]. As researchers studied and engineered optical logic gates with highly nonlinear fibres, they came across the issue of long HNLFs that may decrease system compactness. Logic gates in OCDMA systems for security enhancement typically have two input ports and a single output port (2 x 1 logic). Minimal power losses are desired for long-reach systems, but multiple 2 x 1 logic gates introduce power degradation. As a result, AOLGs with more than two inputs are required to reduce power losses.
In this research article, for the first time, we demonstrate a 4 x 100 Gbps OCDMA system using diagonal identity matrix codes [21] with ciphertexting using a private key and four-input single-output (FISO) all-optical DCF-MZI based NOR/AND gates. In order to breach network security, eavesdroppers have to break not only the logic gate operations but also the private key ciphertexting.
The introduction to OCDMA systems and security enhancement is given in Section 1, and the code construction of 2D DIM codes is given in Section 2. The concept of all-optical gating is presented in Section 3. The system setup and system parameters are described in Section 4, and this is followed by results and discussion. Graphical representations for the different parameters are included in Section 5. The conclusions/outcomes of the proposed work are given in Section 6.
Code construction of 2 dimensional DIM codes
For the OCDMA system implementation, the DIM codes presented by us in [19] are used, and these are upgraded to two-dimensional DIM codes. K is the number of users, W is the weight of the code, L is the length of the code, λc is the cross-correlation and λa is the auto-correlation. For the code sequences x = (x1, x2, x3, …, xN) and y = (y1, y2, y3, …, yN), the auto-correlation and cross-correlation are given in Eqs. (1) and (2), respectively, where 0 < m < N-1. In DIM codes, λc = 0 and λa = 2. The final code matrix size is K × L and the length of the code is K × W. There is freedom to select any weight ≥ 2. The size of the base matrix (IB) is 2 × W, and the balance and basic matrix of size Y × Z is given as shown. In matrix M2, the placement of the identity matrix is repeated K × L times, as shown in Eq. (5). Wavelengths are represented row-wise and time in columns. The time t is selected such that j is the number of the time slot (j = 1, 2, 3, 4, …, N) and tb is the bit time slot. A performance comparison was carried out in the reported work [19], and the 1D DIM codes emerged as an optimal code compared with the double diagonal weight, multi-diagonal weight, enhanced diagonal weight and Walsh-Hadamard codes. Therefore, in this work we upgrade the 1D DIM code to 2D DIM codes.
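The quoted correlation properties (λa = 2 and λc = 0 for weight-2 codes) can be checked numerically. The fragment below reads Eqs. (1)-(2), whose displayed forms did not survive extraction, as the ordinary in-phase correlations (an assumption) and uses two illustrative weight-2 codewords with disjoint chip positions rather than the paper's actual code matrix.

```python
# Numerical check of the quoted correlation properties, reading Eqs. (1)-(2) as
# the in-phase correlations (an assumption). The two weight-2, length-8 codewords
# below are illustrative, not the paper's code matrix.
import numpy as np

x = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # user 1: weight W = 2, length L = 8
y = np.array([0, 0, 1, 1, 0, 0, 0, 0])   # user 2: disjoint chip positions

auto = int(np.dot(x, x))    # in-phase auto-correlation  -> 2  (= lambda_a = W)
cross = int(np.dot(x, y))   # in-phase cross-correlation -> 0  (= lambda_c)
print("auto:", auto, "cross:", cross)
```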
Concept of all optical gating
where P is the power of the channel, n2 is the nonlinear refractive index, Aeff is the fiber effective area, Δλ is the spacing between channels and D is the chromatic dispersion.
Figure 1. Carrier spectrums of (a) the input signal and (b) after the nonlinear medium (output power versus carrier frequency in THz).
Four wave mixing
Dispersion compensation fiber (DCF) in a Mach-Zehnder interferometric (MZI) configuration is employed. Compensation of dispersion with DCFs is prominently observed in optical communication systems [23][24]; DCF has a negative dispersion from -70 to -90 ps/nm/km. The power of the signal or probe significantly affects the FWM in the DCF: the higher the probe power, the better the signal quality. On the contrary, a high pump power suppresses the power level of the ones and zeros. For better power and shape of the signal pulse, optimal power levels are needed. We present a four-input single-output AND/NOR gate for security enhancement using the DCF-MZI configuration.
The output of the AND gate will be true if all the inputs are true, and for the NOR gate to be true, all input bits should be false. In simple words, 4 input bits having 0 on all ports provide 1 for NOR only, and having 1 on all ports provide 1 for AND only. The concept diagram of the FISO is represented in Figure 2 using the DCF-MZI structure, and the gate logic tables are shown in Table 1 (a)(b).
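As a quick sanity check of the gate logic summarized in Table 1, the snippet below enumerates all sixteen 4-bit input combinations and confirms that the NOR output is 1 only for 0000 and the AND output is 1 only for 1111; it models the Boolean logic only, not the optical DCF-MZI implementation.

```python
# Enumerate all 16 four-bit inputs and verify the FISO gate logic of Table 1:
# NOR is 1 only for 0000, AND is 1 only for 1111 (Boolean logic only).
from itertools import product

for bits in product((0, 1), repeat=4):
    nor_out = int(not any(bits))
    and_out = int(all(bits))
    print(bits, "NOR:", nor_out, "AND:", and_out)
```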
Figure 1. Proposed FISO AND/NOR gate using DCF-MZI (S: signal, P: pump; optical filter, AND logic). Four signals SA, SB, SC, SD are generated from an optical laser source, amplified with an erbium-doped fiber amplifier (EDFA) and, for the wavelength multiplexing, sent to part (A). Pk is the private key (explained in Section 4), as depicted in part (B).
Figure 2. Proposed FISO AND/NOR gate using DCF-MZI with encryption (S: signal, Pk: private key; optical filter, AND logic). The FWM nonlinear medium in part (C) consists of two DCFs in MZI configurations for destructive and constructive interference. The upper port of the FISO unit shows the carriers at the OSA, while the lower port shows only noise. Therefore, constructive interference occurs at the upper port; further, in order to analyze the output data behaviour, the signal is divided into two parts, and each part consists of an optical Bessel filter of variable frequency. Varied frequencies are used to check which wavelengths provide data according to a given logic gate. It is observed that the data after filter 1 act as NOR logic data and the data after filter 2 act as the output data of the AND gate. Part (D) shows that the NOR gate output is high when all the inputs are low; in the case of part (E), which acts as the AND gate, the output is high when the inputs are also high. The proposed FISO system is used for the security enhancement of the OCDMA system, and the legitimate user security system is depicted in Figure 2. Data of four 2D DIM users are passed through the FISO system and, in order to baffle the eavesdropper, a private key (Pk) in place of the pump signal is multiplexed with the DIM data and passed through the DCF-MZI. The AND gate output from the FISO system is neglected to increase the number of cases the eavesdropper must consider to get the legitimate data and to keep the eavesdropper in illusion. The NOR gate output is communicated through the optical fiber and then passed through a DCF-MZI. Pumps equal to the DIM user wavelengths are combined in the DCF-MZI, and then NORing with Pk is performed again to get the legitimate user data. Figure 2 represents the 2D DIM code with optical layering and the private key based OCDMA system. Legitimate data can be attacked in OCDMA networks, and an eavesdropper can use a known-plaintext attack to get the authentic information. In security-enhanced OCDMA systems, first and foremost data layer security is performed, and the optical encoder then performs physical layer security. Optical decoding is implemented at the receiver to retrieve the legitimate data; in this work, wavelength-time spreading codes are implemented. The security enhancement of the 2D DIM based OCDMA system is proposed and investigated in Optiwave Optisystem TM. Four users are selected from the DIM code matrix having weight (W) 2, code length (L) 8 and cross-correlation 0. The code matrix for the 2D DIM code is given as follows.
System Model
A laser array with eight frequencies (193.1 to 193.8 THz), each at 100 Gbps, provides a continuous pulse train, and the spacing between channels is 100 GHz. From the power optimization, an input power of 10.877 dBm is selected; the pseudo-random bit sequence generator data are modulated with NRZ, and the electrical-to-optical conversion is further performed by a Mach-Zehnder modulator (MZM).
In order to realize the 2D DIM users, time delays (t) are added to each wavelength, as shown in Figure 3 (a), and t is calculated according to equation (8). In [25], code interception by an eavesdropper for a single user is given, and the probability of comprehending the legitimate user data at the eavesdropper is denoted as PC; it is expressed in terms of the probability of missed detection of a transmitted pulse, the probability of false detection when no pulse is transmitted, the code length L, the code weight W and the number of wavelengths.
The Marcum Q function is expressed in Eq. (16); its arguments involve the detection threshold and the peak pulse energy to noise power spectral density ratio E/N.
Here the zeroth-order modified Bessel function of the first kind is represented as I0(x). Linear cryptanalysis, in which output ciphertexts are utilized, is termed a known-plaintext attack [26]. The time required to obtain output ciphertexts by the eavesdropper using linear cryptanalysis is given in terms of the time required to perform encryption with the Advanced Encryption Standard, the time needed to intercept the complete code once, and the probability of complete detection of the n user codes at the eavesdropper.
An SMF of 50 km is deployed, and the output is given to the logic gate decoder, a DCF-MZI unit having the same DCF lengths, with eight laser wavelengths equal to the DIM transmitter wavelengths/frequencies. FWM again emerges, and the data from 193.93 THz spread onto the other laser wavelengths; the specific legitimate output wavelength is selected with a demultiplexer. Negative time delays corresponding to a specific user are given to the wavelengths, such as t1 = -0.1 ns, t2 = -0.2 ns, t3 = -0.3 ns, t4 = -0.4 ns, t5 = -0.5 ns, t6 = -0.6 ns, t7 = -0.7 ns, t8 = -0.8 ns, and the general receiver is shown in Figure 3 (b). A PIN photodetector, a low-pass filter, a 3R regenerator and a BER analyzer together make up the receiver.
Results and Discussions
Equation (17) reveals the increase and decrease in the time complexity for detecting the legitimate user data at the eavesdropper for different parameters; for instance, an increase in the signal-to-noise ratio (SNR) level at the eavesdropper reduces the time complexity. At lower levels of SNR at the eavesdropper, the security increases, and therefore complex codes are preferred. For multiple users, there are two transmission schemes: synchronous and asynchronous transmission.
Synchronous OCDMA systems (S-OCDMA)
A total of N OCDMA-encoded users with NRZ data line coding travel inside the fiber simultaneously, operating at B bits per second. The probability of transmitting a logic '1' by a specific user in a given bit period is 1/2, and the probability that '0' is transmitted by the other N-1 users in the same bit period is 1/2^(N-1). The probability that one user transmits '1' while the other users transmit '0', assuming that the data bits are independent of each other, is given as N/2^N. The time for which the eavesdropper has to wait to get a bit of a single user is 2^N/(B×N).
Time required for breaching S-OCDMA security
Code interception is used by the eavesdropper to detect a single legitimate user's code, and the ciphertext information can be taken. Once the eavesdropper gets the ciphertext, the encryption key will be deciphered.
The time required for breaching S-OCDMA security is given in Eq. (19), where Tea is the eavesdropper's time to attack the encryption algorithm, Tei is the eavesdropper's time to get the address code after interception, and Pen is the eavesdropper's probability of getting the correct user's code by the nth interception.
Asynchronous OCDMA systems (A-OCDMA)
In A-OCDMA, no synchronization is needed; in this case, during the time one user transmits a '1', all other users (N-1) transmit fractions of 2 adjacent bits over the same duration. For the eavesdropper to single out one user, when that user transmits '1' for a fixed duration, all other users (N-1) must transmit 2 adjacent '0's over the same time duration. The probability of transmitting '1' by one user in a given time period is 1/2, and for the N-1 users, which must transmit '0' on two overlapped bits, the probability is 1/2^(2N-2). The eavesdropper's waiting time to capture one user's transmission is 2^(2N-2)/(D×N).
Time required for breaching A-OCDMA security
The time required for breaching A-OCDMA security is given in Eq. (20). A comparison of S-OCDMA and A-OCDMA in terms of the time required by the eavesdropper to get the correct code is clear from Eq. (20) and Eq. (22): due to the greater number of cases in A-OCDMA (2^(2N-2)) compared with S-OCDMA (2^N), the security of the former system is higher, i.e., the time required by the eavesdropper to get the correct code is longer. Therefore, A-OCDMA is more secure than S-OCDMA systems.
If the total number of users, which travel asynchronously, is not known by the eavesdropper, the eavesdropper performs code interception on each bit period, and the corresponding probability follows. The number of users in the optical network is directly proportional to the security of the system: if the eavesdropper does not know the number of users in the system, the security improves, and when it gets to know the total number of users, the security reduces, as is clear from Eq. (21) and Eq. (23).
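Applying the counting expressions above exactly as stated, a short calculation illustrates how much longer the eavesdropper must wait per intercepted user bit in the asynchronous case; here N = 4 users and a 100 Gbps bit rate are taken from the system description, and the rate D in the asynchronous expression is assumed to equal the same 100 Gbps.

```python
# Eavesdropper waiting time per intercepted user bit, applying the expressions
# from the text: 2**N/(B*N) for S-OCDMA and 2**(2*N - 2)/(D*N) for A-OCDMA.
# N = 4 and B = D = 100 Gbps follow the system description (D = B is an assumption).
B = D = 100e9   # bits per second per user
N = 4           # simultaneous users

t_sync = 2 ** N / (B * N)
t_async = 2 ** (2 * N - 2) / (D * N)

print(f"S-OCDMA wait: {t_sync:.3e} s")
print(f"A-OCDMA wait: {t_async:.3e} s ({t_async / t_sync:.0f}x longer)")
```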
Performance and security analysis in Optisystem
The 2-D DIM OCDMA system, having a capacity of 4×100 Gbps over 50 km and using the FISO unit, is investigated in the Optisystem software. The spectrum of the four users' multiplexed frequencies, 2-D coded using DIM codes, is depicted in Figure 4 (a), and the carrier spectrum after the addition of the private key is shown in Figure 4 (b). As discussed in Section 4, the data of the four users, at a rate of 100 Gbps/user, are fed to the FISO unit together with Pk, and FWM takes place inside the FISO unit due to the MZI-DCF configuration, as shown in Figure 4 (c). An iteration is set to select different frequencies at filter 1 and filter 2 in order to check the behaviour of the filters towards the data. The optical bit patterns of the four users are shown in Figure 5 (a)(b)(c)(d); Pk is then added, as shown in Figure 5 (e). The bit sequences at the NOR gate and the AND gate after the respective filters are represented in Figure 5 (f)(g). Figure 5 (h)(i)(j)(k) shows the received bit patterns in the proposed system after decoding of the FISO data at the legitimate users, i.e., the 1st, 2nd, 3rd and 4th users. The launched power has significant effects on the quality and SNR of the received signal, both at the legitimate user and at the eavesdropper. The performance of the FISO-2D DIM system is affected by the launched/input power, but due to the presence of FWM in the FISO, there is a nonlinear response for both the legitimate user and the eavesdropper, as illustrated in Figure 6. The power is optimized by observing the received Q at power levels varied from -20 dBm to 20 dBm. Figure 6 shows the highest Q at 10 dBm launched power for both the legitimate user and the eavesdropper, but the eavesdropper has a lower Q factor. The SNR increases with the increase in launched power, with approximately the same values at the eavesdropper for all power levels, and the highest SNR is observed at 10 dBm launched power. The investigated parameters are analyzed in terms of both security and performance, which shows that the security and performance of the proposed system are enhanced.
The system security with and without the FISO unit is investigated in terms of the correct codeword detection rate (CCDR) at the legitimate user and at the eavesdropper, at diverse launched powers. The FISO system enhances the security of the system but also introduces insertion losses as well as nonlinear effects. Therefore, it is evident from Figure 7 that the performance in terms of CCDR at the legitimate user without FISO is the best, followed by the legitimate user with FISO. What differs between the two systems, however, is the security of the communication: the results reveal that the eavesdropper, in the absence of the all-optical FISO system, has a greater success rate in receiving the correct code word, whereas in the other case, where FISO is deployed, the CCDR of the eavesdropper is negligible compared with the aforementioned system. Figure 7: CCDR responses at different launched powers with and without the FISO unit. The AND gate and NOR gate are both obtained from the FISO system, and we have taken the NOR gate due to its high output power; in terms of security, the better optical logic gate is selected after analysis. The analysis is performed at different launched power levels, and the system with the AND gate provides less security to the legitimate user in the proposed system. AND gate encoding is common these days and has become easy for an eavesdropper to decode. However, our FISO system is more secure than the conventional system, and here the NOR gate performs best because of the logic operation of NOR. Table 3 (a) and (b) shows the logic operation of the NOR and AND gate, respectively, on the input data bits and Pk. In this work, 2D-DIM codes are investigated with FISO, and a comparison is carried out in Figure 9 with 1D-DIM codes with FISO in terms of the maximum eye amplitude (MEA). The MEA is least in the case of the eavesdropper with 2D-DIM codes, and the MEA performance of 1D-DIM is highest because the 2D-DIM code has some time-skew limitations; however, the eavesdropping MEA is also highest for 1D-DIM, so it is less secure than 2D-DIM. Figure 10 shows the eye diagrams at 50 km SMF distance for the legitimate user and the eavesdropper. The eye opening at the legitimate user is wide because of the encryption key and the FISO system. The eavesdropper attacks the proposed system but gets insignificant information; therefore, the errors are more numerous and the eye opening is very small. The encrypted legitimate user data reach the optical network unit with improved quality, better CCDR and high SNR. Therefore, the proposed 2D-DIM FISO system is suggested for very high speed OCDMA systems in services where data protection is of utmost concern.
Figure 9 Comparison of 1D-DIM and 2D-DIM codes in terms of MEA (a) (b) Figure 10 Eye diagram of proposed system for (a) legitimate user (b) eavesdropper at 50 km
Conclusion
A confidentiality-enhanced ultra-high-speed OCDMA system is proposed in this work with all-optical FISO gating and private key encryption using 2D codes. The security enhancement is such that the eavesdropper has to breach the physical-layer optical code, the FISO operation and also the encryption key in order to get the correct code word of the legitimate user. A large number of users in A-OCDMA provides far better security than S-OCDMA. The Optisystem simulation tool is used for the realization of the proposed work, and the system is analyzed at different launched powers in terms of CCDR, MEA, received Q factor and SNR for the legitimate user and the eavesdropper under different scenarios, such as the system with/without FISO, with 1D-DIM/2D-DIM codes, and using a 4×1 AND gate/NOR gate. For successful detection of the legitimate user data, the eavesdropper needs a matched decoder/encryption at the same time. The results reveal that the proposed system with FISO at 10 dBm launched power provides a CCDR of 10^-10 at the legitimate user and a CCDR of 10^-3 at the eavesdropper; without FISO, the system shows a CCDR of 10^-10 at the legitimate user but is very much prone to eavesdropping and provides a CCDR of 10^-7. The FISO system improves the security of the proposed system; the analysis is also performed for 1D-DIM and 2D-DIM, and it is seen that eavesdropping is easier in the 1D-DIM system. The NOR gate, due to its logic operation and high output power from the FISO, offers improved security compared with the AND gate. Therefore, the proposed system is a good candidate for secure transmission, and in the near future, the effect of uni-phase, bi-phase and quad-phase modulation can be studied.
Declaration:
No funding was received for the research work presented in this manuscript.
Figure 5. Optical bits with respect to time for (a) 1st user (b) 2nd user (c) 3rd user (d) 4th user (e) Pk (f) NOR gate (g) AND gate (h) received 1st user (i) received 2nd user (j) received 3rd user (k) received 4th user.
"Computer Science"
] |
Paleogene – Early Miocene deformations of the Bukulja – Venčac crystalline (Vardar Zone, Serbia)
Low-grade metamorphic rocks of the crystalline of Mts. Bukulja and Venčac, which are integral parts of the Vardar Zone, are of Late Cretaceous age. From the Middle Paleogene to the beginning of the Miocene, they were subjected to three phases of intensive deformations. In the first phase, during the Middle Paleogene, these rocks were subjected to intense shortening (approximately in the E–W direction), regional metamorphism and deformations in the ductile and brittle domains, when first-generation folds with NNE–SSW striking fold hinges were formed. In the second phase, during the Late Oligocene and up to the Early Miocene, extensional unroofing and exhumation of the crystalline occurred, which was followed by intrusion of the granitoid of Bukulja and refolding of the previously formed folds into a simple brachial form of Bukulja and Venčac with an ESE–WNW striking B-axis. The third phase was expressed in the Early lowermost Miocene (before the Ottnangian), under conditions of NE–SW compression and NW–SE tension. It was characterized by wrench-tectonic activity, particularly by dextral movements along NNW–SSE striking faults.
Introduction
The crystalline of Bukulja and Venčac, with its non-metamorphosed Mesozoic-Cenozoic cover and Oligocene-Miocene granitoid, spatially belongs to the Vardar Zone (Fig. 1). These are terrains with complex geological compositions which have been discussed many times, often with controversial explanations.
There are dilemmas about the age of the crystalline in the first place, which directly influenced different explanations of the tectonics of these terrains. The crystalline has most often been considered to be of Paleozoic age (SIMIĆ 1938; FILIPOVIĆ 1973; FILIPOVIĆ & RODIN 1980; ĐOKOVIĆ et al. 1995; TRIVIĆ 1998). Such an opinion is mostly based on the fact that these are rocks of different metamorphic grade, while there are no reliable paleontological proofs of any kind. However, according to findings of globotruncana and other fauna and on the basis of palynologic data from low-metamorphic rocks of Venčac, BRKOVIĆ et al. (1980) and MAROVIĆ et al. (2005), respectively, concluded that the Bukulja-Venčac crystalline is of Late Cretaceous age.
According to its age, folding of the area has also been explained in different ways. ĐOKOVIĆ & MAROVIĆ (1985, 1986) separated three generations of folds in these terrains. These authors related the first fold generation to Hercynian deformation, which is marked by NE-SW striking fold axes. In the second phase, during older Alpine tectogenetic events, the Hercynian fold structures were refolded into E-W striking folds. The third generation of folds is the consequence of a pluton intrusion and further refolding of all the existing folds into large domes and brachial synclines. TRIVIĆ (1998) separated three (? four) generations of folds. According to this author, axes of the oldest, Hercynian structures are oriented in the WNW-ESE direction. These structures were refolded into folds with NNW-SSE striking axes during the first phase of Alpine deformation in the Mesozoic. Later, during the later Alpine phases, the geometry of such folds became more complex due to a pluton intrusion and strike-slip movements along E-W striking faults. MAROVIĆ et al. (2005) considered the metamorphic rocks of Bukulja and Venčac to be of Late Cretaceous age, and the authors are of the opinion that there are only Alpine and no Hercynian folds in these rocks.
The relationship between the crystalline and the non-metamorphosed Cretaceous (prevailingly Late Cretaceous) deposits, including tectonically incorporated slices of serpentinite, is unclear and has been explained in different ways. Sedimentary deposits are widespread on the surface, mostly north, east and southeast of the Bukulja–Venčac crystalline, and they were also drilled under the Neogene deposits of the Aranđelovac and Belanovica Basins. There are also isolated and disconnected portions of Cretaceous sediments on the southern rim of the crystalline. All this points to the possibility that the crystalline was completely covered by Cretaceous sediments. The majority of authors are of the opinion that the Cretaceous sediments transgressively overlie the crystalline. According to TRIVIĆ (1998), metamorphic rocks were thrust over Cretaceous sediments in certain parts of the terrain in the south. BRKOVIĆ et al. (1980) and ĐOKOVIĆ & MAROVIĆ (1986) mentioned sections where metamorphic rocks gradually transit into non-metamorphosed Upper Cretaceous deposits.
Finally, in accordance with the different interpretations of the geologic composition, the geotectonic position of the Bukulja–Venčac crystalline unit is also controversial. Its metamorphic content resembles the Drina–Ivanjica crystalline (ĐOKOVIĆ et al. 1995). The Bukulja–Venčac crystalline is located on the eastward extension of the Jadar Block, which is made of Paleozoic rocks. This fact led FILIPOVIĆ (1995), FILIPOVIĆ & JOVANOVIĆ (1998) and FILIPOVIĆ (2005) to include at least a part of it (the western part of Bukulja) in the Jadar entity. There is also the opinion that the Bukulja–Venčac crystalline is completely different from the metamorphic rocks of both the Drina–Ivanjica and Jadar developments and that it is made of metamorphosed Cretaceous deposits belonging to the Vardar Zone (BRKOVIĆ et al. 1980; MAROVIĆ et al. 2005).
The above-cited problems concerning the geologic composition of the Bukulja–Venčac crystalline are a challenge for further investigations directed toward new and better documented solutions. The results of one of these studies, which represent a contribution to a better understanding of the Paleogene–Early Miocene tectonics of these regions, are presented in this paper.
(1) The Bukulja–Venčac crystalline is made of rocks of different degrees of metamorphism, mostly low-grade metamorphics and, to a smaller extent, medium- to high-grade metamorphics. These are mostly sedimentary rocks which were subjected to regional metamorphism and also to contact metamorphism in the vicinity of the granitoid. The lowest structural position is occupied by gneisses (and also leptynolites in places), which are followed by: micaschists, sericite schists, meta-quartz conglomerates, phyllites and sericite schists, marbles, calcschists, metacalcarenites and metasiltstones. Epidote-actinolite and chlorite schists also occur subordinately in the low-grade metamorphic complex. Rocks with a higher grade of metamorphism are found in the vicinity of the granitoid, while away from it, towards Venčac, low-grade metamorphics predominate. The Bukulja–Venčac crystalline is of Late Cretaceous or maybe partly even of Paleogene age. The second author (I.Đ.) is of the opinion that the Venčac domain of the crystalline is of Late Cretaceous age, while the rest of it is Paleozoic and resembles the Drina–Ivanjica Paleozoic. During these investigations, a rich microfloral association, indicating a Late Cretaceous age, was found in the calcschists and metacalcarenites of Venčac. This is in full agreement with the results on the crystalline age based on globotruncanas (BRKOVIĆ et al. 1980). However, this age most probably does not refer to the whole crystalline. Based on a lithostratigraphic correlation, FILIPOVIĆ (2005) is of the opinion that the metamorphic rocks west of Bukulja are similar to the Jadar Paleozoic, and thus that they are Devonian and Carboniferous in age.
(2) A Cretaceous sequence of non-metamorphosed deposits and serpentinite is exposed on the northern, eastern and southern slopes of the Bukulja–Venčac morphostructure. The Cretaceous sediments are represented by reefal and stratified limestones, rarely also by Early Cretaceous clastites and, for the largest part, by various types of carbonates, clastites and Late Cretaceous flysch (BRKOVIĆ et al. 1980). Smaller tectonic slices of serpentinite of Jurassic age occur locally near the Cretaceous sediments.
(4) A Neogene–Quaternary cover is represented by loosely bound coarse-grained, gravelly-sandy, clayey-sandy and clayey deposits. These are mostly fresh-water equivalents of the Ottnangian–Karpatian and, to a lesser extent, also marine deposits of the Badenian and Sarmatian. The highest stratigraphic level is represented by different types of Quaternary deposits.
Methodology of research
Geologic mapping of the Bukulja–Venčac crystalline (including the granitoid) and its non-metamorphosed cover of Late Cretaceous age provided information relevant for solving the tectonic setting of the area. These were data on bedding, foliation, folds of different scales and faults. They were analyzed within different scale ranges and homogeneous domains, and the obtained data were incorporated in a tectonic synthesis, together with knowledge of the lithostratigraphic units.
Particular attention was paid to the determination of the orientation of fault planes and the associated slip directions, which were used for the reconstruction of the paleostress and deformation phases manifested from the middle Paleogene to the beginning of the Miocene.
The reconstruction of the faulting succession and displacement was based on the criteria given by PETIT (1987) and GAMOND (1983, 1987). Reduced deviatoric paleostress tensors were computed for cogenetic fault populations separated from polyphase sets, based on field observations and kinematic compatibility. The method of numerical and graphical inversion proposed by ANGELIER & MECHLER (1977) and ANGELIER (1979, 1989) and the method of numerical dynamic analysis (NDA) by SPERNER et al. (1993) were used. Computation of the data for paleostress analysis was performed using the Tectonic FP software (ORTNER et al. 2002).
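To make the inversion step more concrete, the following minimal sketch (written in R purely for illustration; the study itself used the Tectonic FP package) shows the core idea of the right-dihedra method of ANGELIER & MECHLER (1977). The two fault-slip data are hypothetical, and each fault is assumed to be already reduced to a unit plane normal and a unit slip vector; this is not a reimplementation of the software used here.

```r
# Minimal sketch of the right-dihedra idea, assuming hypothetical fault-slip data.
# Each fault is given by the unit normal n of its plane (pointing up, toward the
# hanging wall) and the unit slip vector s of the hanging-wall block. A trial
# direction u is compatible with sigma-1 when it lies in the compressional
# dihedron of a fault, i.e. when (u.n)(u.s) <= 0.
# Coordinates: x = east, y = north, z = up.

# two hypothetical normal faults dipping 60 degrees toward E and N, slip down-dip
normals <- rbind(c(0.866, 0.000, 0.500),
                 c(0.000, 0.866, 0.500))
slips   <- rbind(c(0.500, 0.000, -0.866),
                 c(0.000, 0.500, -0.866))

# grid of trial directions (trend clockwise from north, plunge below horizontal)
grid <- expand.grid(trend = seq(0, 355, by = 5), plunge = seq(0, 90, by = 5))
tr <- grid$trend * pi / 180
pl <- grid$plunge * pi / 180
u  <- cbind(sin(tr) * cos(pl), cos(tr) * cos(pl), -sin(pl))

# fraction of faults whose compressional dihedron contains each trial direction
score <- apply(u, 1, function(d) mean((normals %*% d) * (slips %*% d) <= 0))

# directions with score close to 1 are candidate sigma-1 orientations,
# close to 0 candidate sigma-3 (ties are expected with so few faults)
grid[which.max(score), ]
grid[which.min(score), ]
```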
Structural features
In a structural sense, three large homogeneous domains can be distinguished within the research area: (1) the Bukulja–Venčac crystalline, (2) the thrust-fold sequence of non-metamorphosed Cretaceous deposits with tectonically incorporated slices of serpentinite and (3) the Neogene basins. The first two structural domains are discussed in this paper, because they resulted from the Paleogene–Early Miocene deformations which were the subject of the research.
The structural setting of the Bukulja–Venčac crystalline is very complex, with a polyphase deformation history and at least two phases of folding. The area is dominated by a large (Dkm-scale) brachial antiform structure, the hinge of which plunges toward ESE. The best-developed fabric element is foliation, which actually defines this antiform (Fig. 2A). Foliation is unevenly developed: it is best developed in gneisses and micaschists, less present in phyllites, sericite schists and calcschists, and poorly developed in metacalcarenites, metasiltstones and "massive" marble.
The foliation is probably the result of flattening perpendicular to the foliation planes. Isoclinal intrafolial folds of cm and dm scale are indicators of shearing along the foliation. They are particularly well visible in the metacalcarenites of Venčac and, locally, also in quartz-sericite schists (Fig. 3). The folds are mostly rootless and represent thickened hinge zones, while their limbs are strongly flattened and sheared. These folds are west-northwest-vergent, with fold axes plunging toward NNE and SSW (Fig. 2B). Crenulations of the foliation are noticed locally. The crenulation axes plunge toward south-southeast to southeast and northwest, and they are genetically related to the formation of the brachial antiform (Fig. 2C). The foliation and the intrafolial rootless folds could have been formed in an almost horizontal position. All this indicates refolding in the Bukulja–Venčac crystalline.
Foliation is developed in the granitoid as well. It has a periclinal distribution (Fig. 2D) compatible with the foliation in the surrounding crystalline. The thrust-fold stack of non-metamorphosed Cretaceous sediments with tectonically incorporated slices of serpentinite also has a very complex structure. Today, this unit is preserved within several small, more or less homogeneous structural regions on the northern, eastern and southern rims of the Bukulja–Venčac antiform. The structure is dominated by bedding and faults. Bedding planes are well exposed and penetrative.
Terrains on the northern slopes of Mt. Venčac are composed of non-metamorphosed deposits of Cretaceous age. Despite the fact that a large part of the area is covered with deluvium, a lot of information was acquired for the fault analyses.
On diagram F (Fig. 2), poles to bedding are mostly concentrated in the NW quadrant, marking a monoclinal dip toward the southeast. However, field investigations showed that the folds in this area are not simple; rather, it is a folded unit with normal and overturned limbs of NNW (NW) vergent folds, similar to the folds of the first generation in the underlying Bukulja–Venčac crystalline, only less developed and with less strain. Cretaceous deposits north of Bukulja are identically deformed (Fig. 2E).
East of Venčac, there is an intensely tectonized zone in the Cretaceous deposits and serpentinite. Unfortunately, this area is mostly covered, with no outcrops of Cretaceous deposits, thus a comprehensive measurement of the bedding attitude could not be performed. According to data from the wider surroundings (BRKOVIĆ et al. 1980), the area is characterized by a thrust-fold pattern marked by west-vergent recumbent folds and reverse faults, developed under dextral transpression.
Terrains made of non-metamorphosed Cretaceous deposits on the southern and southwestern slopes of Venčac are mostly covered with deluvium and are unfavorable for structural investigations. The scattering of the bedding data presented on diagram G (Fig. 2) is probably a consequence of the rotation of faulted blocks, but also of the small number of measurements, which are statistically not representative. Field observations showed that the Cretaceous deposits here are also intensely folded, with the occurrence of overturned west-northwest-vergent folds.
Results of paleostress analysis
Paleostress analysis in the area of the Bukulja–Venčac crystalline, the non-metamorphosed Cretaceous deposits and the granitoid shows three kinematic stages, the first probably being of Middle Paleogene, the second of Oligocene to Oligocene–Miocene and the third of Early Miocene (Pre-Ottnangian to Karpatian) age. The relative chronology of these events is deduced from crosscutting map-scale faults in key outcrops.
Deformational event (D1) – E–W compression
This paleostress tensor group comprises a conjugate pair of NW-trending sinistral and NE-trending dextral strike-slip faults (Fig. 4). These faults are overprinted by mainly extensional structures on numerous outcrops.
Folds of the first generation, with NNE (NE)–SSW (SW) striking axes, probably originated in such a stress field. Today, they are exposed as intrafolial folds in the Bukulja–Venčac crystalline, as well as WNW (NW) vergent folds in the non-metamorphosed Cretaceous deposits.
Deformational event (D2) – N–S to NE–SW extension
The second paleostress tensor group comprises WNW- to NW- and NE-trending normal faults (Fig. 5). These faults are probably related to the Oligocene unroofing of the Bukulja–Venčac crystalline and the granitoid intrusion. In this case, the WNW- to NW-trending normal faults often form conjugate sets: synthetic, gently dipping northwards, and antithetic, with steeper dips toward the south. They were formed above the brittle-ductile detachment zone along which the extensional unroofing occurred.
Deformational event (D3) – wrench tectonic regime, NE–SW compression and NW–SE tension
The third paleostress tensor group comprises NNW- to NW-trending dextral and WNW-trending sinistral strike-slip faults (Fig. 6). Fault systems with these kinematic characteristics, which originated in a stress field with NE–SW compression and NW–SE tension, can be
Discussion and Conclusions
Investigations in the area of the Bukulja–Venčac crystalline showed the following: • The Bukulja–Venčac crystalline is of Late Cretaceous age, maybe even partly Early Paleogene. It was intruded by an Early Miocene granitoid.
• The crystalline is overlain mostly by Late Cretaceous non-metamorphosed clastic-carbonate rocks and flysch.
• Metamorphic grade in the crystalline decreases from the granitoid to the periphery and toward the upper structural levels, where there is a gradual transition into non-metamorphosed members of the Late Cretaceous.
• There is a similar manner of folding (fold shape, vergences) in both sequences of Cretaceous deposits, the metamorphosed and the non-metamorphosed ones, but deformation in the crystalline is more intense and occurred in the ductile domain. Two phases of folding are noticed.
• Reconstruction of the paleostress fields points to three major phases of brittle deformation: in the middle of the Paleogene, in the Oligocene–Early Miocene and in the Early Miocene.
The above presented facts point to a unique tectono-sedimentary environment in this area during the Late Cretaceous (maybe also in the ?Early Paleogene), which was inverted in the middle of the Paleogene. Such an environment is consistent with the model elaborated by PAMIĆ (1993) and PAMIĆ et al. (2000, 2002), according to which the northern part of the Vardar Zone (Vardar–Sava) is the result of obliteration in the Upper Cretaceous–Paleogene active continental margin of Southern Europe, with well-defined island arc and back-arc basins. This sedimentation area was inverted and included into the Dinaridic orogen by collisional processes in the Eocene. According to PAMIĆ et al. (2000, 2002), this phase was followed by intense deformation of the Jurassic ophiolitic mélange, metamorphism and magmatism.
The Bukulja–Venčac sedimentation and deformation area (Fig. 7) was probably generated in a similar tectonic setting. In the middle of the Paleogene, the Bukulja–Venčac area was subjected to shortening in the approximate E–W direction, when a thick, WNW-vergent thrust-fold sequence was formed. The second author (I.Đ.) is of the opinion that these structures were formed only in the Venčac domain of the crystalline, while, in its other parts, the Hercynian structures were refolded by a Mesozoic–Cenozoic tectonic event. The lower parts of the sequence reached the zone of ductile deformation and underwent regional low- to medium-grade metamorphism. The whole process was followed by the formation of tight and isoclinal folds with hinges striking NNE (NE)–SSW (SW), with a strong axial-plane cleavage, subsequent transposition of bedding along the cleavage, and the formation of foliation. Remnants of these folds are preserved today as intrafolial folds.
In the brittle-ductile and brittle domains, above the metamorphites, this phase of tectogenesis resulted in the formation of distinctly WNW (NW) vergent overturned, sometimes also recumbent, folds with axes striking NNE (NE)–SSW (SW), and in the formation of conjugate NW-trending sinistral and NE-trending dextral strike-slip faults.
Extension, probably ductile, followed by the intrusion of the granitoid, volcanism and exhumation of the Late Cretaceous metamorphics (metamorphic core complex), is characteristic of the second phase, which was expressed in the Late Oligocene and into the Early Miocene.
The process of exhumation of the metamorphics and emplacement of the granitoid was marked by refolding of the foliation and the previously formed folds, when the distinct brachial antiform of Mts. Bukulja and Venčac (with an ESE-plunging axis) was formed. There are certain indications that a shallow rim synform, which is presently mostly buried beneath Neogene–Quaternary deposits, was formed northeast of the antiform.
Unfortunately, the detachment zone along which the ductile extension occurred has not been defined, which certainly does not mean that it does not exist. Further detailed investigations are necessary for its determination.
In the brittle domain in the area of the extensional allochthon, WNW- to NW- and NE-trending normal faults were activated, often as pairs of synthetic and antithetic sets.
After the exhumation of the metamorphic core complex of Bukulja and Venčac, tectonic shortening affected the area. It is expressed through dextral transpression with NE–SW compression and NW–SE tension. The activation of NNW- to NW-trending dextral and WNW-trending sinistral strike-slip faults is characteristic of this phase. In the domain of the first system, small NE-trending normal faults (probably "pinnate" faults) were activated. Under transpressional conditions, west-vergent folds and thrusts were formed, particularly on the eastern periphery of Venčac. This transpressional event affected the Vardar Zone, the Serbian-Macedonian Unit and the Carpatho-Balkanides, all the way to the Moesian Plate (wrench corridor, MAROVIĆ et al. 2001).
The process of destruction of the previously formed structures, related to the shaping of the Pannonian Basin and its periphery, commenced after the transpressional events, from the Ottnangian onward.
The performed investigations highlight problems which demand more detailed research and the application of new methods in order to obtain more reliable and precise solutions. This refers, in the first place, to the necessity of performing detailed structural investigations and registering kinematic indicators of extensional processes and stress fields in general. Particular attention should also be paid to an explanation of the manner of extensional unroofing, transpressional tectonics and related phenomena.
Fig. 1. Geologic sketch of the wider area of Bukulja and Venčac.
Fig. 2. Equal-area lower-hemisphere stereograms of: A, foliations in the crystalline; B, B-axes of intrafolial folds; C, crenulation lineation; D, foliations in the granitoid; E, bedding in Cretaceous deposits north of Bukulja; F, bedding in Cretaceous deposits northwest of Venčac; G, bedding in Cretaceous deposits south of Venčac. Spheristat software was used for the analysis.
Fig. 4. Distribution of stress states related to an E–W compressional event. Stereographic projection of the measured outcrop-scale faults and calculated stress axes. The circle, rectangle and triangle represent the orientations of the maximum, intermediate and minimum principal stress axes, respectively.
Fig. 5. Distribution of stress states related to an extensional event with N–S- to NE–SW-trending σ3. Explanation the same as for Fig. 4.
Fig. 6. Distribution of stress states related to a dextral strike-slip regime with σ1 trending NE–SW. Explanation the same as for Figs. 4 and 5. | 4,534.6 | 2007-01-01T00:00:00.000 | [
"Geology"
] |
Fabrication and photosensitivity of a ZnO/CdS/silica nanopillar-based photoresistor
A zinc oxide (ZnO)/cadmium sulfide (CdS)/silica nanopillar structure is fabricated into a photoresistor for the first time. A silica wafer with countless nanopillars is used as the substrate for the photoresistor. CdS and ZnO films are deposited by radio frequency (RF) magnetron sputtering onto the silica nanopillar surface to form the ZnO/CdS/silica nanopillar structure. ZnO nanowires can also be grown on the CdS surface by hydrothermal reaction to form ZnO nanowire/CdS/silica nanopillar structures. The X-ray diffraction curves show that the ZnO and CdS films on both the planar silica and the nanopillar silica surfaces are well crystallized. The ZnO film deposition can reduce the reflection and increase the absorption of the incoming light, especially for light with a wavelength of less than 350 nm. In addition, the ZnO/CdS heterojunction structure can retard the recombination of the photon-generated carriers and prolong their lifetime, leading to a remarkable photocurrent improvement. The photosensitivity testing results show that the ZnO nanowire/CdS/silica nanopillar structure has the best photosensitivity response of 135 under white light at 10 mW/cm² illumination.
Introduction
Cadmium sulfide (CdS), with a bandgap of 2.4 eV, is an important II-VI compound semiconductor, and it has excellent optoelectronic properties in the visible light region, so it is widely used in electronic and optoelectronic devices such as solar cells [1,2], light-emitting diodes [3], photoresistors [4,5], and so on [6,7]. For a CdS-based photoresistor, when it is exposed to light of a certain wavelength, the resistance of the photosensitive material decreases sharply owing to the photon-generated carriers. When the light is removed, the resistance recovers gradually. For the photoresistor, many methods are adopted to increase the sensitivity, for example forming CdS nanostructures such as nanowires, nanorods, nanoflowers, and so on, to increase the sensitive surface [8][9][10].
It has also been proved that the photocurrent to dark current ratio of CdS-based photodetectors can be further enhanced through the formation of heterojunctions [11,12]. Zinc oxide (ZnO) is a functional semiconductor with a bandgap of 3.37 eV, and the conduction band and valence band of ZnO are 0.2 and 0.8 eV lower than those of CdS, respectively; therefore, ZnO/CdS heterojunctions might be a promising structure for visible-light photoresistors [13,14]. ZnO nanostructures have excellent optical and physical properties due to their high surface-to-volume ratio; therefore, different kinds of one-dimensional ZnO nanostructures have been prepared, such as nanowires [15], nanorods [16], nanoflowers [17], and so on. In previous studies, we reported the preparation and photosensitive properties of CdS films on silicon nanopillar surfaces, which proved that nanopillars with a high aspect ratio used as the substrate can improve the photosensitive properties of the CdS material [18,19]. We have also successfully fabricated ZnO nanowires on the Si nanopillar surface [20]. In our opinion, if a ZnO nanowire/CdS layer is fabricated on the silica nanopillar surface, on the one hand, the structure has the largest surface, and on the other hand, the ZnO/CdS heterojunction can improve the photosensitive properties of CdS. Therefore, the ZnO nanowire/CdS/silica nanopillar structure might be a promising structure for visible-light photoresistor applications.
In this work, we fabricated the ZnO/CdS heterojunction on a silica nanopillar substrate for photoresistor application. Firstly, we deposited a layer of CdS film on the silica nanopillar surface by radio frequency (RF) magnetron sputtering, and then deposited a layer of ZnO film by RF magnetron sputtering or grew ZnO nanowires on the CdS surface by hydrothermal reaction to form the ZnO/CdS heterojunction. This ZnO nanowire/CdS/silica nanopillar structure was used for a photoresistor application for the first time, and the testing results revealed that this structure can improve the photosensitive performance of the CdS-layer-based photoresistor for white light.
Experimental section
In this work, 2-inch polished (100) monocrystalline silicon wafers with a resistivity above 2000 Ω cm and a thickness of 400 μm are chosen as substrates for the photoresistors. The fabrication process is shown in Fig. 1. Firstly, the Si nanopillars are fabricated by cesium chloride (CsCl) self-assembly and dry etching, which is low-cost and suitable for mass production. The fabrication process of the Si nanopillars includes the following four steps: CsCl film deposition, development into nanoislands, ICP etching, and CsCl nanoisland removal, which are recorded in Refs. [21,22] in detail. Secondly, to avoid the effects of silicon on the heterojunctions, the Si wafer is heated at 900 °C for 1 h in an O2 atmosphere to form a layer of silica film on the Si nanopillar surface. Thirdly, a 150 nm thick CdS film as the photosensitive layer is deposited on the planar or nanopillar silica surface by radio frequency (RF) magnetron sputtering. The RF sputtering conditions are: a 99.99% pure CdS target, 1 × 10⁻⁴ Pa base pressure, 0.2 Pa working pressure, 20 sccm Ar, 40 W RF power, and 8 min deposition time. Fourthly, two methods are adopted to fabricate the ZnO film on the CdS film surface. In the first, a 70 nm thick ZnO film is directly deposited by RF magnetron sputtering with a 99.99% pure ZnO target, 1 × 10⁻⁴ Pa base pressure, 1 Pa working pressure, 20 sccm Ar, 120 W RF power, and 15 min deposition time. In the second, a 20 nm ZnO film is deposited by RF magnetron sputtering as the seed layer, and then ZnO nanowires are grown on the seed layer surface by the hydrothermal reaction method with 0.05 M zinc chloride and 0.05 M hexamethylenetetramine in 80 ml deionized water, kept at 95 °C for 30 min in a water bath [19]. Finally, Ti/Ag interdigitated electrodes of 100 μm width and 1 μm thickness are deposited onto the surface by the thermal evaporation method to complete the photoresistor.
Six different kinds of structures are fabricated into CdS-based photoresistors for comparison, as shown in Fig. 2. Sample A is a CdS layer on the planar silica surface, sample B is a ZnO/CdS/planar silica structure, sample C is a ZnO nanowire/CdS/planar silica structure, sample D is a CdS/silica nanopillar structure, sample E is a ZnO/CdS/silica nanopillar structure, and sample F is a ZnO nanowire/CdS/silica nanopillar structure. Samples A–C are different structures on the planar silica surface used to study the effect of the ZnO/CdS heterojunction, while samples D–F are the structures on the silica nanopillar surface, which can be compared with samples A–C to study the effect of the nanopillars on the optical and electrical properties.
The CdS and ZnO films are deposited by a magnetron sputtering system (JGP-450, made by SKY Technology Development Company, Chinese Academy of Sciences). The Ti/Ag electrodes are deposited by a vacuum coating machine (DM-300B).
Results and discussion
The morphologies of the ZnO/CdS/silica structures are examined by scanning electron microscopy (SEM, Hitachi S-4800), shown in Fig. 3. Figure 3a, c, e, g and i show the samples on the polished wafer surface, while Fig. 3b, d, f, h and j show those on the nanopillar wafer surface. Figure 3a and b show the polished Si wafer and the Si nanopillars after oxidation; the nanopillars have an average diameter of about 200 nm and a height of 1 μm. After RF magnetron sputtering, the CdS layer on the planar silica surface is about 150 nm thick and its thickness is uniform, as shown in Fig. 3c. From Fig. 3d, the CdS layer fabricated by RF magnetron sputtering deposition covers both the top and the side-wall of the silica nanopillars tightly, and its thickness is slightly lower than that on the planar surface. After the ZnO film deposition, the film thickness on the planar silica surface increases to 220 nm, as shown in Fig. 3e, which reveals that the ZnO layer is about 70 nm thick. Figure 3g and h show the ZnO nanowires grown on the CdS surface; the nanowires on the planar silica surface are about 50 nm wide and 500 nm high, while the ZnO nanowires grown on the silica nanopillar surface are about 20 nm wide and 200 nm high. The growth rate of the ZnO nanowires on the planar surface is much higher than that on the silica nanopillar surface [22]. The reasons are that, on the nanopillar surface, the seed layer is thinner than that on the planar surface, and the cramped space between the silica nanopillars limits the growth of the ZnO. From Fig. 3g–j, the ZnO nanowires are thick both on the planar and on the nanopillar silica surfaces.
The crystal structure of the ZnO nanowire/CdS/silica nanopillar structure is determined by X-ray diffraction (XRD) using a Rigaku diffractometer with Cu Kα X-rays from 20° to 65° and a scan step of 0.02°, and the results are recorded in Fig. 4.

The reflectivity of the ZnO nanowire/CdS/silica nanopillar structures is measured by an ultraviolet-visible-near-infrared spectrophotometer (Agilent Cary 5000) from 200 to 800 nm. For the planar silica surface in Fig. 5a, the bare silica wafer has a high reflectance, above 45%, over the whole wavelength range. After the CdS layer deposition, the reflectivity shows a sharp reduction at wavelengths below 600 nm owing to CdS absorption. With the ZnO film covering, the reflectivity for wavelengths from 200 to 380 nm becomes much lower, as this light is absorbed by the ZnO. After the ZnO nanowire growth, the reflectivity is suppressed to below 15% for wavelengths from 200 to 600 nm. On the silica nanopillar surface in Fig. 5b, the reflectivity over the whole wavelength range is below 15%. For the bare silica nanopillar surface, the reflectivity for wavelengths from 300 to 800 nm is lower than 8%, while for wavelengths of 200-300 nm the reflectivity is much higher. After the CdS layer covering, there is a slight increase in the reflectivity at wavelengths of 300-800 nm, and there is a low reflectivity for wavelengths from 200 to 300 nm. With the ZnO nanowires grown, the reflectivity is lower than that without ZnO nanowires, especially for wavelengths below 400 nm. Comparing Fig. 5a and b, the reflectivity of the structures on the nanopillar-based wafers is much lower than that on the planar silica wafer surface. The lower reflection of the nanopillar arrays can be attributed to the highly rough surface of the nanopillars, which leads to strong light scattering, so that most of the incoming light is absorbed after multiple reflections in the forest of nanopillar arrays. The CdS material can absorb incoming light with energy of more than 2.4 eV (corresponding to a 517 nm wavelength), and the ZnO film can absorb incoming light with energy of more than 3.37 eV (corresponding to a 368 nm wavelength). The ZnO nanowires with their large surface can reduce the reflection further; therefore, the ZnO nanowire/CdS/silica nanopillar structure has a low reflection.
The photosensitive properties of the ZnO/CdS/silica-based photoresistors are tested by a homemade instrument, including a dark box, a white light source, and a resistance meter [23]. The illumination of the testing environment in the dark is 1 μW/cm², while that under white light is 10 mW/cm². When the light comes in, the CdS material absorbs the photons, electron-hole pairs appear, and the resistance declines instantaneously. When the light shuts off, the electron-hole pairs recombine gradually and the resistance increases gradually. The photosensitivity response (S) is defined as the resistance of the photoresistor in the dark divided by the resistance when exposed to light. The photosensitivity performance is recorded in Fig. 6 and Table 1, and we can see that all the ZnO/CdS/silica based photoresistors have obvious photosensitivity. When the light turns on or off, the resistance declines or increases sharply, without delay, for all the photoresistors. Take the photoresistors with only the CdS layer as an example: on the planar silica surface, the resistance under 10 mW/cm² white light is 18 kΩ; when the illumination of the environment changes to 1 μW/cm², the resistance restores to 1.35 MΩ, so the response is 1.35 MΩ / 18 kΩ = 75. With the same calculation method, the response for the CdS layer on the silica nanopillar surface is 2.1 MΩ / 23 kΩ = 91, which is obviously higher than that of the planar-based one. After 70 nm of ZnO covers the CdS film, S increases to 105 and 125, respectively; when the ZnO nanowires are grown, the photosensitivity response further improves to 112 and 135, which indicates that the ZnO covering can increase the photosensitivity response of the CdS-based photoresistor both on the planar silica and on the silica nanopillar surface. On the one hand, the ZnO film can reduce the reflection and increase the absorption of the incoming light, especially for light with a wavelength of less than 350 nm. On the other hand, the conduction band and valence band of ZnO are 0.2 and 0.8 eV lower than those of CdS, respectively; when the light comes in, the photon-generated electrons of the CdS layer will be injected from the conduction band of CdS into the conduction band of ZnO, so the holes will transport along the CdS film, while the electrons will move along the ZnO layer. As a result, the heterojunction structure between CdS and ZnO can reduce the recombination probability of the excess photon-generated carriers and prolong their lifetime, leading to a remarkable photocurrent improvement. Comparing the photosensitivity performance of samples A to F, we can see that the nanopillar-based photoresistors have a higher response than the polished-silica-based ones. The advantages of the nanopillar-based wafer are that: (1) the nanopillar substrate has a larger surface ratio, and is therefore covered by a larger area of sensitive material than the planar one; (2) the nanopillar substrate has a larger surface
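As a small numerical check of the response values quoted above, the sketch below recomputes S = R_dark / R_light from the resistances reported for samples A and D; R is used here purely as a calculator, and the other samples' responses are taken directly from Table 1.

```r
# Photosensitivity response S = R_dark / R_light, using the resistances
# quoted in the text for sample A (planar) and sample D (nanopillars)
r_dark  <- c(A = 1.35e6, D = 2.1e6)   # ohms, measured at 1 uW/cm^2
r_light <- c(A = 18e3,   D = 23e3)    # ohms, measured at 10 mW/cm^2 white light
S <- r_dark / r_light
round(S)   # A = 75, D = 91, matching the values reported above
```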
Conclusions
In this work, for the first time, a ZnO/CdS heterojunction on a silica nanopillar substrate is fabricated for photoresistor application. We fabricated the ZnO film and the ZnO nanowire/CdS heterojunction structure on both the polished silica wafer and the nanopillar silica surface for comparison. The XRD curves show that both the CdS and ZnO structures on the polished silica and silica nanopillar surfaces are well crystallized. After the ZnO covering, the reflectivity is reduced, especially for wavelengths of less than 350 nm. With the ZnO nanowires grown, the reflectivity is much lower than that without ZnO nanowires. No matter whether with or without the ZnO or CdS film, the reflectivity of the silica nanopillar wafer is much lower than that of the planar surface. After the ZnO/CdS/silica nanopillar photoresistor fabrication, the photosensitive properties are tested, and the results show that the ZnO/CdS structure can improve the photosensitive performance and that the silica nanopillars can increase the photosensitivity response. The ZnO nanowire/CdS/silica nanopillar structure has the best photosensitivity response of 135 under white light at 10 mW/cm² illumination, which proves that this structure can be used for a photoresistor with a high response. | 3,437 | 2021-02-02T00:00:00.000 | [
"Materials Science"
] |
Initial Coin Offerings and Agile Practices
An ICO (Initial Coin Offering) is an innovative way to fund projects based on blockchain. The funding is based on the selling of tokens by means of decentralized applications called smart contracts, written in Solidity, a programming language specific to the Ethereum blockchain. ICOs work in a volatile context, and it is crucial that the team is capable of handling constant changes. Agile methods, proven practices for developing software in the presence of changing requirements, could be a means for managing this uncertainty. The main goals of this work are to understand the software engineering activities related to ICOs, to recognize the ICOs developed using Agile methods, and to make a comparison between ICOs and Agile ICOs. In addition, we perform a deeper analysis of Agile ICOs concerning project planning, software development, and code features. Our work shows that the roles of the people involved in an ICO can be compared to the typical roles of the SCRUM methodology. The majority of Agile ICOs use testing tools before storing the smart contract on the blockchain. Finally, the application of volumetric and complexity software metrics shows that the files of Agile ICOs are on average shorter and less complex than those of other smart contracts.
Introduction
An ICO (Initial Coin Offering) is a new way to perform crowdfunding campaigns, based on blockchain technologies like Bitcoin and Ethereum [1][2][3]. It allows startups working on blockchain technology or applications to be financed on a global scale, directly and without intermediaries. With the aim of involving as many investors as possible, a startup creates and distributes its tokens. The token of the ICO can be developed through a smart contract, a computer program running on a public blockchain [4]. Smart contracts are the basis for the development of decentralized applications (DApps). Most smart contracts used for ICOs run on the Ethereum blockchain and are usually written in a programming language called Solidity. Although startups performing ICOs share many characteristics with software startups not based on blockchain technology, several factors make the software development context of ICOs unique. Consequently, the research presented in this paper aims primarily at understanding the main software development characteristics of ICO startups, from the planning phase to the testing phase, also taking into consideration the quality of the code. To this purpose, we analyzed the whole set of ICOs gathered from ICObench (available at http://icobench.com) and registered between August 2015 and 20 February 2018. We focused on their main features, such as: the team size and roles, in order to understand whether the team composition is consistent with the best-known software development methodologies; the rating provided by ICObench, in order to investigate the projects' quality; the use of social media, in order to discover whether ICO teams believe that communication with investors and their feedback are important; and the financial aspects related to motivating the people involved in an ICO.
The market of ICOs is extremely volatile and complex. ICO teams and investors should pay particular attention to the speed of changes and to the technological risks that characterize its evolution.
Furthermore, software startups in general, and even more so startups founded through an ICO, operate in conditions of great uncertainty, and the team's ability to manage changes is crucial. Software startups, therefore, need to follow the market trends and to adapt to ever-changing business activities and risks [5]. For this reason, in this work we investigated the use of Agile methodologies in ICOs as a means of managing uncertainty.
According to prior studies [6][7][8], Agile methodologies are optimal for projects that, like ICOs, show high variability in the development process, in the team or stakeholders' abilities, and in the technology being used. In particular, Agile development is especially appropriate for products or services providing high value for the customers and for all involved stakeholders [7]. Considering the decentralized nature of the blockchain, an ICO can have investors, customers, and other stakeholders from all over the world. In our work, we aim to understand if and how Agile practices are used in ICOs in response to such high technological, process, and market variability. To do this, we take into account the principles of Agile Software Development and study if and how Agile methodologies and practices are used in ICO development.
First, for each ICO registered on ICObench until 20 February 2018, we carried out a textual analysis of its online available documentation, in order to detect typical Agile keywords. In this way, we found a subset of ICOs developed with an Agile approach (for the sake of simplicity, named Agile ICOs in the following). We also made a comparison between the characteristics of Agile ICOs and the properties of the whole dataset in terms of team composition, rating, financial aspects, and use of social media.
In addition, we analyzed the set of Agile ICOs in more depth, specifically focusing on their roadmaps and project development. We conducted an analysis of the smart contract [9] source code of the Agile ICOs in terms of code metrics and use of test tools.
To summarize, the main contributions of this paper are: • Understanding the main characteristics of ICOs, investigating the software engineering activities related to ICOs, recognizing the ICOs developed using Agile methods, and making a comparison between the characteristics of Agile and non-Agile ICOs.
• Providing a deep analysis of the Agile ICOs in terms of their project planning, software development, and source code.
The remainder of the paper is organized as follows. Section 2 describes the most relevant related work on ICOs and on Agile methodologies. Section 3 presents the research method that we followed in our study; this section describes the steps which allowed us to define our dataset, to find the Agile ICOs, and to perform our analysis. Section 4 describes the results of the analysis of the ICO dataset, including the comparison between the results obtained for the ICOs in general and the results obtained for the Agile ICOs. These results include the team composition, the rating, and the financial aspects. Section 5 analyzes the Agile ICOs in depth; this section includes the results of the analysis of the ICO roadmaps, of the software projects, and of the smart contracts. Section 6 provides a discussion of the outcomes of our analysis. Finally, Section 7 concludes the paper.
Related Work
Research literature on blockchain in general, and on ICOs in particular, is limited to the last few years [10]. The principles described in the Agile Manifesto [11] are the basis for most Agile methodologies. Agile methodologies were used in Sabrix, Inc. (San Ramon, CA, USA) [12], a startup software company that, to accommodate the pressing demands of users, exploited urgency as the main engine for product development, which allowed the startup to switch from an initial chaotic management to the correct implementation of a software product, with team and product progressing simultaneously. In order to verify whether Agile practices are applied in software startups, [13] performed a survey involving 1526 software startups, with questions related to five Agile practices, including regular refactoring, agile planning, frequent releases, and daily standup meetings. The survey was focused on the relationship between velocity and quality in Agile practices, and they discovered that the software startups favored velocity over quality. Ref. [14] claims that startups run the risk of failure and of quickly being out of business if some engineering practices are not used. They studied the software development strategies employed by startups and found that it is necessary to reduce time-to-market, speeding up product development using users' feedback. Ref. [15] provides an exploration of the state of the art of software startup research and specifies which software engineering practices must be chosen to increase the ability of startups to survive under highly uncertain conditions. Recently, Ref. [16] proposed a unified and multidimensional framework to represent together the role, active or passive, of digital startups with respect to change, and the level of dynamism of the environment. According to the results of this exploratory study, Lean Startup Approaches (LSA) are strongly related to the use of Agile development methodologies. In addition, startups oriented to have an active role in determining changes use the approach called business model innovation.
Ref. [17] shows that the Agile and Lean Startup methodologies can be compatible and complementary. Agile methodologies drive software development, whereas the Lean Startup methodology is more oriented towards the development and management of the business and of the product. The blockchain technology is an invention that led to a high dynamism in several business areas [2]. It gave rise to the definition of a new branch of software engineering called "Blockchain-Oriented Software Engineering" [18]. According to [19], the blockchain technology, and in particular the invention of digital tokens, has the potential to create a new entrepreneurial landscape, representing, on the one hand, the opportunity to invest in early-stage projects and, on the other hand, the opportunity for startups to fund their projects in a more democratic way. These opportunities represent the core of the Initial Coin Offerings (ICOs) phenomenon, the subject of this study.
Ref. [20] presents a model that rationalizes the use of an ICO for the launch of a peer-to-peer platform that still needs to be built. This work highlights two strategies that can generate value: a coordination model among the many subjects involved in a peer-to-peer network, and the use of "popular wisdom", that is, the analysis of information available on the web and posted by users or by other stakeholders that describes the quality of the platform.
The possibility of using an ICO as a fundraising tool to finance business and technology initiatives directly and without intermediaries was analyzed in [21]. In this work, the Lean Startup methodology is analyzed as a tool to face the main critical aspects of a startup, and some ICOs based on the lean startup approach are examined.
A first overview of ICOs was made in [22]. They examined all ICOs published in 2017 on the icobench.com website in order to evaluate their quality and software development management and to discover the features that can influence ICO success. A similar analysis is described in [23]. According to this work, the success factors of an ICO originate in the process behind the organization of the ICO. Another success factor is the quality of the services provided to the investors who buy the tokens.
The main problem of an ICO is the capability of investors to make a distinction between a genuine fundraising activity and a scam. Ref. [24] analyzed this phenomenon, collected information from specific websites, and categorized the ICOs in order to identify their key success factors. Other issues are related to the legal aspects of an ICO. Another aspect to be managed is the possibility of changes in ICO legal regulation. The rapid explosion of the ICO phenomenon has generated some legislative loopholes. In [25], the lack of clear rules for the accounting of ICO funds in the company balance sheet is pointed out. In [26], it is described how ICOs are regulated in different countries. At the moment, there are no uniform ICO regulation standards.
Research Method
The phenomenon of crowdfunding based on cryptocurrency is very recent. This study is the first in the literature on this subject and has an exploratory nature. We analyzed all ICO records on ICObench (https://icobench.com) until 20 February 2018. This is a free ICO rating platform that collects data about thousands of ICOs. It provides the ICObench Data API (an Application Programming Interface described at https://github.com/ICObench/data-api), allowing developers to get the information stored in the platform, including ICO listings, ratings, and stats. The research approach is composed of two main parts: the data collection and analysis of all ICOs registered on ICObench until 20 February 2018, and the selection, through appropriate keywords, of the ICOs to be specifically analyzed because they are managed with an Agile approach.
Data Collection
We divided the data acquisition process into different steps, as described below. To collect the data related to ICOs, we used the ICObench Data API, introduced on 12 December 2017 (according to https://medium.com/@ICObench/icobench-2017-in-numbers-d987b0a280d0), to study the list of all ICOs, the list of ICOs by search parameters and filters, the list of all ICO ratings, all the information in the ICO profile, and other statistics. The ICObench Data API has already been used in other major studies [27][28][29]. We called the extraction of this data "Step 1".
To identify which ICOs exhibit an Agile approach, we established a search string with pre-defined keywords which we will introduce in Step 2.
Step 1
We collected the ICO data from the specialized website icobench.com. To date, this website provides a useful API for developers and data scientists who want to create new applications or to analyze the ICO phenomenon. For the API specification, see https://icobench.com/developers, accessed on 16 July 2018.
In particular, this API needs user authentication and uses the HMAC method with the SHA384 algorithm to authenticate the query. The data provided are in JSON format. In order to acquire the full available data, we used the POST request called "ICO-Profile", which we sent for all ICO ids. The "ICO-Profile" request has the URL https://icobench.com/api/v1/ico/id|url.
We implemented in R [30] a procedure to automate the connection to the API, to call the request, and to organize and save the ICO data in the R list data type. We collected the data of the first 1952 ICOs recorded in the icobench.com database until 20 February 2018. We discovered that 115 of them had no available data. The data of the acquired ICOs occupy about 50 MB of memory.
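A minimal sketch of one such authenticated request is shown below, assuming the httr, openssl and jsonlite packages. The header names and the exact signing rule are illustrative assumptions based on the description above (an HMAC-SHA384 signature of the request payload); the official specification at https://icobench.com/developers is authoritative.

```r
# Hedged sketch of one "ICO-Profile" request; credentials and header names
# are placeholders, not the real API contract.
library(httr)
library(openssl)
library(jsonlite)

public_key  <- "YOUR_PUBLIC_KEY"
private_key <- "YOUR_PRIVATE_KEY"

fetch_ico_profile <- function(ico_id) {
  body <- "{}"                                    # empty JSON payload
  sig  <- sha384(body, key = private_key)         # keyed hash (HMAC-SHA384)
  resp <- POST(
    url  = paste0("https://icobench.com/api/v1/ico/", ico_id),
    body = body,
    add_headers(`Content-Type`   = "application/json",
                `X-ICObench-Key` = public_key,
                `X-ICObench-Sig` = as.character(sig))
  )
  fromJSON(content(resp, as = "text", encoding = "UTF-8"))
}

# profiles <- lapply(1:1952, fetch_ico_profile)   # loop over all ICO ids
```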
Each list item describes an ICO with up to 25 named sublists, which group hundreds of named values. We focused on five sublists: team, rating, finance, dates, and links. The team sublist includes the name, country, title, links to social media, and so on, for each team member. The rating sublist provides the ICObench evaluation vote of the ICO. The dates sublist includes the dates related to the timing of the ICO (opening, closing). The links sublist contains the URLs of the ICO official website and the link to the ICO white paper. Because of the importance of the ICO white papers, we wrote an R script to download all the available documents. We collected a total of 1144 readable PDF files. The white paper PDF files occupy 4.3 GB of memory.
Step 2
We identified the ICOs that apply Agile practices by searching for keywords in their white papers. A white paper is a comprehensive technical report describing the ICO product or service. We chose the keywords to look for in the following way.
• We used the Google Keyword Planner tool and obtained all the keywords associated with the main keyword "Agile methodology". In this way, we obtained a total of 700 keywords.
• We selected all keywords that have an average monthly number of searches on Google above 1000.
• We included keywords consisting of single words (for example "scrum") and their specifications covering at most one other word. For example, we selected the keyword "scrum programming" and excluded the keyword "scrum programming development" because it is implicitly included in the previous one.
Eventually, we obtained 90 keywords. We performed a textual analysis, by means of an R script, to verify in which white papers at least one of the 90 selected keywords was present. We analyzed all 1144 readable white papers. The script converts the PDF content into a text string. Subsequently, for each white paper, it counts the number of occurrences of each keyword in a case-insensitive mode. As a result, the script returns the table of keyword occurrences per ICO white paper. We identified 55 ICOs in which at least one of the 90 selected keywords is present. We then manually verified that each of these 55 white papers actually referred to an Agile software development mode. The obtained subset is about 5% of the total analyzed ICOs. For simplicity, in the rest of the document, we will call this subset Agile ICOs. It must be pointed out that in our classification we focused only on the ICO development approach, which can be Agile or not Agile. To be clear, in these two categories of approaches the use of the blockchain technology is the same, and the differences between Agile and non-Agile ICOs regard other aspects that we will discuss in the following. Figure 1 schematically describes the process that allowed us to identify the 55 ICOs.
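A minimal sketch of this keyword scan is given below, assuming the pdftools and stringr packages; the directory name and the short keyword list are placeholders for the real 1144 white papers and 90 keywords.

```r
# Hedged sketch of the white-paper keyword scan (illustrative paths and keywords)
library(pdftools)
library(stringr)

keywords    <- c("agile", "scrum", "sprint", "kanban")    # subset of the 90 keywords
whitepapers <- list.files("whitepapers", pattern = "\\.pdf$", full.names = TRUE)

count_keywords <- function(path) {
  text <- paste(pdf_text(path), collapse = " ")           # one string per document
  sapply(keywords, function(kw)
    str_count(text, regex(kw, ignore_case = TRUE)))       # case-insensitive counts
}

occurrences <- t(sapply(whitepapers, count_keywords))     # white papers x keywords
candidates  <- whitepapers[rowSums(occurrences) > 0]      # at least one keyword hit
```

Candidates selected this way still need the manual verification step described above, since a keyword match alone does not guarantee an Agile development approach.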
Analysis Setup
After the creation of the set of Agile ICOs, we compared them with the overall set of ICOs. We implemented specific R scripts to collect and analyze the information available in the ICO dataset. We collected data about the size of the team and its composition in terms of roles and gender. We obtained the gender of team members by means of a names database compiled by Mark Kantrowitz (available from http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/nlp/corpora/names/). We used R to classify the team members by gender, to perform statistical and comparative analyses, and to collect and analyze some of the business and financial information available in the ICO dataset, also including the rating and the use of social media.
Data Analysis
In this section, we report the results of the analysis. As described in the Research Method section, we organized the analysis in steps. In the following, we present the numbers, the distributions, and the statistical values that characterize the acquired ICOs, and in particular the Agile ICOs.
Analysis of ICO Teams
According to [31], one of the main factors that impact the sustained usage of Agile methods is the team composition, which should have the right balance in terms of technical skills, domain knowledge, team size, and gender, race, and culture.
Team Size and Composition
We analyzed a total of 18,699 people involved in ICO founding teams. We first analyzed the size of the team that develops each ICO. We considered the 1646 ICOs that declare at least one team member. We found that the mean size of the ICO team amounts to 11.4 people. According to the results of the Anderson-Darling normality test, the team size does not follow a normal distribution. The Anderson-Darling normality test, computed in R using the library nortest, produces a p-value < 2.2 × 10⁻¹⁶ (the distribution is normal if the p-value > 0.05 [32]). The maximum team size includes 67 people. These results are higher than the results reported in [21], where the average team size was 10.87 and the maximum team size was 57.
When we computed these statistics on the subset of 55 Agile ICOs, we found that the ICOs in this typology have larger teams. The average size is 14.9 people, even though the largest team includes 43 people. Also in this case, the distribution is not normal: the Anderson-Darling normality test provides a p-value = 0.0055, lower than the threshold. We then performed the Wilcoxon test [33], which is not based on distributional assumptions and therefore provides reliable results even when the data do not have a Gaussian distribution. Through the test, carried out using the implementation provided by R, we can see that the differences are significant. The p-value = 5.003 × 10⁻⁶ is lower than the threshold value α = 0.05. The empirical evidence is strongly contrary to the null hypothesis, and the observed differences are statistically significant.
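The following sketch summarizes the statistical workflow described above, assuming the nortest package; the two team-size vectors are illustrative placeholders, not the actual dataset.

```r
# Hedged sketch of the normality check and the distribution-free group comparison
library(nortest)

team_size_all   <- c(8, 12, 5, 20, 11, 9, 14, 7, 30, 10)   # hypothetical values
team_size_agile <- c(15, 18, 12, 22, 14, 16, 13, 19)

ad.test(team_size_all)       # Anderson-Darling normality test (reject if p < 0.05)
ad.test(team_size_agile)

# Wilcoxon rank-sum test: no Gaussian assumption, alpha = 0.05
wilcox.test(team_size_agile, team_size_all)
```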
This information has its own intrinsic importance. In an ICO, a very small and anonymous team increases the investment risk and may indicate a scam. It is therefore important for investors to be able to consult the information relating to each team member, both on the ICO website and on LinkedIn or Twitter. The greater the number of people involved, with a detailed description of each person supported by a link to the related social pages, the lower the probability that the ICO may be a scam.
An advisor is a domain expert (for instance an academic), an investor, or a consultant. We found that 4461 people involved in ICOs are advisors. As shown in Table 1, advisors represent 18.9% of the team composition, or, in other terms, there is nearly one advisor for every five people. In Agile ICOs, there are more advisors: on average, they represent over 23% of the team. According to [22], in a very large team there are sometimes many advisors who contribute suggestions but are not really involved in the ICO operations. We can consider advisors as a group of individuals with experience, able to provide credibility and value to the project, as long as they specifically work on the ICO and create added value. We can see that some advisors have their names on over 100 projects, each of which spans overlapping periods of time. Such an advisor cannot physically allocate enough time to a project to provide true value to their customers. We can compare the roles inside an ICO team with the typical roles described in the SCRUM methodology. In particular, in the SCRUM methodology the world is divided into "pigs" and "chickens" [34]. The former, during the project development, put their own "skin" in the game. All other stakeholders are spectators (chickens). The chickens may also be strongly interested in the project, but they do not work in a strict and direct way like the pigs. By analogy, in an ICO we can define team members as pigs, while advisors can be considered chickens. The advisors are consultants who can support the team if needed; often they are selected for marketing reasons and do not work directly on the development of the project.
Gender Heterogeneity
We investigated the gender heterogeneity in ICO teams. It is well known that a global gender gap exists in entrepreneurship, and in particular in technological startup founding teams. The female presence in startup teams is typically below 30%, which is the highest percentage recorded in the Chicago startup ecosystem [35].
For example, a recent survey [13] that examined about 1526 software startups discovered that only a very small percentage (8%) of their team members are female, compared to 76% male; the remaining 16% of team members did not reveal gender information.
ICO teams, too, consist predominantly of males. Our algorithm classified 9776 people; we found that the female presence is equal to 16.3%. Considering the average number of men and women per ICO founding team, we found that the number of men is about 5 times larger than the number of women. Considering the Agile ICO set, we noticed that the female presence is slightly lower than in the total dataset: these teams have 15% women, being composed on average of 6.5 men per woman. There is no team composed only of women.
In ICOs, however, the presence of women is twice that detected by [13] in software startups not based on an ICO. Ref. [36] shows that gender diversity plays a significant role when considering productivity and collaboration within a software development team, and these results could also apply to ICO teams. In addition, also in relation to investors, according to [37] the number of women interested in investments in cryptocurrencies currently represents 13% (one in eight women) of the total. This research also suggests that women invest very differently than men: the former take a much more strategic approach and are less affected by the "Fear of Missing Out" [38] than their male counterparts. The study also shows that women investors tend to collaborate much more than men, consulting family and friends about their investments before proceeding.
ICO Rating
The statistical analysis of the ICO Rating (a score provided by ICObench that summarizes the ICO reliability) allows us to compare the overall results with the subset of Agile ICOs. The Rating is a real number ranging from zero to five, computed by ICObench using a weighted average of two distinct evaluations. The first is computed by a proprietary assessment algorithm which considers the team composition, the ICO information, the product presentation, the marketing campaign, and the presence on social media. The second is assigned by experts who rate the ICO from 1 to 5 for team, vision, and product (as described in https://icobench.com/ratings). We found that ICO Ratings vary from 0.4 to 4.9.
On average, ICOs have a Rating equal to 3.1. Agile ICOs have, on average, a better Rating score: the average Rating of the 55 Agile ICOs is 3.6, and their minimum value is 2.1. Considering the different sizes of the samples analyzed (the first with 1646 elements, the second with 55), we performed a statistical analysis of the significance of the differences. We first verified that the values of the ICO Rating do not follow a normal distribution: the Anderson-Darling normality test produces a p-value < 2.2 × 10^−16, lower than α = 0.05. We then performed the Wilcoxon test, whose result shows that the differences are significant: the p-value = 2.476 × 10^−5 is lower than the threshold value α = 0.05. This result is strongly contrary to the null hypothesis, and the observed difference is statistically significant.
Social Media and Tools
We analyzed social media use in order to understand how ICOs use this channel to communicate with investors and customers. 1810 ICOs out of 1952 use at least one social medium: 1769 ICOs have at least one Twitter account and 1528 have a Facebook page; Telegram is used by 1231 ICOs, YouTube by 1112, Medium by 1069, Reddit by 812, GitHub by 796, Slack by 555, and Discord by 46 ICOs. All Agile ICOs communicate with investors and customers by means of social media: all of them have a Twitter account and 51 out of 55 have at least one Facebook page; Telegram is used by 42 Agile ICOs, YouTube by 39, Medium by 38, Reddit by 32, GitHub by 32, Slack by 14, and Discord by 3 ICOs. Given the decentralized nature of ICOs, the developers have to create a strong and active virtual community to support their projects. As reported in [39], social media became part of the standard communication tools in recent years. Telegram groups and Slack channels are used as tools through which interested parties can ask questions about the ICO. On the other hand, social networks are used by the team to share information and news, and to raise awareness about their cryptocurrency and ICO. According to [40], the communities of cryptocurrency users require transparent and reactive communication. According to Agile methods, a software product is a constantly evolving project in which the initial idea can be modified and adapted using the feedback provided continuously by users. External feedback at each stage allows the development of true competitive value in the product and services offered, promoting quality, efficiency, and trust. Given the decentralized nature of the blockchain, social media are the main channels through which an ICO communicates with users and investors. It is therefore not surprising that the use of social media is greater in Agile ICOs than in non-Agile ones, because they can enhance contacts and interactions between customers and the ICO's team, simulating the on-site customer practice.
Financial Aspects
A token is a digital asset that, in addition to having an exchange value, has an intrinsic value that derives from its use. ICObench describes the way in which the tokens are sold. The parameters reported on the website are: the number of tokens for sale, the percentage of tokens to be sold during the ICO, and the hard and soft capitalization, namely the goal of the ICO offer expressed in a reference currency and the minimum selling target to be reached to develop the product. The number of ICOs that provide this parameter is 1053. We discovered that over 60% of such ICOs provide more than 50% of their tokens to investors, the remaining tokens being managed by the team. In particular, about 44% of ICOs choose to distribute more than 60% of their tokens to investors during the crowdfunding. This set is not characterized by a normal distribution of values: applying the Anderson-Darling normality test we obtained a p-value equal to 6.404 × 10^−12, lower than the threshold α equal to 0.05.
Agile ICOs differ with respect to the general statistics. The 38 out of 55 Agile ICOs that provide this parameter distribute, on average, a lower percentage of tokens to investors during the crowdfunding. In particular, only 50% of these ICOs distribute more than 50% of their tokens during the crowdfunding, and only 23% of Agile ICOs assign more than 60% of their tokens to investors; the remaining tokens are managed by the ICO team. According to the results of the Anderson-Darling normality test, we can consider these values as a sample from a normal distribution (p-value = 0.06406 > 0.05). The result of the Wilcoxon test allows us to consider the differences between the two sets of data significant: the p-value is equal to 0.0262 and is lower than 0.05, so the null hypothesis is rejected and the two samples can be considered as taken from different populations. Figures 2 and 3 show the histograms of the number of ICOs per percentage of tokens to be sold during the ICO, for all ICOs and for Agile ICOs respectively.
ICO Market Capitalization
The market cap is the value of the tokens expressed in a reference currency. An ICO, through its hard cap (hard capitalization), sets the limit on how much money will be accepted to finance the product development; any excess money received is returned to the investors. The soft cap (soft capitalization) is the minimum amount required by the project in order to continue its development; if this amount is not reached, investors can withdraw their contribution. A crowdsale that reaches the soft cap is considered successful.
We found that 469 ICOs provide both the hard cap and the soft cap. Among these, the soft cap is on average 19% of the hard cap, so a crowdsale that reaches 19% of the hard cap is considered successful. Analyzing in particular the Agile ICOs, 25 out of 55 provide both the hard cap and the soft cap. In this case, on average, the soft cap is 25% of the hard cap, so in Agile ICOs only a crowdsale that reaches 25% of the hard cap is considered successful. Therefore, Agile ICOs need an initial capital proportionally higher than non-Agile ICOs to develop the project. No Agile ICO reached 100% of the hard cap (the maximum percentage is 85%).
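The soft-to-hard cap ratio can be computed directly from the two declared amounts; the R lines below are an illustrative sketch, with placeholder values in an assumed common reference currency:

```r
# Soft cap as a percentage of the hard cap (illustrative sketch, placeholder values).
soft_cap <- c(2e6, 5e6, 1e6, 8e6)     # assumed soft caps, same reference currency
hard_cap <- c(10e6, 20e6, 6e6, 40e6)  # assumed hard caps

ratio_pct <- 100 * soft_cap / hard_cap
mean(ratio_pct)  # average share of the hard cap needed for a successful crowdsale
```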
In this case, the differences found between Agile ICOs and the other ICOs are not significant. The result of the Wilcoxon test suggests that the two sets of data can be considered elements of the same population: the resulting p-value is equal to 0.05878, greater than the threshold value α equal to 0.05, and therefore the null hypothesis can be considered valid.
The distribution of the ratio (in percentage values) between the soft cap and the hard cap is shown in Figures 4 and 5.
Analysis of Agile ICO Projects
In this section we focus on the analysis of the software projects of Agile ICOs. We studied in particular two aspects. The first is the ICO roadmap, the typical step-by-step description with which the ICO proposers declare the development program of the proposed product or service. The second is the software development project, that is, the available repository of the ICO source files; this second aspect includes the analysis of the development tools and testing practices used by the developers of the ICO.
Roadmap and ICO State
As we said in the Data Analysis section, a team usually describes in the white paper the roadmap of activities after the crowdfunding, outlining the actions it aims to achieve during product development. Generally, this description is a simple graphical overview of the project's goals and deliverables, and of the related timeline. Investors look at the roadmap to learn when and how the business idea will become operative and profitable, and to understand the development phases of the product. We chose to identify the starting point of a roadmap as the time when the ICO crowdfunding closes and the team has gathered the money needed to develop the product. In the subset of 55 Agile ICOs, only 9 ICOs do not provide a roadmap. In the remaining 46 ICOs, the roadmap is also called milestones, timeline, or highlights. In these, the time period that the roadmaps cover ranges from a few months to over five years, as shown in Figure 6 and summarized in the following:
• only 18 ICOs present a roadmap longer than one year;
• the longest roadmap extends 72 months after the end of the ICO;
• the average duration of roadmaps is 16 months after the ICO end;
• in 2 cases, the roadmap is concluded with the closing of the ICO.
We define the post-ICO state as the number of months passed after the ICO period. Figure 7 shows the distribution of the state of ICOs in terms of the number of months passed after the end of the ICO selling phase.
The roadmaps therefore concern the future of the ICO and provide a realistic plan for the use of the funds in view of the objectives set. The content of the roadmap helps investors to understand when and how they are involved in the project, and provides them with a view of possible changes. A too detailed roadmap is typical of plan-driven methodologies and contradicts one of the fundamental principles of Agile methodologies, which aim to respond to change more than to follow a predetermined plan [41]. A roadmap developed with an Agile approach should take into consideration the possible difficulties that the team can meet and the way to deal with possible obstacles. It should therefore refer not only to the future features of a project, which are difficult to detail and predict accurately, but should also show the daily work and progress of the team, with a special focus on the feedback and opinions of the people involved. The roadmap is therefore a useful tool to promote the transparency of development, also in order to manage customer expectations. Focusing on specific features diverts attention from the general vision of the project. An Agile roadmap should therefore be able to embrace the inevitable changes and to communicate a short-term plan, but it must also include enough flexibility to allow this plan to be adjusted to the customer's value or to changes imposed by the market. Following [42,43], we define below some guidelines that characterize a roadmap designed with an Agile approach.
• The roadmap must be oriented towards objectives much more than towards the features to be developed, so that everyone involved can understand the evolution of the product.
• The creation of Agile roadmaps requires continuous communication within the team and with investors. It also needs to harness the effort of all the parties involved.
• In an ICO, it is essential to respond adequately to the needs of investors. Responding promptly to customers' needs through continuous dialogue is one of the essential characteristics of the Agile approach. The roadmap should therefore take into account all investors' feedback for possible improvements. New ideas, evaluated through a score, must be included in a future release backlog. The ideas of investors and customers should therefore guide the definition of future priorities.
• The roadmap should be changed quite frequently (every one to three months) in order to adapt the plans to the feedback obtained. Updating the roadmap can help a project to face changes without diverting attention from long-term goals.
Software Development
In order to better understand the process of software development of ICOs, for each of the 55 ICO projects selected in step 2 we examined the documentation to find the availability of software project repositories in which the development team stores and manages the software under development. We found that:
• 36 ICOs have a software project publicly available on the GitHub platform;
• 12 of the remaining ICOs published in the Ethereum blockchain explorer Etherscan (https://etherscan.io/) the Solidity code of the smart contract used to implement the token selling;
• 7 ICOs have neither a publicly available software project nor any smart contract Solidity file.
Summarizing, 48 out of 55 Agile ICOs provide at least the smart contracts used to develop the token of the ICO. These smart contracts are written in Solidity [44], which is the most popular high-level language for implementing smart contracts.
We analyzed the 32 Agile ICO software projects available on GitHub. In particular, we counted the number of repositories (the folders of the project), the types of files, and the number of Solidity files.
In summary, the 32 GitHub projects contain a total of 14,199 files. The total number of folders is about 2800. On average, each project contains 5.8 repositories, and the maximum number of repositories per project is 38.
Regarding the contents, source code files represent most of the files present in ICO software projects. In particular, .js files (JavaScript) dominate the scene with 5,015 files, equal to 35.32% of the total. The projects contain 786 smart contracts written in Solidity (.sol), equal to 5.54% of the total.
Graphics file formats (such as png and svg) are the second most frequent kind of files. Table 2 summarizes the number of occurrences of the ten most common file types found in Agile ICO software projects.
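A minimal sketch of this kind of inventory, written in R and assuming the cloned repositories live under a local directory (the path below is hypothetical), could look as follows:

```r
# Count file extensions across locally cloned Agile ICO repositories (illustrative).
files <- list.files("ico_projects", recursive = TRUE)   # assumed local clone directory

ext <- tolower(tools::file_ext(files))      # extension of every file
ext_counts <- sort(table(ext), decreasing = TRUE)

head(ext_counts, 10)                        # the ten most common file types
sum(ext == "sol")                           # number of Solidity smart contract files
```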
To better understand the process underlying the development of Agile ICOs, we verified, for each project, the use of specific development frameworks, and in particular the use of Truffle.
Truffle is a popular development framework for Ethereum that includes built-in smart contract compilation, linking, deployment, binary management, and automated testing; it is available at https://truffleframework.com. We found that Truffle is commonly used in Agile ICOs: nineteen out of 36 projects include the typical Truffle elements.
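A simple way to flag such projects, sketched below in R under the assumption that each project has been cloned locally (the path and the truffle-config.js alternative are assumptions, not stated in the paper), is to look for the typical Truffle files and directories:

```r
# Heuristic check for a Truffle-based project layout (illustrative sketch).
has_truffle_layout <- function(project_dir) {
  config  <- file.exists(file.path(project_dir, c("truffle.js", "truffle-config.js")))
  folders <- dir.exists(file.path(project_dir, c("build", "contracts", "migrations")))
  any(config) && all(folders)   # a config file plus the usual Truffle directories
}

has_truffle_layout("ico_projects/example-ico")   # hypothetical project path
```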
Smart Contract Code Metrics
In this section, we report the results of the analysis of the smart contracts used to implement the Agile ICOs. In particular, in order to characterize the content of these Solidity files (with extension .sol), we applied a selection of code metrics. We also compare our results with those provided by [45], related to more than 12,000 smart contracts published on the blockchain explorer Etherscan and deployed on the Ethereum blockchain until January 2018.

In our analysis, we examined 502 Solidity files found in the GitHub projects of Agile ICOs, which could be attributed directly to the ICO developers. We excluded from our analysis files copied (or forked) from other projects, including templates taken from development frameworks such as OpenZeppelin (https://openzeppelin.org/).

We added to the analysis the 12 smart contracts published only on Etherscan, as described before. In total, we examined 514 smart contract files. For each of them we applied the software metrics defined in Table 3, which include volume metrics and complexity metrics. In the following, we will use the term contract to refer to a specific type of object of the Solidity language [44]. The declaration of a contract is similar to the declaration of a class in object-oriented languages. In a contract definition, it is possible to declare functions and variables, which can be marked as private, public, or internal to the contract. A Solidity contract can also inherit from other contracts. We considered a "function" to be the declaration, through the keyword function, of an executable unit of code within a contract. We did not consider function modifiers as functions: in Solidity, a modifier is a short portion of code defined through the keyword modifier that can be called and incorporated into functions, and modifiers can be used to easily change the behavior of functions.
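As a rough, regex-based sketch of how such volume counts can be gathered from one Solidity file (the paper does not specify the tooling used for these counts; the path and patterns below are assumptions, and a real analysis would use a proper Solidity parser), one could write:

```r
# Rough volume metrics for a single Solidity source file (illustrative sketch only).
solidity_metrics <- function(path) {
  lines <- readLines(path, warn = FALSE)
  code  <- lines[nzchar(trimws(lines))]                       # drop blank lines

  loc      <- length(code)                                    # lines of code
  comments <- sum(grepl("^\\s*(//|/\\*|\\*)", code))          # comment lines
  ndc      <- sum(grepl("^\\s*contract\\s+\\w+", code))       # declared contracts
  ndf      <- sum(grepl("\\bfunction\\s+\\w+\\s*\\(", code))  # declared functions

  c(LOC = loc, CPL = comments / loc, NDC = ndc, NDF = ndf,
    FPC = if (ndc > 0) ndf / ndc else NA)
}

solidity_metrics("ico_projects/example-ico/contracts/Token.sol")  # hypothetical path
```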
Table 4 reports the summary statistics of the computed volume metrics. In the table we also provide, for each metric, the value of the first quartile, which separates the lowest 25% of values from the highest 75%, and of the third quartile, which separates the lowest 75% of values from the highest 25%. All the reported statistics give an overview of the distribution of the metric values; all statistical analyses were performed using R. The results show that smart contracts in Agile ICO projects have on average 65.59 lines of code; the maximum number of LOC is 808 and the minimum is 2.
For comparison, [45] reports that the mean number of LOC is 183.8; smart contracts present in Agile projects are therefore characterized by a lower number of LOC. The mean and median values of CPL allow us to state that the examined files are well commented: on average, about one line of comments is present every two lines of code.
The examined Solidity files declare, on average, only two contracts. This number can be considered low in relation to the value of 9.2 reported by [45]. The maximum number of contract declarations is 66 and the minimum is 1.
The mean number of functions (NDF) declared in a file is about 6.6, the maximum value is 71, and the minimum is zero. Also in this case, the mean is lower than the result of [45] (25.9 functions per file). The mean values of NDC and NDF can be considered related to the value of the LOC metric: Solidity files are shorter and consequently fewer functions and contracts are declared. Contracts having no function declarations can be used to define variables and data structures to be inherited by other contracts; in general, if there is no constructor, the contract assumes the default constructor.
The metric Functions Per Contract (FPC) is the equivalent of the number of methods per class in object-oriented languages. Considering each contract in the files, we found that, on average, each of them declares 4.3 functions; the maximum number of declared functions per contract is 28 and the minimum is zero. On average, the functions of a contract are 7.8 lines long (AFL); the maximum average length is 172.5, and the related distribution is characterized by a high standard deviation.
From these results, we can deduce that the smart contracts of Agile ICOs tend to be short programs, with a limited number of elements (contracts and functions) and with short functions. This favors easier reuse and maintenance of the code, according to [46].
In general, the development of the ICO token passes through the implementation of a standard interface called ERC20 (specifications described in https://theethereum.wiki/w/index.php/ERC20_Token_Standard). Given the availability of already implemented ERC20 tokens, code reuse is commonly adopted during the creation of new tokens. The high values of the standard deviation show that the smart contracts are very different from each other; these results are typical of long-tail distributions, whose tails collect the highest values.
For each Solidity file, we computed the McCabe cyclomatic complexity [47] of all the functions implemented in it. The cyclomatic complexity measures the number of linearly independent paths through a function. We computed the cyclomatic metrics using the commercial tool Understand, by SciTools (the cyclomatic metrics are described at https://scitools.com/support/cyclomatic-complexity/). Table 5 summarizes the results for the average, the maximum, and the sum of the cyclomatic complexity of all the functions defined in each Solidity file belonging to Agile ICO projects. The minimum values of these metrics are equal to zero, due to the presence of contracts that do not implement any function. We found that the average cyclomatic complexity (ACyclo) has a value equal to 1.2, and the maximum value of the average cyclomatic complexity is 7.

The maximum cyclomatic complexity (MaxCyclo) is, for each contract, the maximum value of the McCabe cyclomatic complexity among the functions of the contract. Its mean value is 1.83, and its highest value is equal to 17.

Contracts are characterized by a limited sum of the cyclomatic complexities (SumCyclo) computed over the functions in their Solidity files. The mean value of the sum is 7.97, lower than the value reported by [45], due to the fact that the contracts of Agile ICO projects are shorter in terms of LOC. The values of this metric have a standard deviation equal to 12.27 and a maximum value equal to 134; these values are typical of long-tail distributions.
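The paper relies on SciTools Understand for these values; purely as an illustration of what the metric counts, the R sketch below approximates the complexity of one function body as one plus the number of matched branching constructs (a crude heuristic, not the tool actually used, and the sample Solidity snippet is hypothetical):

```r
# Toy approximation of McCabe cyclomatic complexity for one function body.
approx_cyclomatic <- function(fun_body_lines) {
  branches <- c("\\bif\\b", "\\bfor\\b", "\\bwhile\\b", "\\bdo\\b",
                "&&", "\\|\\|", "\\?")                 # branching constructs
  hits <- sapply(branches, function(p) sum(grepl(p, fun_body_lines)))
  1 + sum(hits)   # crude proxy: one linear path plus one per matched construct
}

fun_body <- c("function transfer(address to, uint v) public returns (bool) {",
              "  if (balance[msg.sender] < v) { return false; }",
              "  balance[msg.sender] -= v; balance[to] += v;",
              "  return true; }")
approx_cyclomatic(fun_body)   # 2
```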
Testing
In the Ethereum platform, each smart contract deployed on the blockchain stores both the data related to the transactions and the code that implements the logic allowing the sending of transactions between two or more actors. Therefore, the data and the logic that compose a smart contract are stored irreversibly: given the principle of the immutability of the blockchain, once a smart contract is deployed its code cannot be changed.
If a developer finds a bug or wants to correct an error, s/he has to develop a new smart contract, deploy it on the blockchain, and transfer all the existing data to the new contract. The deployment of a smart contract involves an Ethereum transaction that requires the payment of a fee, which depends on the size of the smart contract. For this reason, the test phase before deployment is very important, and it should be managed appropriately, also through the adoption of best practices and specific tools for continuous testing, typical of Agile methodologies.
We therefore analyzed the use of development tools and practices for smart contracts. In particular, we looked for the use of the Truffle suite for testing practices. We found that Truffle is commonly used in Agile ICOs: nineteen out of 36 projects include the typical Truffle elements (the file truffle.js and the directories build, contracts, and migrations). Using Truffle, developers can take advantage of the several development tools included in this suite; one of the most relevant is the possibility to create a test suite, with tests written both in Solidity and in JavaScript. We found that 16 of the 19 Agile ICO projects that use Truffle include a test suite. In addition, we found other kinds of testing code developed to test other components of the project. The use of Truffle, which provides an automated testing framework to test smart contracts before deployment on the blockchain, is fully consistent with the application of Agile methodologies.
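Detecting such a test suite in a cloned project can be sketched with a simple directory check; the R lines below are illustrative and assume the usual Truffle layout with a top-level test directory (the project path is hypothetical):

```r
# Check whether a Truffle project ships a test suite (illustrative sketch).
has_test_suite <- function(project_dir) {
  test_dir <- file.path(project_dir, "test")
  if (!dir.exists(test_dir)) return(FALSE)
  tests <- list.files(test_dir, pattern = "\\.(js|sol)$", recursive = TRUE)
  length(tests) > 0   # Truffle tests can be written in JavaScript or Solidity
}

has_test_suite("ico_projects/example-ico")   # hypothetical project path
```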
Discussion
The results of this study provide an overview of the world of ICOs, which can be described as a new blockchain-based fundraising mechanism in which a startup sells its tokens in exchange for Bitcoins or Ethers. ICOs therefore offer important new possibilities related to the intrinsic properties of blockchain technology. In order to make the most of this potential, in such a recent and innovative context, it is necessary to employ appropriate software development methodologies. We believe that Agile methodologies, which are suitable for the management of innovative startups because they make it easier to cope with change, can also be useful when applied to ICOs. Agile methods are suited to developing systems whose requirements are not completely understood or tend to change. These characteristics are present in ICOs because they are typically very innovative applications, and there is often a race to launch an ICO first on the market. An Agile approach is based on iterative and incremental development with short iterations, and is suited to delivering quickly and often. In addition, Agile methodologies are suited for small, self-organizing teams working together, as is the case for many ICO teams.
Table 6 summarizes the results of the comparison between the full set of ICOs and the Agile ICOs performed in our analysis. We found that in Agile ICOs the team is on average made up of 14.9 people, of which 23.04% on average are advisors. This means that the development team is on average made up of about 10 people; among them we include 6 or 7 developers, but also other professionals, such as testers or UI designers. In the SCRUM methodology, for example, a development team consists of 3-9 people; according to [48], for larger projects Scrum provides a mechanism called Scrum of Scrums that assigns the large project to several teams. ICOs are developed using technologies like the ERC20 Token Standard, a particular smart contract interface written in the new programming language named Solidity, which abstracts many development processes useful for setting up a new cryptographic asset. This abstraction can be related to the Agile practice named Coding Standards, as described by [11]. Moreover, the practice of Collective Code Ownership is also employed in ICOs, due to the transparency of smart contracts on the blockchain: most of the ICOs' smart contracts are public and readable by everyone. On the other hand, we need to take into account some important factors inherent in decentralized technologies that need to be carefully controlled when applying Agile methodologies. An ICO is based on the use of smart contracts that record data transactions and manage the specific functionalities of the project. The data stored on the blockchain are immutable, so, in order to modify a smart contract, we have to deploy a new smart contract. Agile development, on the contrary, insists on continuous feedback from users to foster innovation, and it also strongly highlights the necessity to continually iterate on the product. This is the part of the process that can create issues for blockchain products. As previously mentioned, the data stored on the blockchain are immutable, and smart contracts, once deployed on the blockchain, cannot be updated except by uploading a new smart contract. Developers therefore must be sure that the smart contract will always work: it must be right the first time it is deployed, and this challenges present Agile wisdom. Before storing a smart contract on the blockchain, it is therefore important to perform a very thorough testing, auditing, and verification phase, also through the use of tools such as Truffle, a tool to build smart contracts that provides an automated test framework and allows writing tests in JavaScript and Solidity. The software development practice called Test Driven Development (TDD), described by [49], used in Agile methodologies and especially in Extreme Programming according to [50], which consists of writing the tests before the actual implementation of the functionalities, could therefore be of great help in developing smart contracts of the highest quality; such contracts can then be deployed on the blockchain with the expectation that the need to change them will not arise. Another aspect that must be taken into great consideration is the one related to project planning. The planning of an Agile project is based on the split of functionalities into user stories, which are fragments of functionality that give value to the user. The analysis of the requirements leads to the definition of a certain number of stories, each having an intrinsic value for the project. A user story must be measurable, and must be developed in a limited time. A roadmap developed with an Agile approach should therefore
pay particular attention to the feedback and wishes of all the involved stakeholders. The roadmap is therefore a useful tool to promote the transparency of the development, also in order to manage customer expectations. It must not be too detailed, and must allow the implementation of the project as a set of user stories or features. The implementation flow could take place in Sprints, and each user story should be provided with one or more Acceptance Tests, according to [51].
Regarding the quality of the code, the results we obtained are not consistent with those found by [52], who compared software metrics in systems (written in Python and Java) developed using Agile methodologies with systems developed using plan-driven methodologies. They assert that the metric distributions generated by Agile methodologies are not related to better software quality; unlike us, for example, they found that the LOC distribution does not show major differences. In our analysis, smart contracts present in Agile projects are characterized by a lower number of LOC in comparison with ICOs that do not use Agile methodologies. In general, all smart contracts tend to be short, with a limited number of contracts and functions. This is also connected with the fact that, to deploy a file on the blockchain, it is necessary to pay a fee proportional to the size of the file. The simplicity of the code is perfectly consistent with one of the five fundamental values of XP programming, reported by [50]. Moreover, in order to improve our study in future work, we summarize below the main limitations of our study. The first issue is related to the novelty of our work: the type of analysis we performed is pioneering. We focused part of our study on the collection of Agile keywords inside the white papers, in order to recognize which kinds of ICO-funded projects used Agile practices. We therefore did not analyze ICOs whose white papers do not officially state any Agile keyword, even though their development could nevertheless be Agile based. We are indeed aware that not all ICOs that use Agile methods for developing their projects declare Agile keywords within their own documentation.
We collected 55 Agile ICOs and calculated the software metrics only on the smart contracts of these ICOs (for a total of 514 smart contracts). We made a comparison between this subset and the code metrics related to all smart contracts stored on the Ethereum blockchain in 2017 (around 12,000 smart contracts). Our subset may be small, but the results obtained can serve as a basis for a future generalization on a larger dataset.
Conclusions
The present work proposed an analysis of Initial Coin Offerings, a new phenomenon that has recently become a relevant topic of study within the blockchain community. In order to better understand this phenomenon, which operates in an uncertain and constantly changing context, we investigated all the engineering activities related to ICOs, from the planning phase to the testing phase. We analyzed the whole set of ICOs registered in ICObench until 20 February 2018 in order to discover the team composition, the communication channels with investors distributed around the world, and the financial aspects. We then studied the use of Agile practices in ICOs as a method to cope with change. We selected and analyzed in detail a subset of ICOs specifically developed with Agile methodologies, with respect to their roadmap, project development, and source code quality. Overall, about 5% of the examined ICOs apply Agile practices. In addition, we conducted an analysis of the smart contracts of Agile ICOs in terms of code metrics, language versions, and use of test tools. We found that Agile methodologies are suited to developing ICOs because these are highly innovative projects whose requirements are not completely understood or tend to change. The Agile approach is iterative and incremental, with short iterations, and is suited to delivering quickly and often; this property is very useful in the context of ICOs. On the other hand, the necessity to continually iterate on the product, typical of Agile methodologies, can create issues for ICOs: the immutability of the blockchain must be taken into consideration, since smart contracts cannot be updated once they have been deployed. To face this difficulty, Test Driven Development is very useful. The practice of Collective Code Ownership is also guaranteed in ICOs by the transparency of smart contracts on the blockchain, and another Agile practice applied in ICOs is the use of Coding Standards. Overly detailed roadmaps are instead typical of plan-driven methodologies. Finally, we can say that the smart contracts of Agile ICOs have good software metrics, because their files are very short and simple.
Future Work. In the next iteration of this empirical study we want to examine more aspects of ICOs, to recognize the use of Agile practices in a more comprehensive way, taking into consideration an analysis related to each single Agile method. In the future, it will be possible to use the results of this work as a snapshot, dated February 2018, of ICO characteristics, and to use it for a time-based comparison focusing mainly on the rate of adoption of Agile practices. In addition, future work should provide a correlation analysis between the usage of Agile practices and the success of ICOs, both in terms of capitalization and in terms of the development of their projects.
Figure 1. Flow diagram describing the selection process of the Agile ICOs.
Figure 2. Histogram of the number of ICOs per percentage of tokens to be sold during the ICO.
Figure 3. Histogram of the number of Agile ICOs per percentage of tokens to be sold during the ICO.
Figure 4. Histogram of the percentage of the hard cap per ICO to be reached to consider the ICO a success.
Figure 5. Histogram of the percentage of the hard cap per Agile ICO to be reached to consider the Agile ICO a success.
Figure 6. Histogram of the number of months of development described in the Agile ICO roadmaps.
Figure 7. Histogram of the number of months passed after the end of the ICO.
Table 1. Summary of statistics on ICO teams. The average percentages of advisors and females per team are computed on teams with at least one member.
ICO Stats | Avg. Size | Max Size | % Advisors | % Women
Table 2. The ten most common file extensions in Agile ICO projects.
Table 3. Definition of the computed source code metrics.
Table 4. Volume metrics of smart contracts belonging to Agile ICO projects.
Table 5. Cyclomatic metrics computed on the Solidity files belonging to Agile ICO projects.
Table 6. Comparison between ICOs and Agile ICOs. | 12,965.4 | 2018-10-23T00:00:00.000 | ["Computer Science", "Engineering"] |
Availability of E-commerce Support for SMEs in Developing Countries
Mahesha Kapurubandara, Robyn Lawson
Locked Bag 1797, Penrith South DC, NSW 1797, Australia; mahesha<EMAIL_ADDRESS>
Although research indicates that e-commerce offers viable and practical solutions for organizations to meet the challenges of a rapidly changing environment, the few available studies related to SMEs in developing countries reveal a delay or failure of SMEs in adopting ICT and e-commerce technologies. The various factors identified as causes of this reticence can be broadly classified as Internal Barriers and External Barriers. This paper presents a model of barriers to the adoption of ICT and e-commerce based on the results of an exploratory pilot study, a survey, and interviews with SME intermediary organizations. It identifies support for SMEs in Sri Lanka with regard to ICT and e-commerce, determines a strong need for such support, and discusses its availability.
Introduction
Developing countries have the potential to achieve rapid and sustainable economic and social development by building an economy based on an ICT-enabled and networked SME (Small and Medium-sized Enterprise) sector, capable of applying affordable yet effective ICT solutions [1]. It is accepted that e-commerce contributes to the advancement of SME business in developing countries [2]. With the development of ICT and the shift to a knowledge-based economy, e-transformation and the introduction of ICT are becoming increasingly important tools for SMEs, both to reinvigorate corporate management and to promote growth of the national economy [1]. E-commerce technologies enable organizations to improve their business processes and communications, both within the organization and with external trading partners [3].
However, the adoption of ICT and e-commerce in developing countries has fallen below expectations [2], as these countries face unique and significant challenges in adopting ICT and e-commerce [4]. Nevertheless, it is imperative for SMEs to adopt e-commerce technologies to survive in intensely competitive national and global markets.
The SME sector plays a significant role in its contribution to the national economy in terms of the wealth created and the number of people employed [5]. Forging ahead, SMEs need to accept the challenges and overcome the barriers as they move towards successful adoption of the available technologies, while raising awareness of relevant support activities and preserving their limited resources to avoid severe repercussions from costly mistakes.
This paper contributes to the understanding of the factors that inhibit ICT and e-commerce adoption in SMEs in Sri Lanka, a developing country on its way to an e-society. Believing that research findings from Sri Lanka will prove useful for other developing countries, it explores how best the barriers could be overcome by way of support activities. The paper first outlines current research into adoption in developing countries, discussing adoption models from previous research, and presents the framework established for this research. The research methodology and the results are subsequently discussed.
SMEs in Sri Lanka
SMEs everywhere play a critical role in economic development, and Sri Lanka is no exception. Many countries use different parameters to define SMEs, referring to the number of employees, the amount of capital invested, or the amount of turnover [6]. In Sri Lanka a clear definition of an SME is absent, with government agencies using different criteria to define SMEs [6,7]. The National Development Bank (NDB), the Export Development Board (EDB), and the Industrial Development Board (IDB) use the value of fixed assets as the criterion, whereas the Department of Census and Statistics (DCS), Small and Medium Enterprise Development (SMED), and the Federation of Chambers of Commerce and Industry (FDCCI) use the number of employees [7].
Following the World Bank definition, for this study we consider enterprises with 10-250 employees as SMEs [6]. The 2004 mission statement of the International Labour Organization (ILO) reported that 75% of Sri Lanka's labour force was employed in the SME sector, highlighting SMEs' contribution to employment and income generation.
The domestic market is the main outlet for SMEs. SMEs are also sub-contracted to large exporters, with larger entrepreneurs coordinating direct exports, as is seen with coir-based products, wood, handicrafts, plants, and foliage. If Sri Lanka wishes to ride high on the electronic highway, it should provide Sri Lankan SMEs 'a ramp to the digital highway' and stimulate e-commerce. This is supported by the government's e-Sri Lanka vision, championed by the Information and Communication Technology Agency of Sri Lanka (ICTA), which aims to harness ICT as a lever for economic and social advancement.
Barriers to ICT and e-commerce Adoption by SMEs
Developing countries face insurmountable barriers in getting onto the electronic highway. Yet it is encouraging to note that existing research identifies barriers across a variety of factors grouped into several categories. A number of authors [8,9] group such factors into three major categories: owner/manager characteristics, firm characteristics, and costs and return on investment. Support for SMEs to adopt e-commerce technologies demands consideration of each of these categories.
The diversity among owner/managers, the decision makers for SMEs, is reflected in a number of factors affecting the adoption of e-commerce technologies; it can be concluded that many factors affecting adoption relate to owner/manager characteristics. A significant factor here is little or no knowledge, firstly of the technologies and secondly of the benefits of such technologies; this is a major barrier to the take-up of e-commerce [10]. Lack of knowledge of how to use the technology, low computer literacy, mistrust of the IT industry, and lack of time also hinder adoption. SME owners, concerned about a return on their investments, are reluctant to make substantial investments, particularly since short-term returns are not guaranteed [11].
Other factors, such as the current level of technology usage within the organization, related to the characteristics of the organization, also affect the adoption of e-commerce [10]. The Organization for Economic Co-operation and Development (OECD, 1998) has identified lack of awareness, uncertainty about the benefits of electronic commerce, concerns about the lack of human resources and skills, set-up costs and pricing issues, and concerns about security as the most significant barriers to e-commerce for SMEs in OECD countries. Low use of e-commerce by customers and suppliers, concerns about security, concerns about legal and liability aspects, high costs of development, limited knowledge of e-commerce models and methodologies, and unconvincing benefits to the company are among the other factors [12]. SMEs definitely have limited resources (financial, time, personnel). This "resource poverty" has an effect on adoption, as they cannot afford to experiment with technologies and make expensive mistakes [13].
Barriers to e-commerce in Developing Countries
If governments believe that e-commerce can foster economic development, it is necessary to identify the inherent differences of developing countries, with their diverse economic, political, and cultural backgrounds, in order to understand the process of technology adoption [9]. SME studies of e-commerce issues in developed countries [14-16] indicate that the issues faced by SMEs in developing countries can be totally different. Organizations adopting ICT and e-commerce in developing countries face problems such as: lack of telecommunications infrastructure, lack of qualified staff to develop and support e-commerce sites, lack of the skills consumers need in order to use the Internet, lack of timely and reliable systems for the delivery of physical goods, low bank account and credit card penetration, low income, and low computer and Internet penetration [4,17,18]. Lack of telecommunications infrastructure includes poor Internet connectivity, lack of fixed telephone lines for end-user dial-up access, and the underdeveloped state of Internet Service Providers.
Disregard for e-commerce is not surprising where shopping is a social activity, as in Sri Lanka, and face-to-face contact is recognized as important. Distrust of what businesses do with personal and credit card information, in countries where there may be good justification for such distrust, could become a serious obstacle to e-commerce growth [18,19].
The absence of legal and regulatory systems inhibits the development of e-commerce in developing countries. A study of SME adoption of e-commerce in South Africa found that adoption is heavily influenced by factors within the organization [12]: lack of access to computers, software and hardware, and affordable telecommunications; low e-commerce use by supply chain partners; concerns with security and legal issues; the low knowledge level of management and employees; and unclear benefits from e-commerce were found to be the major inhibiting factors. A similar study in China found that the limited diffusion of computers, the high cost of Internet access, and the lack of online payment processes directly inhibit e-commerce, while inadequate transportation and delivery networks, the limited availability of banking services, and uncertain taxation rules inhibit it indirectly.
A study in Egypt [20] found that the main factors contributing to non-adoption include: awareness and education, market size, e-commerce infrastructure, telecommunications infrastructure, financial infrastructure, the legal system, the government's role, pricing structures, and social and psychological factors. A comparison of two studies in Argentina and Egypt suggests that the key factors affecting e-commerce adoption in developing countries are awareness, telecommunication infrastructure, and cost. The Internet and e-commerce issues of SMEs in Samoa are consistent with the studies conducted in other developing countries [21]. Studies in Sri Lanka revealed the inhibiting factors to be: lack of knowledge and awareness of the benefits of e-commerce, the current unpreparedness of SMEs to adopt e-commerce as a serious business concept, insufficient exposure to IT products and services, language barriers, and lack of staff with IT capability [7]. Web-based selling was not seen as practical, as there is limited use of Internet banking and Web portals, as well as inadequate telecommunications infrastructure [7]. Thus, the available literature reveals significant factors, dealing with internal and external barriers, that can be grouped to develop a framework for investigating the adoption of e-commerce technologies.
Internal Barriers: factors that SMEs can control, categorized into individual (owner/manager) barriers, organizational barriers, and barriers related to cost or return on investment.
External Barriers: those that cannot be resolved by the SME organization, which is compelled to work within the constraints. Inadequate telecommunication infrastructure and the legal and regulatory framework are examples of external barriers. These can be further subdivided into infrastructure-related, political, social and cultural, and legal barriers. Some external barriers could be addressed by clustering and sharing expenses, resources, and facilities. The study was conducted in stages: preliminary pilot interviews, a survey, and interviews with SME intermediary support organizations. According to Mingers [22], the use of such multiple methods is widely accepted as providing increased richness and validity to research results, and better reflects the multidimensional nature of complex real-world problems. Besides, a multimethod approach allows for the combination of the benefits of both qualitative and quantitative methods, and permits empirical observations to guide and improve the survey stage of the research [23,24].
The preliminary pilot interviews brought out the barriers most pressing for SMEs; the model (Figure 1) and the survey instrument were formed from the outcomes of these interviews and observations, supported by an extensive literature review. The survey and the interviews with intermediary support organizations followed. The face-to-face interviews were semi-structured to gather qualitative empirical data and to provide flexibility [25], as they allow researchers to explore issues raised by respondents, which is generally not possible through questionnaires or telephone interviews.
The research was carried out in three stages. The research approach has been discussed elsewhere [26], and therefore it suffices to discuss the results in the following section.
Results and Discussion
Barriers (Pilot Interviews): A majority (88%) of respondents ranked lack of awareness as the most significant barrier. This can be attributed to the fact that the majority of owner/managers described themselves as only basically computer literate. Knowledge of available technologies, or of their suitability for effective use towards improved productivity, was negligible, and they appeared confused by the choice of software and hardware. Computers were underutilized, with ad hoc purchases and isolated implementations overshadowing any ICT strategy; this is a major concern, since owner/managers are the decision makers. Next was the cost of Internet access, equipment, and e-commerce implementation. Inadequate telecom infrastructure, chosen by 83%, was the third most frequently cited barrier; it was chosen mostly by respondents more advanced in the usage of ICT (using e-mail and the Internet), who were more likely to have experienced problems. An unstable economy, political uncertainty, lack of time, channel conflict, lack of information about e-commerce, and lack of access to expert help were cited as barriers by 70% of respondents.
Analysis of Survey Data:
More than 75% of the respondents (96% males and 4% females) were either professionally qualified or graduates. Of the tables produced below, Table 1 identifies the top 6 internal barriers of the 9 listed. Table 2 shows the external barriers, divided into cultural, infrastructure, political, social, and legal and regulatory barriers. Tables 4 and 5 illustrate the internal and external support needed. Analysis of the survey results reveals that lack of skills, lack of awareness of benefits, and return on investment prevent SMEs from adopting ICT and e-commerce technologies; this is reinforced by "awareness and education" being ranked top for support by nearly 90% of the respondents, which is not surprising for a developing country like Sri Lanka trying to implement these technologies. It reflects on the other internal barriers too, and awareness and education can, to a great extent, counter this barrier. Since the use of ICT in Sri Lanka is low, "lack of popularity in online marketing" and "low Internet penetration" rate high in the list of external barriers; improving ICT diffusion in Sri Lanka can address this problem. "Inadequate infrastructure" impedes SMEs, as reinforced by their request for "improvement of national infrastructure" ranking very high among the support needed. SMEs in Sri Lanka are adversely affected by the high cost and unreliable service of infrastructure services such as electricity and telecommunications; the steps taken by the government to improve telecommunication facilities by breaking the telecom monopoly are noteworthy. Policy inertia and the lack of a legal and regulatory framework also rank high and enforce constraints on SMEs. Policy reforms introduced by governments support the large export-oriented foreign direct investments, leaving SMEs with ad-hoc policy prescriptions and weak institutional support [27]. The government's role in an overly bureaucratic regulatory system results in delays in its deliberations and is extremely costly [27]. An appropriate legal and regulatory framework would ensure that SMEs operate on a level playing field.
Social barriers come next. A one-stop-shop facility helps SMEs access information, technology, markets, and the much needed credit facilities; this concept, implemented for export-oriented foreign direct investments (EOFDI) by the Board of Investments (BOI), has proved successful. Senior management lacking ICT knowledge, despite being the policy makers working towards the progress of SMEs, is identified as an important constraint directly impacting the operational efficiency of SMEs; awareness building and education with regard to ICT and related technologies would help to alleviate this problem. Government, academia, and industry can take leadership roles in the promotion of ICT by conducting awareness and training programs, both technical and non-technical, catering to the needs of SMEs at the grass-roots level. SMEs place a very heavy reliance on external advice and support, yet such support and advice seem unavailable.
Perceptions of the SME Intermediaries:
The intermediaries, with a consensus on awareness-building programs at the national level, agree that lack of awareness and lack of skills are major barriers for SMEs in adopting these technologies. Training programs, workshops, and seminars conducted in the local language need to be especially designed for SMEs at the grass-roots level.

The absence of a "one-stop shop" for advice and support is de-motivating and affects SMEs. It is fundamental to educate the senior management of government organizations prior to providing support for SMEs with ICT and e-commerce. SMEs need not only ICT technologies but also quality control and standards.

Inter-institutional coordination, staff development, and institutional capacity are also vital. Much effort seems replicated and wasted, with public sector, private, and non-governmental SME intermediary organizations working in isolation. The government is best equipped to reach rural SMEs at the grass-roots level; tapping and utilizing all available strengths in a more coordinated manner would prove much more productive.
Analysis of Barriers and Support
This section discusses the extent to which the barriers are addressed by the support provided by the SME intermediary organizations. Tables 5 and 6 below illustrate the barriers ranked as most significant, Table 7 identifies the support required to alleviate the barriers, and Table 8 indicates whether support is available from the SME intermediary organizations.
Barriers
The internal barrier "employees lacking the required skills" is ranked highest in the list. The interviews with the intermediary organizations reveal that this barrier is addressed only partially: while they admit that SMEs need skills training from the grass-roots level, they are not in a position to deliver that support as they have neither the resources nor the mechanism to address the barrier. "Security concerns with payments" ranked next on the list; support is not available from the intermediaries, and they are not in a position to provide any support in this regard. The next two barriers on the list can be addressed with awareness-building programs, but the intermediaries do not seem to be addressing them fully.
Table 6 above illustrates that, in order to resolve the external barriers, support in infrastructure, awareness building, education and training, and consultancy is required. The SME intermediaries are helpless in providing support for infrastructure and the legal framework, as government intervention is required. The lack of availability of information and the lack of popularity of e-commerce can both be addressed by appropriate awareness-building programs. Even though such support is available to a certain extent, the SME intermediaries are not capable of providing full support to address this barrier, due to the lack of a proper mechanism to reach SMEs at the grass-roots level. Even though the intermediaries are making an effort to generate awareness, they also seem to be hindered by limited finances and resources and by the lack of properly formulated strategies and coordinated programs.
Support
The survey results revealed the support strongly requested by the SMEs. Tables 8 and 9 below illustrate the support ranked as most significant, within the organization (internal support) and outside the organization (external support); they identify the support required to assist SMEs and also whether that support is available from the SME intermediary organizations. Guidance to overcome the risks of implementing e-commerce ranked highest in the list of support requested by SMEs: they need support in every aspect of the implementation of e-commerce, starting with knowledge, technical, management, and consultancy support. This support is not available from the SME intermediary organizations; assistance with hardware and software, as well as advice and direction, is minimal and almost non-existent. This could also be attributed to the fact that the main focus of the intermediaries is first and foremost to elevate the standard of SMEs in general, so the adoption of e-commerce has taken a back seat in view of the other pressing problems. The second table shows the support required from outside the organization (external support). Other than the item listed last, "improve collaboration among SMEs", the intermediaries are not capable of providing support for the other items; they need to liaise with the government to provide such help to the SME organizations.
The evidence from the above tables is informative. A few barriers, namely awareness creation and training, appear to be addressed to a certain extent, while the majority of the barriers are either disregarded or totally neglected by the participant SME intermediary organizations. It is also evident that, where such support is available, it is restricted to urban areas. Further, there appears to be a disparity between the SMEs' requirements and the support available from the SME intermediaries. The SME intermediaries, on the other hand, seem to be having trouble meeting their own objectives. Apparently, this hinders the efforts of the SME intermediaries to assist SMEs. Perhaps this drawback can be attributed to uncoordinated efforts that lack a proper strategy, frequent changes of government resulting in changes of rules and regulations, lack of interest from the authorities concerned, or even lethargy in both the public and the private sectors towards heavy investment in what is often seen as an unstable economic environment.
Challenges Facing SMEs
The objectives of this study were to understand and determine the importance of internal and external barriers, and the support required to overcome them. The ranking of barriers shows that SMEs are severely hindered by external barriers. The internal and external support requirements reveal a strong demand for such support.
The difference between adoption patterns in developing and developed countries centres on the support activities needed during development. Support is available in developed countries, where it is a matter of finding the appropriate support for an SME encountering barriers, whereas in developing countries this support is almost non-existent. Another difference concerns the external barriers identified, such as the need to improve the national telecommunications infrastructure.
This research contributes by identifying the absence of a coordinated government and industry approach to providing support for SMEs, and the failure to address problems at the grass-roots level. In addition, an initial framework for eTransformation of SMEs in a developing country is proposed for trial towards validation.
Next on the Agenda
Next, further statistical analysis of the survey data will attempt to validate the initial outcomes, test construct validity, and check assumptions. The barriers and support that predominate at various levels of sophistication need to be determined to provide unique perspectives for examining and understanding the issues. Problems need to be prioritised at different levels to enable SMEs to better equip themselves to progress through e-Transformation. Finally, the initial framework will be trialled with case-study organizations.
Conclusion
This study provides an understanding of the challenges faced by SMEs in the adoption of ICT and e-commerce in developing countries. Assessing and determining the current levels of ICT and e-commerce sophistication of SMEs, it examines the barriers impeding SMEs while identifying the support required for eTransformation. The conceptual model developed identifies internal and external limitations while assessing the support necessary to overcome these obstacles.
The results of the exploratory interviews and the survey clearly indicate the necessity of providing support to SMEs if they are to successfully adopt ICT and e-commerce. The identified barriers, both external and internal, are found to impede SME uptake of ICT and e-commerce. Accordingly, the support necessary to overcome or alleviate the discovered barriers also had to be recognised. This support, in the form of suggestions, was later confirmed in a series of interviews carried out with SME intermediaries, whose task is to provide some support and who agree that many internal and external barriers prohibit the uptake of ICT and e-commerce by SMEs. The intermediaries go further with their observations: they believe and confirm that SMEs lack the strength or capacity to address these barriers on their own.
The little support extended by intermediary organizations at present seems to be inadequate. Besides, the available support programmes are incapable of meeting SME requirements. It was not surprising, therefore, to note that some SMEs were
even unaware of the existence of intermediaries, let alone the support programmes offered by them. Apparently, only a few SMEs have opted to receive assistance from the intermediaries. The identification of this lack of support, an important outcome of the SME intermediary interviews, is a possible factor contributing to the slow uptake of ICT and e-commerce, as the intermediaries themselves explained. Their projects do not seem to be sufficiently geared towards the needs of the SMEs. Moreover, it is apparent that the activities of such bodies are seen as uncoordinated and bureaucratic. This finding is in agreement with previous research [28].
In an age where information and technology combine to produce new and emerging technologies that are speedily snapped up by the developed world for its betterment, it is sad to see the developing world trailing behind for want of the necessary financial and other support. In such a scenario, it is vital that both industry and government step in with the correct advice and support to help SMEs with their uptake of e-commerce. One of the major outcomes of the study presented here is the necessity to review current initiatives aimed at promoting ICT and e-commerce among SMEs and to develop systematically focused strategies to help SMEs e-Transform their organisations. This information can be fed up to the relevant government authorities to assist them with strategy formation.
Figure 1: Conceptual Model - Barriers to Adoption
Research Methodology: Stage 1 was the pilot exploratory study with SMEs; Stage 2 was a survey of SME organisations using a questionnaire; and Stage 3 comprised interviews with intermediary SME organisations.
Table 1: Internal Barriers to using or extending use of ICT & e-commerce
Table 2: External Barriers to using or extending use of ICT & e-commerce
Table 3: Internal Support for SMEs to use or extend use of ICT & e-commerce
Table 4: External Support for SMEs to use or extend use of ICT & e-commerce
Table 7: Internal Support
Table 8: External Support | 5,677 | 2009-03-26T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
Volatile organic compounds from Paenibacillus polymyxa KM2501-1 control Meloidogyne incognita by multiple strategies
Plant-parasitic nematodes (PPNs) cause serious crop losses worldwide. In this study, we investigated the nematicidal factors and the modes and mechanisms of action involved in nematode control by Paenibacillus polymyxa KM2501-1. Treatment of second-stage (J2) juveniles of the PPN Meloidogyne incognita with the biological control agent KM2501-1 resulted in a mortality of 87.66% in vitro and reduced symptoms on tomato by up to 82.61% under greenhouse conditions. We isolated 11 volatile organic compounds (VOCs) from strain KM2501-1, of which 8 had contact nematicidal activity, 6 had fumigant activity, and 5 acted as stable chemotactic agents to M. incognita. The VOCs provided a comprehensive strategy against PPNs that included “honey-trap”, fumigant, attractant and repellent modes. Furfural acetone and 2-decanol functioned as “honey-traps”, attracting M. incognita and then killing it by contact or fumigation. Two other VOCs, 2-nonanone and 2-decanone, as well as strain KM2501-1 itself, destroyed the integrity of the intestine and pharynx. Collectively, our results indicate that VOCs produced by P. polymyxa KM2501-1 act through diverse mechanisms to control M. incognita. Moreover, the novel “honey-trap” mode of VOC–nematode interaction revealed in this study extends our understanding of the strategies exploited by nematicidal biocontrol agents.
P. polymyxa is a plant growth-promoting rhizobacterium (PGPR) and produces many beneficial bioactive substances, including the antibiotics polymyxins and fusaricidins and antimicrobial proteins that display broad-spectrum antifungal and antibacterial activity [11][12][13], as well as phytohormones that can promote plant growth 14 . Some strains of P. polymyxa stimulate plant growth via nitrogen, phosphorus and potassium uptake in nutrient-deficient soils 15 . While P. polymyxa is seldom reported for its nematicidal activity, strain GBR-1 has been reported to suppress RKNs in pot experiments 7 , and two other species of Paenibacillus suppressed a disease complex caused by a root-knot nematode and a fusarium wilt fungus 16 . As a sporeformer, P. polymyxa is an ideal candidate for development as a nematicide because it can easily be formulated for agricultural use.
P. polymyxa produces antifungal and insecticidal volatile organic compounds (VOCs), as determined by solid-phase microextraction-gas chromatography-mass spectrometry (SPME-GC-MS) 17,18 . Such compounds can inhibit fungal mycelium growth and the germination of many fungal plant pathogens 19 , and play a vital role in recognition between entomopathogenic nematodes and their hosts 20 . VOCs are also effective against PPNs 18,21 and recently have been found to exhibit strong nematicidal and fumigant activity against M. incognita 21 . This fumigant effect is of great importance because it helps to reach target nematodes that reside far from nematicides in the soil 22 .
That PPNs locate their plant hosts by chemoreception was proposed over 90 years ago 23 , and has since been demonstrated in vitro by chemotaxis assays on agar-filled dishes [24][25][26][27] . It has been suggested that the use of compounds such as neuropeptides to control nematode behavior might be an efficient way to reduce the infection levels of PPNs 28 . We hypothesize that a similar approach, based on VOCs to which PPNs show a stable chemotactic response, might also serve as a novel nematicidal mechanism.
We previously isolated P. polymyxa KM2501-1, a strain with high nematicidal activity against M. incognita. In this study, we evaluated the nematicidal activity of P. polymyxa KM2501-1 against M. incognita in vitro and in the greenhouse. VOCs produced by the strain were isolated and identified, and their mechanisms of action were explored. The results of these studies indicate that KM2501-1 produces multiple nematicidal VOCs with diverse modes of action, including VOCs that exhibit a novel "honey-trap" mechanism involved in luring and then killing M. incognita.
Results
Nematicidal activity of KM2501-1 in vitro and in the greenhouse. Culture filtrates (CF) of KM2501-1 diluted 1:3 (1/4×), 1:1 (1/2×), or not diluted (1×) were highly toxic to J2 juveniles of M. incognita, causing mortality rates of 80.00% to 87.66% at 72 h, whereas the mortality in the control group (CK) was only 5.12% (Fig. 1A and Table S1). When CF of KM2501-1 was tested against M. incognita, clear dose-response relationships and significant mortality of J2 juveniles were evident after 24, 48, or 72 h of exposure. These results indicate that P. polymyxa KM2501-1 has strong nematicidal activity against M. incognita, and are consistent with previous reports that some strains of P. polymyxa are effective against PPNs 7,16,29 . P. polymyxa KM2501-1 also produced nematicidal VOCs that caused 92.30% mortality of M. incognita, whereas the mortality of the control group (CK) was less than 2% in the three-compartment Petri dish assay (Fig. 1B). These results indicate that P. polymyxa strain KM2501-1 can produce extracellular nematicidal VOCs.
In the greenhouse, M. incognita infected and formed numerous large root galls on the control roots of tomatoes, while fewer and smaller galls were observed on roots after treatment with culture filtrate (CF) or bacterial suspension (BS) of P. polymyxa KM2501-1 (Fig. S1). The root gall indices of the control group (CK), the undiluted CF-treated plants, and the undiluted BS-treated plants were 4.80, 0.60, and 2.40, respectively, in the first set of pot experiments.
Figure 1: The nematicidal activity of VOCs of P. polymyxa KM2501-1 and the control group (CK) in a three-compartment Petri dish (panel B). The data are shown as the mean ± SD (n ≥ 3). Statistical comparisons between the values of samples and control (CK) were performed using a t-test; significant differences were determined according to a threshold of *P < 0.05, **P < 0.01, and ***P < 0.001.
VOCs from P. polymyxa KM2501-1 have contact nematicidal activity. To confirm the hypothesis that the nematicidal VOCs are the primary nematicidal factor of KM2501-1, SPME-GC-MS was conducted to identify the VOCs produced by P. polymyxa KM2501-1. Apart from the 3 peaks produced by the KMB medium (Fig. 2A), 11 peaks were present in the chromatograms of the fermentation broth of KM2501-1 (Fig. 2B). Most of these were identified as alkanols, alkanones and acids. Of these, the ten that were commercially available were purchased for bioassays and are listed in Table 2. J2 juveniles immersed in treatment wells with VOCs at various concentrations were then used to test contact nematicidal activity against M. incognita. Furfural acetone, 2-undecanol, 4-acetylbenzoic acid, and 2-decanol were the most active, with LC 50/2d (50% lethal concentration at 2 days) values of 4.44, 5.05, 16.24, and 23.12 mg/L, respectively, followed by 2-nonanol, 2-undecanone, 2-decanone, and 2-nonanone, with LC 50/2d values of 75.49, 87.41, 126.00, and 340.84 mg/L, respectively (Fig. 3). The mortality rates of acetone and 2-heptanone against M. incognita were below 10% even at a concentration of 1,000 mg/L (data not shown). Furfural acetone was also the most active in the contact assay against M. incognita immersed in treatment wells. This is the first report that these 6 VOCs have fumigant activity against M. incognita.
Chemotaxis of M. incognita towards VOCs. Using a population chemotaxis assay, we screened J2 juveniles of M. incognita for responses to the 10 VOCs listed in Table 2 at concentrations ranging from 1 to 10,000 mg/L. If the chemotaxis indexes (C.I.) of a VOC at all 5 concentrations were not significantly different from the C.I. of the control group (0 mg/L), the effect of that VOC on the chemotaxis of J2 juveniles was considered variable. The results (Table 3) showed that acetone, 2-decanol, and furfural acetone acted as attractants to M. incognita, whereas 2-undecanone acted as a repellent. 4-Acetylbenzoic acid acted as an attractant at low concentration and as a repellent at high concentration (ALRH) towards M. incognita. There were significant differences among the C.I. values of some concentrations of acetone, 2-decanol, 4-acetylbenzoic acid, 2-undecanone, and furfural acetone (Table 3). In contrast, the C.I. values of 2-heptanone, 2-nonanol, 2-nonanone, 2-decanone and 2-undecanol at all 5 concentrations were not significantly different from the C.I. of the control group, so the chemotaxis of M. incognita towards 2-heptanone, 2-nonanol, 2-nonanone, 2-decanone and 2-undecanol was considered variable and these results are not shown. Table 3 shows data for concentrations up to only 1,000 mg/L for 2-decanol and 2-undecanone, whereas the other VOCs were tested at up to 10,000 mg/L. This is because, at a concentration of 10,000 mg/L, M. incognita was paralyzed in the buffer area of the 2-undecanone plates (Fig. S2), whereas it was paralyzed in the test area and alive in the control area of the 2-decanol plates (Fig. S3). The C.I. values of these 2 groups at a concentration of 10,000 mg/L are therefore not shown. However, the observation that 2-decanol and 2-undecanone killed M. incognita at 10,000 mg/L is also consistent with their fumigant activity.
Comprehensive strategy of VOCs against M. incognita. The VOCs produced by KM2501-1 have a comprehensive array of activities, including contact nematicidal activity, fumigant activity, and activity affecting the chemotaxis of the nematodes (Fig. 5A). As shown in Table 4, VOCs like furfural acetone and 2-decanol have both contact nematicidal activity and fumigant activity against M. incognita, and also function as stable attractants. These VOCs have a novel "honey-trap" mode of action (Fig. 5B), in that they can attract M. incognita and then kill it by contact or fumigation. In contrast, VOCs like 2-undecanone had nematicidal and fumigant activity against M. incognita but acted as stable repellents, and could be applied to seeds or the roots of vegetables to initially repel, and subsequently kill, invading nematodes. VOCs like 2-undecanol, possessing both contact nematicidal and fumigant activity against M. incognita, act in a fumigant mode and could be more efficient when applied for nematode suppression because they can reach target nematodes that reside far from the fumigant in the soil. VOCs like acetone, which do not themselves have nematicidal or fumigant activity but function as attractants, could be applied in combination with chemicals like abamectin to improve their efficiency.
Nematicidal mechanism of VOCs and culture filtrate of KM2501-1 against M. incognita. VOCs kill nematodes by mechanisms that can affect the nervous system 28 , surface coat, intestine 8 , pharynx, or other tissues. Vacuoles were observed in the intestine of M. incognita after exposure to some VOCs (Fig. S4) and were investigated further because of their resemblance to the effects of VOCs on other nematode tissues. We first assessed the pathological characteristics of M. incognita exposed to 2-nonanone and 2-decanone. J2 juveniles of M. incognita (50-60 per well) were exposed to 250 mg/L 2-nonanone or 100 mg/L 2-decanone for 48 h and then compared morphologically to control nematodes by optical microscopy (Fig. 6). We excluded dead nematodes from these observations because we wanted to avoid morphological changes that may have occurred as part of the death process. The results showed that the pharyngeal tissues of treated J2 juveniles had shrunken or even disappeared, and that the intestinal tissues became indistinct after treatment with 2-nonanone or 2-decanone. No disruption was observed in the intestine or pharyngeal tissues of the control group exposed only to solvent (distilled water). These results demonstrate clearly that 2-nonanone and 2-decanone disrupt the intestine and pharynx of M. incognita. Culture filtrate of KM2501-1 similarly disrupted the intestine and pharynx of M. incognita (Fig. 6), providing evidence that these VOCs are the primary nematicidal factor in strain KM2501-1.
Table 2: Commercial VOCs tested (compound, Chemical Abstracts Service number, manufacturer and country, purity).
Discussion
M. incognita is an important plant pathogen causing severe damage to crops worldwide. Chemical nematicides are the primary means of control for plant-parasitic nematodes, but their potential negative impacts on human health and the environment have led to a total ban or have greatly restricted the use of such compounds. Researchers have therefore searched for decades for antagonistic microorganisms as an alternative to nematicidal chemicals, and many such fungi and bacteria have been reported 30,31 . However, the development of biological nematicides has been constrained by difficulties in commercial production and formulation 32 . Some nematicidal fungi are difficult to produce or are inhibited by soil 33 , and obligate bacteria like Pasteuria penetrans cannot easily be cultured 34 . Therefore, environmentally friendly, effective and affordable alternatives for PPN control remain urgently needed. The results of the present study indicate that P. polymyxa strain KM2501-1 has strong nematicidal activity against M. incognita and great potential for further development because, as a sporeformer, it can easily be formulated for agricultural use.
Table 3: Meloidogyne incognita chemotactic response to volatile organic compounds (VOCs). ND, not detectable. The data are shown as the mean ± SD (n = 6). Duncan's multiple range test was employed to test for significant differences between treatments at P < 0.05; different lowercase letters indicate significant differences between treatments (P < 0.05).
In this study, 11 VOCs were isolated from strain KM2501-1, of which acetone and 2-heptanone had no contact nematicidal activity against M. incognita. These results are in line with an earlier report that 2-heptanone is inactive against M. incognita 35 , but differ from a previous study indicating that acetone is active against Panagrellus redivivus and Bursaphelenchus xylophilus 21 . When the nematicidal VOCs were tested individually over a range from 10 to 1,000 mg/L in the contact nematicidal experiment, clear dose-response relationships were observed against M. incognita. These results are the first to show contact nematicidal activity of furfural acetone, 2-undecanol, 2-decanol, and 4-acetylbenzoic acid against M. incognita. Interestingly, there was a significant difference in the contact nematicidal activities of 2-alkanone and 2-alkanol homologues of different carbon chain lengths (Fig. 3). The LC 50/2d values of 2-nonanone, 2-decanone, and 2-undecanone were 340.84, 126.00, and 87.41 mg/L, respectively, while the LC 50/2d values of 2-nonanol, 2-decanol, and 2-undecanol were 75.49, 23.12, and 5.05 mg/L, respectively.
Figure 7: Schematic of the chemotaxis assay plates. A thin layer of agar in a 9-cm Petri dish was used as a substrate for chemotaxis. Roughly 200 J2 juveniles of M. incognita were placed near the center of the plate, with a filter paper wetted with 30 μL of VOC at one side of the plate (test area) and a filter paper wetted with 30 μL of solvent at the opposite side (control area). The distance between the center of each filter paper and the midline of the plate was 25.6 mm. After 8 hours, the numbers of nematodes in the test and control areas were counted. Nematodes that remained within the 8-mm buffer zone at the midline were not counted.
In contrast to the results of the contact nematicidal activity assay against M. incognita, some VOCs lacked fumigant activity even after 3 days. For example, 2-nonanone and 4-acetylbenzoic acid were active against M. incognita immersed in treatment wells, but showed no fumigant activity, with mortality below 20% even at a concentration of 1,000 mg/L after 3 days (data not shown). Fumigant activity differs from contact nematicidal activity against M. incognita immersed in treatment wells, and is of great importance for nematode control because it helps to reach target nematodes that reside far from nematicides in the soil 22 . The VOCs with both fumigant activity and contact nematicidal activity against M. incognita may be more efficient when applied for nematode suppression.
It has been reported that many VOCs have nematicidal 35,38 and fumigant activity 39 , and affect the chemotaxis of the nematodes 40,41 , but VOCs with the comprehensive array of activities of those produced by KM2501-1 have not been previously described. The collective activities of these VOCs make KM2501-1 likely to be more effective as a nematicide. Especially effective was furfural acetone, with strong contact nematicidal (LC 50/2d = 4.44 mg/L) and fumigant activity (LC 50/3d = 75.12 mg/L), and the highest chemotaxis index among those tested (C.I. of 0.28 to 0.47). 2-Nonanone, 2-decanone, and the culture filtrate of KM2501-1 destroyed the integrity of the nematode pharynx and intestine, a mechanism that deserves to be explored further.
There have also been some reports about new modes of nematode control, like the "Trojan horse" mechanism 42 and the use of urea released by bacteria to mobilize nematode-trapping fungi to kill nematodes 43 . However, these mechanisms have only been tested with the model nematode Caenorhabditis elegans, unlike the "honey-trap" and repellent modes of action we present here, which were tested directly against M. incognita. Our results suggest that the "honey-trap" and repellent modes could potentially be used for PPN control.
In conclusion, P. polymyxa strain KM2501-1 provided good biocontrol of M. incognita due to the production of a comprehensive array of VOCs with nematicidal, fumigant, and chemotactic activity. As a sporeformer, the strain itself could readily be formulated for agricultural application, while the VOCs it produces also have potential for development as pesticides, either alone or in combination to improve the performance of existing chemicals. The results of this study demonstrate that the VOCs produced by strain KM2501-1 exhibit a complex array of strategies against nematodes, including a "honey-trap" mode, a fumigant mode and a repellent mode. In the "honey-trap" mode, furfural acetone lures M. incognita and then kills it. This "honey-trap" pattern of VOC–nematode interaction extends our understanding of the mechanisms of action of nematicidal VOCs. How the different mechanisms play roles in the various stages of host crop growth and interactions with nematodes should be the subject of further research.
Materials and Methods
Bacteria and nematodes. Paenibacillus polymyxa KM2501-1 was isolated from the rhizosphere soil of buttercup (Ranunculus) polluted with recalcitrant organic compounds in Hukou county, Jiangxi province, China, and stored at −80 °C in our laboratory. The method used to isolate strain KM2501-1 from the rhizosphere soil followed the literature published by our group 44 . Strain KM2501-1 was identified as Paenibacillus polymyxa by sequencing its 16S rDNA and constructing a phylogenetic tree of strain KM2501-1 and 13 other Paenibacillus or Bacillus strains based on 16S rDNA (Fig. S5). The strain was grown on King's medium B (KMB) agar plates at 28 °C for 48 h. Individual isolates were then inoculated into 100 mL KMB broth and incubated on a rotary shaker (180 rpm) at 28 °C in the dark for 48 h. Cultures were centrifuged and the supernatant was passed through a 0.22 μm nitrocellulose filter to prepare sterile culture filtrates (CF) for the assays described below. Cells of KM2501-1 were washed with sterile water three times and then suspended in sterile water. This bacterial suspension (BS) in water (OD600 of the undiluted suspension was 0.8) was prepared for the pot assays.
M. incognita was maintained on the roots of tomato (Solanum lycopersicum). Nematode eggs were isolated from the galls formed on infected tomato roots. To assess the nematicidal activity of KM2501-1 and the VOCs, egg masses were peeled off the roots with needles and placed in water at 20 °C. Freshly hatched J2 juveniles were collected in a sterile tube 3 days later and used in all of the assays.
Activity in vitro of culture filtrates of KM2501-1 against M. incognita. To examine nematicidal activity, 120 μL of undiluted culture filtrate, dilutions of 1:1 (v/v, 1/2 × CF), or dilutions of up to 1:3 (v/v, 1/4 × CF) were transferred to 96-well plates and each well was filled with a freshly hatched suspension of approximately 30 J2 juveniles. KMB broth was used as a control (CK). Each treatment was replicated three times. Plates were covered with plastic lids, maintained in the dark at 20 °C, and dead M. incognita were counted after exposure under an inverted microscope. M. incognita was considered dead when no movement was observed for 2 s after contact with a needle. The percentages of dead nematodes observed were corrected by eliminating natural death in a negative control according to the Schneider-Orelli formula 45 .
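The Schneider-Orelli correction cited above adjusts observed treatment mortality for natural mortality in the untreated control. The following is a minimal sketch of that calculation in Python; the counts are hypothetical and are not data from this study.

```python
def schneider_orelli_corrected_mortality(treated_pct, control_pct):
    """Corrected mortality (%) = (T - C) / (100 - C) * 100,
    where T is the observed mortality (%) in the treatment
    and C is the natural mortality (%) in the negative control."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Hypothetical counts: 26 of 30 juveniles dead in a treatment well,
# 2 of 30 dead in the KMB-broth control well.
treated = 26 / 30 * 100
control = 2 / 30 * 100
print(f"Corrected mortality: {schneider_orelli_corrected_mortality(treated, control):.1f}%")
```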
Nematicidal activity of P. polymyxa strain KM2501-1 VOCs. The nematicidal activity of P. polymyxa strain KM2501-1 VOCs was assayed in three-compartment Petri dishes according to the method of Gu et al. 21 with some modifications. Briefly, the bacteria were cultured on KMB agar for 24 h at 28 °C in one compartment, and a layer of 2% water agar (WA) was added to the other two compartments. After 24 h, about 200 M. incognita nematodes were added to each of the two compartments with WA. Plates were immediately covered with lids to prevent the escape of the volatiles. After incubation at 25 °C in the dark for 24 h, the numbers of mobile and immobile nematodes were recorded by counting under a microscope; the total number of nematodes counted had to exceed 100 per compartment. Immobile nematodes were immediately transferred to tap water to determine their potential for revival. As a control, KMB agar without KM2501-1 was added to one compartment of the plates. The test was repeated 4 times.
Control efficiency of P. polymyxa strain KM2501-1 against M. incognita in the greenhouse. Plastic round pots (18 cm × 18 cm × 12.5 cm) were filled with about 1 kg of sterile soil mixture (sand, field soil and organic matter, 1:1:1). One four-leaf-stage tomato seedling was transplanted into each pot and incubated in the greenhouse at 22-25 °C. Two days after transplanting, each tomato seedling was irrigated around the roots with 10 mL of either P. polymyxa KM2501-1 culture filtrate (CF), a washed bacterial suspension (BS) in water (OD600 of the undiluted suspension was 0.8), or water alone (negative control). Two days later, about 2,000 J2 juveniles of M. incognita were inoculated into the rhizosphere soil of each seedling. The culture filtrate (CF) and bacterial suspension (BS) of KM2501-1 were each tested at 4 concentrations, with five seedlings at each concentration and in the control group (CK). Two replicates were set up for each treatment. Sixty days after transplantation, the severity of root galling was assessed 46 .
Extraction of VOCs from fermentation broth of strain KM2501-1 by solid-phase microextraction (SPME). A new 75 μm CAR/PDMS SPME fiber (Supelco, Bellefonte, PA, USA) was conditioned with helium at 270 °C for 2 h prior to use. After each extraction cycle, the fiber was retracted into the SPME needle to prevent contamination and conditioned again with helium at 270 °C for 20 min. Extractions were performed in 15 mL Supelco SPME vials filled with 9 mL of bacterial culture containing a stir bar. The vials were clamped inside a thermostatic water bath placed on a hot-plate stirrer. The SPME needle was allowed to pierce the septum, and the fiber was exposed to the headspace of the vial for 90 min at 60 °C with constant magnetic stirring. The VOCs from 9 mL of KMB broth were used as controls.
Identification of nematicidal VOCs by gas chromatography-mass spectrometry (GC-MS). A Hewlett Packard 7890GC/5975MSD (Agilent Technologies, USA) equipped with an HP-5MS capillary column was used to separate and identify the VOCs. The carrier gas was helium with a flow rate of 1 mL/min in split-splitless mode. The SPME fiber was inserted directly into the front inlet of the gas chromatograph and desorbed at 270 °C for 2 min. The oven temperature was programmed as follows: 40 °C for 2 min, 40-180 °C at a rate of 4 °C/min, 180-250 °C at 5 °C/min, and held at 250 °C for 6 min. The temperatures of the transfer line and ion trap were 150 and 250 °C, respectively. Identification of VOCs was based on a comparison of the mass spectrum of each substance with standards in the GC/MS system data bank NIST08.L (National Institute of Standards and Technology). The experiment was conducted three times.
Contact nematicidal activity of VOCs against M. incognita. Pure compounds of the VOCs identified by GC-MS, namely 2-nonanol, 2-decanol, 2-undecanone, 2-undecanol, 4-acetylbenzoic acid, and furfural acetone, were individually subjected to dose-response experiments against J2 juveniles over a range of 10-300 mg/L. Being the least active of the pure substances tested, acetone, 2-heptanone, 2-nonanone and 2-decanone were tested over a range of 25-1,000 mg/L. Pure compounds were dissolved in ethanol and successively diluted in distilled water containing the polysorbate surfactant Tween 20. Final concentrations of ethanol and Tween 20 in treatment wells never exceeded 1% and 0.1% (v/v), respectively.
To examine nematicidal activity in vitro, 120 μL of commercial VOCs at various concentrations were transferred to 96-well plates and then the wells were filled with J2 juveniles (approximately 30 M. incognita/well). Solvent carriers were used as controls. Each treatment was replicated three times. Plates were covered with plastic lids, maintained in the dark at 20 °C for 2 days, and dead M. incognita were counted after exposure under an inverted microscope. M. incognita was considered dead when no movement was observed for 2 s after touching with a needle. The percentages of dead nematodes observed were corrected by eliminating natural death in a negative control according to the Schneider-Orelli formula 45 .
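The corrected dose-mortality data from this assay are summarized as LC 50/2d values by probit analysis (see the Statistical analysis section). The sketch below illustrates one way such a probit fit could be done in Python; the doses and mortality fractions are hypothetical, not the study's data, and the actual analysis was performed in SPSS.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical corrected mortality fractions observed at four doses (mg/L)
doses = np.array([10.0, 50.0, 100.0, 300.0])
mortality = np.array([0.12, 0.38, 0.71, 0.94])

# Probit model: P(dead) = Phi(a + b * log10(dose)), with Phi the standard normal CDF
def probit_curve(log_dose, a, b):
    return norm.cdf(a + b * log_dose)

(a, b), _ = curve_fit(probit_curve, np.log10(doses), mortality, p0=[0.0, 1.0])

# LC50 is the dose at which the fitted probability equals 0.5, i.e. a + b*log10(LC50) = 0
lc50 = 10 ** (-a / b)
print(f"Estimated LC50 ~ {lc50:.1f} mg/L")
```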
Fumigant activity of VOCs against M. incognita. The fumigant activity assay was conducted by the method of Nikoletta 35 with some modifications. A central well in each 96-well plate was filled with 200 μL of a test solution of a compound identified by GC-MS at a dose of 125-1,000 mg/L, or 0 mg/L as a negative control, and the four surrounding adjacent wells each received about 100 J2 juveniles suspended in 120 μL of water. Mortality percentages in the four surrounding wells were recorded after 72 hours, and the experiment was conducted four times. Immobile nematodes were immediately transferred to tap water to determine their potential for revival. The percentages of dead nematodes observed were corrected by eliminating natural death in a negative control according to the Schneider-Orelli formula 45 .
Chemotaxis of M. incognita towards commercial VOCs. Chemotaxis was assessed in 9-cm Petri dishes (Fig. 7) according to the method described by Bargmann et al. 41 with some modifications. Ten mL of 2% water agar was poured into a 9-cm Petri dish divided into 3 areas: a buffer area spanning 0.8 cm across the middle line, a test area and a control area. Two 11.2 mm-diameter sterile filter paper discs were placed in the test and control areas, respectively, with a distance of 25.6 mm between the center of each filter paper disc and the midline of the plate. Then 30 μL of different concentrations of VOCs were spotted onto the filter paper in the test area, and the same volume of solvent was spotted onto the filter paper in the control area. About 200-300 J2 juveniles of M. incognita (in 20 μL) were placed at the center of the plate. Chemotaxis assays were performed at 20 °C for 8 hours in the dark. The numbers of M. incognita in the test and control areas were then counted under an inverted microscope. Each VOC was tested at 5 concentrations (1, 10, 100, 1,000, and 10,000 mg/L) alongside a control group (30 μL of 0 mg/L VOC spotted onto the filter paper in both the test and control areas). The experiment was repeated 3 times, with 2 replicates at each concentration and for the control group at each test time.
For each VOC at each concentration, the chemotaxis index was calculated as C.I. = (N_test − N_control) / (N_test + N_control), where N_test and N_control are the numbers of nematodes in the test and control areas, respectively. For 0 < C.I. < 1 the VOC was considered an attractant, for −1 < C.I. < 0 a repellent, and for C.I. = 0 the VOC had no significant effect on chemotaxis.
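As a small illustration of the index defined above, the sketch below computes the C.I. from plate counts and classifies the response; the counts are hypothetical.

```python
def chemotaxis_index(n_test, n_control):
    """C.I. = (n_test - n_control) / (n_test + n_control), where the counts are the
    numbers of nematodes in the test and control areas (buffer-zone nematodes excluded)."""
    return (n_test - n_control) / (n_test + n_control)

def classify(ci):
    if ci > 0:
        return "attractant"
    if ci < 0:
        return "repellent"
    return "no significant effect on chemotaxis"

# Hypothetical counts for one plate
ci = chemotaxis_index(n_test=85, n_control=40)
print(f"C.I. = {ci:.2f} -> {classify(ci)}")
```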
Microscopic observation. M. incognita was observed under an inverted microscope (Olympus, IX73) to determine the integrity of the intestine and the pharyngeal region. Under normal conditions, the pharynx and intestinal tissues were well-organized and could be seen clearly.
Statistical analysis. Data were corrected by the Schneider-Orelli formula 45 and then analyzed using SPSS (Statistical Package for the Social Sciences), version 17.0 (SPSS, Chicago, IL, USA). LC 50 values were calculated using PROBIT analysis 47 , and data are shown as the mean ± standard deviation (SD) (n ≥ 3). Duncan's multiple range test was employed to test for significant differences between treatments at P < 0.05 in the pot experiment, the chemotaxis experiment and the experiment on the nematicidal activity of P. polymyxa KM2501-1 culture filtrate; different lowercase letters indicate significant differences between treatments (P < 0.05). Statistical comparisons between two values in the other experiments were performed with a t-test, and significant differences were determined according to a threshold of *P < 0.05; **P < 0.01; ***P < 0.001. | 6,582.8 | 2017-11-24T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Some Contributions to the Study of Oenological Lactic Acid Bacteria through Their Interaction with Polyphenols
Probiotic features and the ability of two oenological lactic acid bacteria strains (Pediococcus pentosaceus CIAL-86 and Lactobacillus plantarum CIAL-121) and a reference probiotic strain (Lactobacillus plantarum CLC 17) to metabolize wine polyphenols are examined. After summarizing previous results regarding their resistance to lysozyme, gastric juice and bile salts, the three strains were assessed for their ability to release phenolic metabolites after incubation with a wine phenolic extract. Neither of the two oenological bacteria was able to metabolize wine polyphenols, at least under the conditions used in this study, although a certain stimulatory effect on bacterial growth was observed in the presence of a wine-derived phenolic metabolite (i.e., 3,4-dihydroxyphenylacetic acid) and a wine phenolic compound (i.e., (+)-catechin). Bacterial cell-free supernatants from the three strains delayed and almost completely inhibited the growth of the pathogen E. coli CIAL-153, probably due to the presence of organic acids derived from the bacterial metabolism of carbohydrates. Lastly, the three strains showed a high percentage of adhesion to intestinal cells, and pre-incubation of Caco-2 cells with the bacterial strains prior to the addition of E. coli CIAL-153 produced a notable inhibition of the adhesion of E. coli to the intestinal cells.
Introduction
Probiotics are live microorganisms that promote healthy gastrointestinal microbiota and boost the immune response [1]. The main probiotic strains belong to the genera Lactobacillus and Bifidobacterium and were mainly isolated from dairy products or the human gastrointestinal tract. Some studies have evaluated the probiotic potential of bacterial strains isolated from alcoholic fermented beverages such as cider [2] and wine [3]. Recently, the probiotic features of strains of lactic acid bacteria (LAB) from an oenological bacteria collection, including Lactobacillus spp., Pediococcus spp. and Oenococcus oeni, have been assessed, although their mode of action is still poorly understood [4]. On the other hand, consumption of probiotics, which are able to metabolize polyphenols into physiologically active metabolites, has been proposed as a nutritional approach to improve the bioavailability of these phytochemicals, which would, in turn, enhance the health effects attributed to them [5,6].
Beneficial health effects derived from the moderate consumption of wine and its bioactive compounds, especially polyphenols, have been evidenced mainly in relation to diseases associated with oxidative stress and inflammation [7,8]. Currently, the beneficial effects of wine polyphenols on intestinal microbiota growth and functionality is a topic that is attracting research [9,10]. Wine polyphenols include benzoic and cinnamic acids, phenolic alcohols and stilbenes among the non-flavonoids, and anthocyanins, flavan-3-ols, flavonols and others among the flavonoids. Most of them are minimally absorbed in the small intestine but they are extensively metabolized by enzymes from the colonic microbiota [11,12]. As colonic catabolites could be present in higher concentrations than the parent compounds, the biological activities attributed to polyphenols seem to be mainly due to them [13,14]. Therefore, the bioactivity of wine polyphenols is likely to be dependent on the microbiota activity, which shows great human inter-individual differences [12].
On the other hand, numerous studies seem to indicate that phenolic compounds could positively modulate the gut microbiota through prebiotic effects, either promoting the growth of beneficial bacteria or exerting antimicrobial activity against pathogenic intestinal bacteria [15]. For instance, grape seed extracts of different flavan-3-ol composition have been shown to promote the growth of potentially beneficial bacteria (Lactobacillus sp.) and to decrease undesirable bacteria such as clostridia after batch culture fermentations [16]. Phenolic compounds contained in a cocoa powder reduced the growth of some members of the genera Staphylococcus and Clostridium, affecting the intestinal microbiota profile [17].
In this paper, we aimed to investigate more deeply the properties of potentially probiotic wine-isolated LAB. Thus, the objectives were: (a) to assess whether LAB were able to degrade wine polyphenols with the subsequent release of phenolic metabolites; (b) to monitor LAB growth in the presence and absence of some wine-related phenolic compounds; and (c) to evaluate LAB adherence to human intestinal cells, also considering the potential inhibition of the adherence of a pathogenic E. coli strain.
In vitro analyses were previously carried out to evaluate the resistance of the LAB strains to conditions in the gastrointestinal tract, including saliva and acid resistance and bile tolerance [4], and the data obtained are summarized in Table 1. All strains showed great resistance to lysozyme (>51%) and the capacity to survive at low pH values (pH 1.8), thereby suggesting good adaptation of the wine LAB strains to the hostile gastrointestinal environment. Moreover, the growth percentages of both oenological LAB strains at the maximum concentration of bile assayed (1%) were higher than 84%, which was even greater than that exhibited by the reference probiotic strain, L. plantarum CLC 17 (73%), reflecting good bile resistance. Average values from three independent repetitions are presented in the table.
Incubations of LAB Strains with Wine Polyphenols
Incubations of bacteria with the wine phenolic extract (Provinols™) were carried out as previously described [22]. Briefly, 1 mL of the wine extract solution (0.6 mg/mL) was mixed with 9 mL of inocula of each LAB strain (10⁸ CFU/mL) or sterile saline solution (control). Mixtures of each LAB strain suspension and saline solution (blank) (9:1) were also prepared. The mixtures were incubated for 0, 6 and 24 h, in duplicate, under anaerobic conditions at 37 °C with continuous stirring. Samples were centrifuged (10,000 rpm, for 10 min at 4 °C), and supernatants were kept at −20 °C until the UPLC analyses, which were performed in duplicate.
Growth of LAB Strains in the Presence of Phenolic Compounds
Bacterial growth was monitored using the method of García-Ruiz et al. [25], slightly modified. Aliquots of 100 µL of (+)-catechin or 3,4-dihydroxyphenylacetic acid solutions (0, 100, 200 and 500 µM) were placed in microplate wells with 100 µL of culture medium (MRS). Then, 20 µL of the diluted LAB strain (inoculum of 1 × 10⁶ CFU/mL) were added to all the microplate wells. The microtitre plates were incubated at 37 °C for 24 h in a Biotek Synergy H1™ multi-mode microplate reader (Winooski, VT, USA). Bacterial growth was determined by reading the absorbance at 600 nm. Assays were conducted twice in triplicate.
Growth of Pathogen E. coli in the Presence of Free Supernatants (CFS) from LAB Strains
Cell-free supernatants (CFS) of each LAB strain were collected from overnight cultures centrifuged at 4500 rpm for 10 min. After measuring the pH of the CFS (ranging from 5.1 to 5.4), aliquots were taken and adjusted to pH 7 using 1 M NaOH solution. All supernatants were sterilized by filtration (Symta, 0.22 µm PVDF, 17 mm, pK100). Bacterial growth in the presence of the CFS was measured using the microtitre assay described above. Aliquots of 200 µL of culture medium (MRS), CFS or neutralized CFS were placed in microplate wells. Then, 20 µL of the diluted E. coli strain (inoculum of 1 × 10⁶ CFU/mL) were added to all the microplate wells. Assays were conducted twice in triplicate.
Cell Culture Assays: LAB Adhesion and Inhibition of E. coli Adherence to Caco-2 Cells
Caco-2™ cells from human colon adenocarcinoma (Caco-2™ ATCC®) were grown and maintained in Dulbecco's Modified Eagle's medium (DMEM, Sigma-Aldrich), supplemented with 10% (v/v) foetal calf serum, at 37 °C in a 5% CO2/95% air atmosphere. For the adhesion and inhibition experiments, Caco-2 cells were seeded in 24-well tissue plates at 25,000 cells/m² density and grown over 15 days to obtain a monolayer of differentiated and polarized cells, as previously described by García-Ruiz et al. [4]. Cell culture assays were performed in duplicate and three independent experiments were carried out.
To assess the adhesion of the LAB strains to Caco-2 cells, 0.5 mL of inocula of the LAB strains (10⁸ CFU/mL) was added to Caco-2 cell monolayers previously washed with PBS. After 1 h of incubation at 37 °C in a 5% CO2 atmosphere, the wells were gently washed three times with PBS solution to remove unbound bacteria. Caco-2 cells and adhered bacteria were then detached using a 0.05% trypsin-EDTA solution and bacterial counts were carried out on MRS agar medium as described above. The adhesion capacity was expressed as the number of adhered bacteria (CFU/mL) relative to the total number of bacteria added initially (% Adhesion = (Adhered bacteria/Total added bacteria) × 100).
In order to study the effects of LAB on the adhesion of E. coli to Caco-2 cells, two different experiments were carried out: (a) inhibition, to test the ability of the LAB strains to inhibit the adhesion of E. coli; and (b) competition, to test the ability of the LAB strains to compete with E. coli for adhesion to Caco-2 cells. For the inhibition experiments, the LAB suspension (10⁸ CFU/mL) was first added to Caco-2 cell monolayers; after 1 h of incubation, non-bound bacteria were removed, the E. coli suspension (10⁸ CFU/mL) was added to the wells, and the mixture was again incubated for 1 h. Caco-2 cells and adhered bacteria (LAB/E. coli) were then detached and E. coli counting was carried out on TSA plates. The inhibition of the adhesion of E. coli was expressed as a percentage using the following formula: Inhibition of adhesion = 100 × (1 − T1/T2), where T1 and T2 are the percentages of adhesion of E. coli cells in the presence and absence of the LAB strains, respectively. The same experimental protocol was used for the competition experiments, but the LAB and E. coli strains were added simultaneously (at an initial ratio of 1:1) to the Caco-2 cells, followed by incubation for 1 h. Non-bound bacteria were removed and the bacterial counts were carried out as described above.
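The adhesion and anti-adhesion percentages defined in the two paragraphs above reduce to simple ratios of the plate counts. The following is a minimal sketch of both calculations; the CFU values are hypothetical and not results from this study.

```python
def adhesion_pct(adhered_cfu, added_cfu):
    """% Adhesion = (adhered bacteria / total bacteria added initially) * 100."""
    return adhered_cfu / added_cfu * 100.0

def inhibition_of_adhesion_pct(t1, t2):
    """Inhibition of adhesion (%) = 100 * (1 - T1/T2), where T1 and T2 are the % adhesion
    of E. coli in the presence and absence of the LAB strain, respectively."""
    return 100.0 * (1.0 - t1 / t2)

# Hypothetical counts
t2 = adhesion_pct(adhered_cfu=8.5e6, added_cfu=1.0e8)  # E. coli alone, ~8.5%
t1 = adhesion_pct(adhered_cfu=5.5e6, added_cfu=1.0e8)  # E. coli after pre-incubation with LAB
print(f"Inhibition of adhesion: {inhibition_of_adhesion_pct(t1, t2):.1f}%")
```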
Statistical Analysis
A paired-sample t-test was used to evaluate whether the changes in phenolic content of the wine extract (% relative to the values at time 0) after incubation with bacteria were different from 100. Also, one-way analysis of variance (ANOVA) and the Tukey test (at p < 0.05) were used for the comparison of the mean values of LAB growth at each time point in each of the time-course graphs. The IBM SPSS program for Windows was used for data processing.
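As a rough illustration of the analyses described above, the sketch below runs the equivalent tests in Python rather than SPSS: testing percentage changes against the reference value of 100 (a paired comparison against a fixed baseline reduces to a one-sample t-test of the percentages), and comparing group means by one-way ANOVA followed by Tukey's test. All numbers are made up for the example.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical % of initial phenolic content after 24 h of incubation (100 = no change)
pct_of_t0 = np.array([128.4, 135.1, 136.0])
t_stat, p_val = stats.ttest_1samp(pct_of_t0, popmean=100)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Hypothetical OD600 readings for three phenolic concentrations at one time point
od600 = np.array([0.52, 0.55, 0.54, 0.61, 0.63, 0.60, 0.50, 0.49, 0.51])
groups = np.array(["0 uM"] * 3 + ["100 uM"] * 3 + ["500 uM"] * 3)
print(stats.f_oneway(od600[:3], od600[3:6], od600[6:]))   # one-way ANOVA
print(pairwise_tukeyhsd(od600, groups, alpha=0.05))        # Tukey HSD post-hoc
```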
Results and Discussion
The selected LAB strains (the two oenological LAB strains P. pentosaceus CIAL-86 and L. plantarum CIAL-121 and the reference strain L. plantarum CLC 17) were used in the different experiments designed: incubations of LAB strains with wine polyphenols (Section 3.1), growth of LAB strains in the presence of phenolic compounds (Section 3.2), and growth of the pathogen E. coli and its adherence to Caco-2 cells in the presence of LAB strains (Sections 3.3 and 3.4). The three LAB strains used had previously shown good probiotic features in vitro (Table 1) [4]. Other strains belonging to the Lactobacillus and Pediococcus genera from different origins have also shown good probiotic properties, such as tolerance to gastric conditions and bile tolerance [4,26,27].
Metabolism of Wine Polyphenols by LAB Strains
The capacity of the three selected LAB strains (the probiotic L. plantarum CLC 17, and the oenological strains P. pentosaceus CIAL-86 and L. plantarum CIAL-121) to metabolize wine polyphenols was assessed through their incubation with a commercial wine phenolic extract (Provinols™) under nutrient-restricted culture conditions. Among the different phenolic compounds targeted (mandelic acids, benzoic acids, phenols, hippuric acids, phenylacetic acids, phenylpropionic acids, cinnamic acids, 4-hydroxyvaleric acids, valerolactones and flavan-3-ols), a total of 15 compounds were previously quantified by UHPLC-MS/MS analysis [22]. Table 2 shows the data corresponding to the sum of the concentrations of the individual compounds at 0, 6 and 24 h of incubation. Of the three LAB strains tested, only L. plantarum CLC 17 produced significant increases in the concentration of phenolic compounds after 24 h of incubation (133.2% in relation to t = 0 h). Therefore, phenolic-degrading enzymatic activities might be strain-dependent (i.e., present in L. plantarum CLC 17), as other potential probiotic bacteria belonging to the same species were not active on wine polyphenols (i.e., L. plantarum CIAL-121). However, additional studies are intended to shed more light on the enzymatic activities of LAB. Previous studies with the same wine phenolic extract used in this study also reported the release of the same phenolic acids after batch fermentations [18] or simulated gastrointestinal digestion [28] inoculated with human faecal microbiota.
Effects of Phenolic Compounds on the LAB Growth
In order to look more deeply into the effects of wine phenolic compounds and their metabolites on bacterial performance, the growth of the probiotic strain L. plantarum CLC 17 and the two oenological LAB strains P. pentosaceus CIAL-86 and L. plantarum CIAL-121 was monitored in the presence of (+)-catechin, a main phenolic compound present in wine, and 3,4-dihydroxyphenylacetic acid, a microbial-derived phenolic metabolite whose concentration in faeces had been reported to increase significantly after moderate consumption of red wine [12]. Time-course graphs indicated a certain stimulatory effect on the growth of the three strains in the presence of 3,4-dihydroxyphenylacetic acid (Figure 1). On the other hand, the monomer (+)-catechin only promoted the growth of L. plantarum CLC 17 (Figure 1). Results of the one-way analysis of variance (ANOVA) and Tukey test (at p < 0.05) did not show significant differences for most of the mean values, except in the case of L. plantarum CLC 17 in the presence of 3,4-dihydroxyphenylacetic acid (50, 100 and 250 µM) in comparison to the absence of the compound, from 6 to 24 h (Figure 1b). Therefore, our results confirmed that the chemical structure of polyphenols did indeed influence their effects on bacterial growth. In relation to this, other authors observed that flavanols with a galloyl moiety ((−)-epigallocatechin, (−)-epicatechin-3-gallate and (−)-epigallocatechin-3-gallate) exhibited more activity on bacterial growth than those without a galloyl moiety (catechins and (−)-epicatechin) [29]. Also, the microbial potency of polyphenols towards bacterial growth has been reported to be dependent upon bacterial strain, species and genus [30], as we have also observed in our study.
Effects of the Cell-Free Supernatants (CFS) from the LAB Strains on Growth of Pathogen E. coli
Bacterial cell-free supernatants of LAB strains have been reported to exhibit functions similar to those of the living bacteria from which they were derived, and to reduce the infection risk associated with the use of probiotic bacteria in patients with depressed immune systems [31]. Therefore, CFS from the LAB strains were prepared in MRS broth and their antibacterial activity against E. coli CIAL-153 was evaluated (Figure 2). For the three LAB studied, CFS delayed the bacterial lag phase (from 5 to 12 h), and a strong inhibition of pathogen growth was observed (Figure 2a). Other authors have also reported an extension of the bacterial lag phase and lower growth rates of pathogenic bacteria in the presence of CFS from strains belonging to the Lactobacillus, Bifidobacterium, Lactococcus, Streptococcus and Bacillus genera [32,33]. In agreement with them, we hypothesized that these antimicrobial effects were mainly due to the organic acids produced in significant quantities (and the consequent lowering of pH) as a result of the ability of the LAB strains to ferment carbohydrates. In addition, Gram-negative pathogens such as E. coli tend to be more sensitive to organic acids than to bacteriocins [33], which explains the strong inhibition observed. We confirmed that the CFS from the oenological LAB strains (P. pentosaceus CIAL-86 and L. plantarum CIAL-121) and the probiotic strain (L. plantarum CLC 17) had acidic pH values of 5.25, 5.14 and 5.11, respectively.
Figure 2: Growth (absorbance at 600 nm) of E. coli CIAL-153 over incubation time (h) in the presence of (a) CFS and (b) pH-neutralized CFS from the LAB strains.
Moreover, with the aim of researching the effect of other antimicrobial substances, such as bacteriocins, in addition to organic acids on E. coli growth, the CFS were adjusted to pH 7 and their antibacterial activity against E. coli CIAL-153 was evaluated again (Figure 2b). Neutralization of the supernatants from all the LAB strains counteracted the antagonistic effects of the acidic CFS against the pathogenic strain, so the lag phase was similar to that of a standard growth curve and a significant increase in the growth of the pathogen was observed. Other authors have also observed that the neutralization of CFS reduced the antimicrobial activity on pathogen viability and growth, although they still observed some effects [32,33]. In our case, neutralized supernatants from the L. plantarum CLC 17 strain still exhibited some inhibition of the growth of E. coli CIAL-153, which suggested that this strain produces other antibacterial compounds active against E. coli. Arena et al. [34] reported that antimicrobial activity is mainly strain-specific rather than genus/species-specific and provided evidence that several of the 79 screened L. plantarum strains possess a significant ability to counteract various pathogenic bacteria, including both Gram-negative and Gram-positive species.
Effects of LAB on Adherence of Pathogen E. coli to Caco-2 Cells
An important property of probiotic candidates is their ability to adhere to the intestinal mucosa, which excludes pathogens from cell adherence and infection progression. Initially, we investigated the ability of the three LAB strains (L. plantarum CLC 17, P. pentosaceus CIAL-86 and L. plantarum CIAL-121) to adhere to human intestinal Caco-2 cells, because this cellular model expresses morphological and functional differentiation in vitro and shows characteristics of mature enterocytes. Adhesion levels to Caco-2 cells of the three LAB strains ranged from 8.65% to 10.01% (Figure 3) and were in line with those obtained previously [4] and in the range of other probiotics previously reported in the literature under in vitro conditions [35,36].
Having confirmed the ability of the LAB strains to adhere to Caco-2 cells, the adhesion of E. coli CIAL-153 to these intestinal cells was assessed in the presence of the different LAB strains. Initially, it was found that the adhesion of E. coli CIAL-153 to Caco-2 cells on its own was 8.51% ± 1.83%. The inhibition of adherence of E. coli CIAL-153 to Caco-2 cells by the probiotic LAB strains in the anti-adhesion assays (competition and inhibition) is shown in Figure 4. Pre-incubation of Caco-2 cells with LAB strains prior to the addition of E. coli CIAL-153 (inhibition assay) produced a notable inhibition of the adhesion of E. coli to the intestinal cells for the three strains with respect to the control (absence of LAB strains). P. pentosaceus CIAL-86 was the most effective strain in inhibiting the adhesion of E. coli CIAL-153 (>35%), while L. plantarum CIAL-121 showed inhibition values similar to those observed for the reference probiotic strain, L. plantarum CLC 17 (20.7% and 22.6%, respectively) (Figure 4). These percentages were similar when Caco-2 cells were incubated at the same time with both E. coli and the probiotic LAB strains (competition assay), ranging from 17% to 22% with respect to the control (absence of LAB strains). The high values from the inhibition experiment could indicate an effective competition of LAB strains against E. coli CIAL-153 for common adhesion receptors [37] or other anti-adhesion factors [38]. Thus, the ability to inhibit the adhesion of E. coli CIAL-153 to Caco-2 cells appeared to be influenced by the LAB strains, which suggested a certain pathogen-LAB specificity, as indicated by other authors [35].
Conclusions
In vivo reports suggest that wine polyphenols exert an essential impact on intestinal microbiota growth and functionality (see [9] for review). However, an important question that remains unsolved is whether these benefits may be enhanced by concomitant interactions between wine polyphenols and probiotics at the gut level. This paper investigates some new metabolic features and probiotic characteristics of oenological lactic acid bacteria, in particular P. pentosaceus CIAL-86 and L. plantarum CIAL-121, based on their interaction with polyphenols. Neither of these two oenological bacteria was able to metabolize wine polyphenols, at least under the conditions used in this study, although this metabolic potential might be strain-dependent, as the probiotic reference strain L. plantarum CLC 17 was found to be effective in metabolizing wine polyphenols. However, growth of both the oenological (P. pentosaceus CIAL-86 and L. plantarum CIAL-121) and reference (L. plantarum CLC 17) strains was stimulated in the presence of wine phenolic compounds (i.e., (+)-catechin) and wine-derived phenolic metabolites (i.e., 3,4-dihydroxyphenylacetic acid), although no clear dose-dependent effect was observed. Bacterial cell-free supernatants from the three LAB strains delayed and almost completely inhibited the growth of E. coli CIAL-153, which may be mainly attributed to the presence of organic acids derived from the metabolism of carbohydrates by LAB. In relation to their adhesion to intestinal cells, the three LAB strains showed a high adhesion percentage, especially P. pentosaceus CIAL-86. Moreover, pre-incubation of Caco-2 cells with LAB strains prior to the addition of E. coli CIAL-153 produced a notable inhibition of the adhesion of E. coli to the intestinal cells. Nevertheless, the effect of the selected lactic acid bacteria on the growth and adhesion to intestinal cells of other gut pathogenic bacteria should be investigated. To our knowledge, there are very few reports considering probiotic features of LAB isolated from wine such as the ones investigated here, which emphasizes the novelty of these results. Overall, these in vitro results confirm the potential of oenological LAB strains as probiotics, with the aim of developing general nutritional strategies and designing specific dietary recommendations based on the combination of active phenolic compounds/extracts and probiotics, thus contributing to the ultimate goal of promoting intestinal health. Nevertheless, further in vitro and in vivo investigations are still necessary in order to confirm these potential beneficial effects.
Figure 2 .
Figure 2. Growth curves of E. coli CIAL-153 in the presence of cell-free supernatants of L. plantarum CLC 17, P. pentosaceus CIAL-86 and L. plantarum CIAL-121 strains before (a) and after (b) being neutralized at pH 7.
Table 1 .
Resistance to lysozyme (% Survival), tolerance to simulated gastric juice on the counts (log CFU/mL) at different pH values and incubation times, and bile resistance (% Growth) of the lactic acid bacteria (LAB) strains used in this paper [4].
a percentage in relation to t = 0 h.** Mean significantly different from 100 (p < 0.01) using paired-sample t-test. | 6,763.4 | 2016-10-05T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
An Explanation for China's Economic Growth: Expenditure on R&D Promotes Economic Growth — Based on China's Provincial Panel Data of 1997-2013
There are many possible explanations for China's economic growth; this paper offers a different perspective. This study examines the relationship between government and enterprise expenditure on R&D and economic growth, using China's provincial panel data of 1997-2013 with a multiple linear regression. The study finds a strong and positive correlation between expenditure on R&D and economic growth. Moreover, the study has a new finding: compared with the strong correlation for enterprise R&D expenditure, the correlation between government R&D expenditure and economic growth is nearly zero. One possibility is that government R&D expenditure is directed more toward basic research, which does not directly promote economic growth. This finding does not imply that government R&D expenditure is not a necessary component of economic growth. Both enterprise and government R&D expenditure are important for economic growth.
Introduction
In 2012 the rate of China's GDP growth was 7.7%, falling below 8% for the first time since 2000. This means that the economy is slowing down. The rate of GDP growth in the first quarter of 2015 fell to 7 percent, because the demographic dividend was disappearing and the cost of labor, energy and raw materials was rising. The pattern of technological advance is also changing: the profits coming from imitated technology and learned techniques are shrinking, so imitation technology will no longer be a competitive advantage in the future. At the same time, independent research and innovation become increasingly important. In this context, the government proposes to build a national innovation system, in order to change the traditional way of economic development to an innovation-driven way of economic development. Scholars from different countries and regions generally believe that research activities can promote economic development, which means that increasing R&D investment has become a common requirement in many countries and regions. The question is what the interaction between public and private sector R&D expenditure and economic growth is, and why it occurs. This article will try to explain it.
At present, few scholars have used Chinese provincial R&D expenditure panel data to study the relationship between R&D expenditure and economic growth. This article attempts to do so. It uses China's 1997-2013 provincial data to study the relationship between economic growth and R&D expenditure from the public and private sectors, and in particular how public and private sector R&D expenditure affect economic growth. Based on this research, we hope to provide some advice on improving the efficiency with which R&D expenditure is used, and to provide some evidence that innovation by enterprises through R&D investment is one of the main explanations for China's economic growth.
Studies on Relationship between the R&D Expenditure and Economic Growth out of China
The contribution of technological progress to economic growth was demonstrated by many economists early on. Schumpeter, for example, put forward that technical progress was an important driver of economic growth. Solow's study [1] examined this view and provided empirical support: most of the growth that could not be explained by the accumulation of capital and labor could be explained by technological progress. After that, many scholars studied the relationship between technological progress and economic growth using empirical methods. Sylwester [2] took 20 OECD countries as a sample and used multiple regression models to study the relationship between R&D expenditure and economic growth. The result was that there was no significant relationship between them across the 20 OECD countries. However, a strong and significant correlation between R&D expenditure and economic growth emerged when the United States, Japan, Germany, France, Britain, Italy, and Canada were taken as the sample. Sylwester considered that there might be two explanations: one was that R&D investment was more important in developed countries because their economic growth mainly came from scientific and technological progress; the other was that in these countries, where the service sector accounted for more than half of the economy, the role of R&D investment in the service sector was not clear. Madden and Savage took OECD countries and some Asian countries as the research sample, Kuo and Yang [3] analyzed Chinese provincial panel data, and Bronzini and Piselli [4] investigated Italian data. They all found a positive correlation between R&D expenditure and economic growth. In addition, some researchers also found that the effect of R&D expenditure on science and technology was cumulative and delayed, so that it would not promote economic development immediately.
After confirming the significant correlation between R&D investment and economic growth, some researchers began to study the rate of return on R&D expenditure from different sectors. Most concluded that, compared with enterprise R&D expenditure, government R&D expenditure makes a smaller contribution to economic growth. This is mainly due to the greater spillover effects of government investment in science and technology, and to the long lead times and lag effects associated with government R&D expenditure, which focuses mostly on basic research. Griliches [5] found that the rate of return on science and technology investment in the private sector was higher than in the public sector; however, public sector R&D expenditure produces driving effects and generates additional productivity and indirect effects. Some studies found that government R&D expenditure did not bring immediate results for economic growth, but most national and regional governments still paid considerable attention to science and technology investment because of the lag and spillover effects of government R&D expenditure. Salter and Martin [6] and Inekwe [7] found that after World War II, United States federal government R&D expenditure showed substantial long-term growth, while United States economic growth, which had been stable since the 1820s, did not show a dramatic acceleration; the two results did not seem to match. Since economic growth also did not show a downward trend, they believed that the large amount of government R&D expenditure had played an important role in maintaining the long-term stable growth of the US economy.
Some scholars found that certain studies showed no correlation, or even a negative correlation, between government R&D expenditure and economic growth. Park [8] took 10 OECD countries as a sample; on the one hand his studies found a negative correlation between government R&D expenditure and economic growth, while on the other hand there were spillover effects whereby government R&D expenditure affected private R&D expenditure across countries. Because of this, government R&D expenditure might have an indirect, positive impact on productivity. Nadiri [9] used an empirical model to analyze data from the United States, France, Germany, Japan and Britain, and found a significant correlation between R&D expenditure and both outputs and productivity. Considering the spillover effects, the return on private sector R&D is generally 20%-30%, while the total returns on R&D approach 50%. Griliches and Mairesse [10] found that the contribution of R&D expenditure to manufacturing was 7% in the United States, and Griffith et al. [11] found it was 1.2%-2.9% in the UK.
Studies on Relationship between the R&D Expenditure and Economic Growth in China
Research in China has mainly focused on using time series regression models to study the relationship between R&D expenditure and economic growth. The general conclusion is that R&D expenditure promotes economic growth. Enha Hu [12], testing China's 1983-2006 data released by the Bureau of Statistics of China, found a significant correlation between R&D expenditure and economic efficiency: not only could R&D expenditure promote current economic growth, but there was also a 2-year lag effect. Kai Wang [13], taking 1978-2008 time series data as a sample and using a variety of methods to examine the relations between Chinese fiscal R&D expenditure and economic growth, found that fiscal R&D expenditure promoted economic growth, and that there were "time lag" and "receding marginal effect" phenomena.
Yun Zhu [14], using co-integration methods to examine 1978-2005 fiscal data, found that R&D expenditure and economic growth interacted with and promoted each other. Bonai Fan [15], examining 50 years of fiscal data from 1953 to 2002, found a causal relationship between R&D expenditure and economic growth, with the contribution rate of R&D expenditure to economic growth being 17.6%. In addition, Fangyuan Lu [16] and other scholars, using provincial data to study the effect of provincial R&D expenditure on economic efficiency, found regional differences. Current research outside China focuses on the following aspects. The first is the relationship between R&D expenditure and economic growth; the conclusion is that R&D expenditure promotes economic growth. The second is whether R&D expenditure has spillover effects, and how large they are, depending on whether the expenditure comes from the public or the private sector; the result is that the spillover effects of public sector R&D expenditure are more significant. The third is whether the effects of R&D expenditure on economic growth differ between developed and developing countries and regions; the results show that the influence of R&D expenditure on the economy is more significant in high-income countries, particularly those that are advanced in the field of science and technology. The fourth is the relationship between R&D expenditure from different sources and economic growth; the general conclusion is that there is a significant positive correlation between them. Chinese scholars have conducted similar studies and reached similar conclusions. First, China's R&D expenditure promoted economic growth, based on analysis of China's time series data. Second, Chinese scholars studied the promotion effect of Chinese fiscal R&D expenditure on economic growth and, owing to the time lag and cumulative effects, found the promotion to be less significant. Finally, Chinese scholars using provincial panel data to study the effect of provincial R&D expenditure found regional differences.
Some Instructions
In China, R&D expenditure mainly comes from government funds, enterprise funds and other funds. Government funds come from government finance, including budgetary expenditure and other extra-budgetary expenditure for science and technology research activities. Enterprise funds come mainly from enterprises for science and technology research activities and are used by enterprise R&D departments, research institutes and university R&D. The rest of the R&D expenditure is classified as other R&D expenditure.
Government R&D expenditure and enterprise R&D expenditure are the two main sources of R&D expenditure, so we focus on how both of them affect China's economic growth. We take the role the government plays in scientific and technological activities to be captured by government R&D expenditure, and the role of enterprises to be captured by enterprise R&D expenditure. By analyzing these different roles, we can understand what role each plays and how it plays that role. On this basis we can draw some conclusions and make some suggestions. We now propose the following hypothesis and will use Chinese provincial panel data to examine it.
Constructing a Hypothesis
The hypothesis is that scientific and technological research activities can be transformed into productivity, thus contributing to economic growth. Both private sector and public sector R&D expenditure contribute to economic growth, but compared with government R&D expenditure, enterprise R&D expenditure has more significant effects.
The Explanation for the Sample Data
We use 1997-2013 panel data for China's 31 provinces as the sample, so the number of observations is 527 (31 provinces × 17 years). The variables are gdp for GDP, grde for government R&D expenditure, erde for enterprise R&D expenditure, orde for other R&D expenditure, employment for the number of employees, and fdi for FDI. The unit of gdp, grde, erde and orde is 100 million RMB. The unit of fdi is 100 million dollars. The unit of employment is 10,000 persons. The data come from the "China Statistical Yearbook of Science and Technology", the "China Statistical Yearbook" and other files released by the government. Since there was no standard statistical classification before 1997, we only use 1997-2013 data (Table 1). Considering multicollinearity and autocorrelation, we use panel data. In order to obtain objective and accurate results, we conduct stationarity tests and take logarithms of the data.
Panel Data Stationarity Test
Panel data contain time-series and cross-sectional information at the same time. If the data are not stationary, spurious regression may arise and affect the final result, so it is necessary to carry out stationarity tests. We use the Im, Pesaran and Shin W-stat, the ADF-Fisher Chi-square and the PP-Fisher Chi-square tests. As shown in Table 2, after taking first differences, all the variables pass the stationarity test.
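As an illustration of the Fisher-type combination underlying the ADF-Fisher statistic, the sketch below (a minimal Python example, not the authors' code; the DataFrame layout and column names are assumptions) runs an ADF test on each province's first-differenced log series and combines the p-values with the Maddala-Wu chi-square statistic.

import numpy as np
import pandas as pd
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

def fisher_panel_unit_root(panel: pd.DataFrame, var: str) -> float:
    # ADF-Fisher test: combine per-province ADF p-values.
    # `panel` is assumed to have columns ['province', 'year', var].
    pvalues = []
    for _, grp in panel.sort_values("year").groupby("province"):
        series = np.log(grp[var]).diff().dropna()   # first difference of the log, as in Table 2
        if len(series) < 5:                         # too short to test reliably
            continue
        _, pval, *_ = adfuller(series, regression="c", autolag="AIC")
        pvalues.append(pval)
    stat = -2.0 * np.sum(np.log(pvalues))           # Maddala-Wu: -2 * sum(ln p_i) ~ chi2(2N)
    return chi2.sf(stat, df=2 * len(pvalues))       # panel-level p-value

A small panel-level p-value rejects the unit-root null for the differenced series, mirroring the conclusion reported in Table 2.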
Model Specification
Our goal is to determine the relationship between R&D expenditure and economic growth, and which source of R&D plays the greater role in regional economic growth. We therefore define every province's GDP as the dependent variable, and government R&D expenditure and enterprise R&D expenditure as the independent variables. The control variables contain other R&D expenditure, fdi and employment. Consistent with previous empirical research, this article assumes that the production function of the regional economy is similar to the Cobb-Douglas production function. At the same time, we assume that economic growth is of the endogenous type. The model is therefore a Cobb-Douglas-type production function, which we transform into logarithmic form.
The equation is set up as follows: Y_it represents province i's GDP in year t; K_it and L_it represent the independent variables (government and enterprise R&D expenditure); M_it represents the control variables, which, in line with most studies, contain FDI and the number of employees; a province-specific term represents factors that do not change over time and are unique to each province (for example, natural environmental features); a time-varying term represents changes in science and technology policy; and ε represents the error term. In summary, the specific equation relates gdp (GDP) to grde (government R&D expenditure), erde (enterprise R&D expenditure), orde (other R&D expenditure), employment (number of employees) and fdi (FDI), all in logarithms.
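The in-text equation did not survive extraction. A log-linear Cobb-Douglas specification consistent with the variable definitions above would read as follows (a plausible reconstruction; the symbols \mu_i and \nu_t for the province and time effects are our labels rather than the authors' notation):

\ln(gdp_{it}) = \beta_0 + \beta_1 \ln(grde_{it}) + \beta_2 \ln(erde_{it}) + \beta_3 \ln(orde_{it}) + \beta_4 \ln(fdi_{it}) + \beta_5 \ln(employment_{it}) + \mu_i + \nu_t + \varepsilon_{it}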
Regression Analysis of Data
We use the mixed (pooled) effects model, the fixed effects model and the random effects model to run regressions on the provincial panel data. The results are shown in Table 3. The regression results show that all of the models are highly significant; detailed results are given in Table 3.
Optimal Regression Model Selection
When the fixed effects model is selected, the F test shows that the fixed effects model is better than the mixed OLS model. In order to ensure the accuracy of model selection, this paper performs the redundancy test on the fixed effects model and the mixed effects model, and examines whether the random effects are significant. The redundancy test helps us select the better of the fixed effects and mixed effects models, while the Hausman test lets us select the better of the fixed effects and random effects models. After applying these tests (Table 4), we conclude that the fixed effects model is the best choice. There are two main independent variables, government R&D expenditure and enterprise R&D expenditure. The coefficient of enterprise R&D expenditure is 2.016544, which means there is a positive relationship between enterprise R&D expenditure and GDP: increasing enterprise R&D expenditure on science and technology by one unit raises GDP by 2.016544 units. However, GDP cannot be well explained by government R&D expenditure, because its coefficient is only 0.1049901; each unit of government R&D expenditure brings only a 0.1049901 unit increase in GDP. The control variables, FDI, other R&D expenditure and the number of employees, are positively correlated with GDP.
Under the fixed effects model, the independent variables, government R&D expenditure and enterprise R&D expenditure, both pass the 1% significance level test. That means the dependent variable can be well explained by the independent variables, and the model fits well.
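To make the fixed-effects estimation concrete, the sketch below (an illustrative within-estimator in Python, not the authors' code; the DataFrame layout and column names are assumptions) demeans each log variable by province and runs OLS on the demeaned data, which is numerically equivalent to including province dummies.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def fixed_effects_within(df: pd.DataFrame):
    # Within (fixed-effects) estimator for ln(gdp) on log R&D and controls.
    # `df` is assumed to have columns
    # ['province', 'gdp', 'grde', 'erde', 'orde', 'fdi', 'employment'].
    cols = ["gdp", "grde", "erde", "orde", "fdi", "employment"]
    logs = np.log(df[cols]).add_prefix("ln_")
    data = pd.concat([df[["province"]], logs], axis=1)
    # Demean every variable within each province (the "within" transformation).
    demeaned = data.groupby("province").transform(lambda s: s - s.mean())
    y = demeaned["ln_gdp"]
    X = demeaned.drop(columns="ln_gdp")   # province effects are swept out by demeaning
    return sm.OLS(y, X).fit()             # coefficients correspond to the FE estimates

# Hypothetical usage: res = fixed_effects_within(panel_df); print(res.summary())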
Conclusions and Policy Recommendations
We use the fixed effects model to analyze 1997-2013 panel data for China's 31 provinces. We find a positive correlation between both enterprise and government R&D expenditure and GDP, although the correlation for enterprise R&D expenditure is strongly positive while that for government R&D expenditure is nearly zero. R&D expenditure from different sources does not have the same effect on GDP. This is similar to Park's study [8].
These conclusions do not negate the contribution of government R&D expenditure to economic growth. They can be explained by the following considerations. First, government R&D expenditure and enterprise R&D expenditure have different investment areas. The goal of enterprises is to pursue profits, so they usually invest in areas where research can be transformed quickly, with large output and high efficiency; we call this applied scientific research. The government, however, considers more the long-term social development strategy and public welfare. Its investment mainly focuses on basic research, which needs a large initial investment and has lower economic value in the short term but plays an important role in long-term social development; for example, government funds support basic research in mathematics, physics and other fields at universities and other institutions. Second, according to research by scholars around the world, R&D expenditure has certain cumulative and time-lag effects, especially for government funds devoted to basic research. The process from research to productivity is not immediate, so there is a bias when we regress GDP on contemporaneous government R&D expenditure. Third, given the spillover effects of government R&D expenditure on public research activities, its role in economic growth may be underestimated.
Compared with government R&D expenditure, enterprise R&D expenditure plays a more important direct role in economic growth. This is consistent with the mainstream viewpoint in China, now expressed by the government as "let the market play a decisive role", and it also provides ideas for science and technology development. An innovation-driven economy relies mainly on the technological innovation of enterprises. Enterprises are not only the main body of the market economy but also the main body of scientific and technological innovation: they know the needs and supplies of the market, they invest in science and technology, they transform research into productivity and products, and they create profits and promote economic growth. At the same time, the government's contribution to scientific and technological innovation cannot be ignored. Government R&D expenditure should play the leading role of public capital in scientific and technological innovation, in order to complete the change from the traditional mode of economic development to an innovation-driven mode. Government and enterprise, as the two wheels of a cart, should each play to its comparative advantage in scientific and technological innovation, cooperate with each other and maintain a win-win relationship.
However, there are so many factors that may affect economic growth, such as the educational level of employees, and administrative efficiency of provincial government.This article just studies some of them.In order to explain economic growth better, we also need to continue to increase the appropriate variables.This issue will be explored in future studies.
Table 1 .
The description of the main variables.
Table 2 .
The results of panel data stability test. | 4,240.6 | 2015-11-19T00:00:00.000 | [
"Economics"
] |
Performance Of Concrete-To-Concrete Bond Strength in Wetland Area
One of the techniques among building rehabilitation methods is repair. Repair is a rehabilitation process to restore the initial capacity of damaged structural components. A fairly popular repair technique is concrete-to-concrete bonding. The strength of this bond depends on several factors: the mixture used for the repair material affects the bond strength, as do the surface treatment and curing conditions. This study analysed the influence of surface treatment and curing conditions on bond strength. Surface preparations were performed with four methods: as cast, drill holes, grooving, and bonding agent. The curing cycle applied two conditions, normal and wet-dry, and the tests comprised compressive strength tests, slant shear tests, tensile tests, and flexural tests. The results showed that bond strength under wet-dry environmental conditions was lower than under normal environmental conditions, and the highest bond strength values were found, in order, for the grooving treatment, drill holes, as cast, and the bonding agent.
INTRODUCTION
Structural collapse can be prevented by building rehabilitation methods, including strengthening and repairing. Strengthening is the process of raising the existing capacity of an undamaged structure (or structural component) to a certain level. On the other hand, repairing is a rehabilitation process to restore the original capacity of a damaged structural component [1]. A popular repair and reinforcement technique for structural systems with degraded quality is the reinforced concrete (RC) jacket. Several studies have shown that RC jackets increase load-bearing capacity and stiffness, improving overall structural performance. However, this method has disadvantages, such as heavy and thick material mass, time-consuming procedures, the large number of workers required, and increased stiffness. These reasons have led researchers to turn to new jacketing techniques with alternative materials, such as fibre-reinforced polymer (FRP) and cement-based materials, including ferrocement, steel fibrous concrete or mortar, high-performance fibre-reinforced concrete, and self-compacting concrete (SCC) [2]. Over the past two decades, there has been increasing interest among engineers in using FRPs for the repair and capacity enhancement of brittle RC elements. One way of using FRP to repair RC structures is by attaching it to the external surface. Previous studies have indicated that the effectiveness of an external concrete attachment with FRP is contingent upon the quality of the attachment to the concrete surface [3]. The strength of the concrete-to-concrete bond (referring to the substrate that requires repair with an overlay on its surface) is influenced by several factors, including the preparation of the substrate's surface, the use of adhesives, the mechanical properties of the concrete, moisture content, the type of substrate, environmental conditions, surface tension conditions, and the presence of cracks in the substrate. The choice of mixture for the new concrete used as repair material also affects the bond strength, although its impact is relatively small when compared to the mixture used in the substrate [4].
The addition of silica fume at an optimum dosage of 8% was able to increase the bond strength values by 184% and 244% at SSD and dry humidity of the overlay, respectively [5]. Adding latex to the mix was also found to increase the bond strength. Lower bond strength is obtained with a higher water-to-cement ratio or smaller-sized fine aggregates, due to higher shrinkage [6]. Aysha [7] conducted research on surface preparation by shot blasting, wire brushing, partial surface peeling, and as cast, and found that the bond strength was highest with shot blasting, followed by wire brushing, partial surface peeling, and as cast. An analysis reviewed by Fathy et al. [4] reported that the increase in bond strength was in the range of 25%-32% with grooving and sandblasting surface preparation. In Al-Madani's research [8], it was discovered that the bond strength after sandblasting was significantly higher than that of drill holes and as cast; this also aligns with the research conducted by Valikhani [9], where an increase in bond strength of about 125% occurred with sandblasting, with an additional 17% increase when using mechanical connectors. From the above studies, it can be concluded that the surface preparation of the substrate affects the bond between the two concretes.
Based on the results of Zuo [10], severe degradation occurred on the surface of composite mortars during cool-heat rather than wet-dry environmental cycles.However, research conducted by Al-Madani [8] showed that, generally, environmental conditions did not significantly affect bond behaviour.Based on the above studies, there is a need for research on the effect of using an overlay with a higher compressive strength than the substrate, using surface preparation, such as grooving, as cast, bonding agent, and drill holes.Furthermore, the disparity in research results concerning environmental conditions necessitates further investigation into concrete repair.Therefore, this research aimed to evaluate the performance of concrete-to-concrete bonding under varying surface preparations and environmental conditions.
Experimental Program
This study aimed to investigate the effect of curing treatments (full wet/N and wet-dry/W) and surface preparation on the bond strength of concrete. The bond strength tests included a slant shear test, a splitting tensile test and a flexural test, as shown in Fig. 1. There were four surface preparation methods: As Cast (AC), Drill Holes (DH), Grooving (GV) and Bonding Agent (BA). A total of 104 specimens were tested, as outlined in Table 2. The substrate (old concrete) was cast into different moulds, both cylindrical and beam-shaped. After 24 hours of casting, the specimens were removed from the moulds and left to cure for 28 days. Subsequently, the specimens were allowed to air dry for 2 days. Following this, the surface of the substrate was prepared before applying the overlay, using the various surface preparation techniques. The drill holes (DH) were made with a diamond head bit to a depth of 3-5 mm and a diameter of 2.25 cm, while the grooving had a depth of 1 cm. The bonding agent used was a bonding adhesive agent. After the surface preparation, the substrate specimens were transferred to moulds for overlay casting. After 24 hours of casting the overlay, the specimens were demoulded and subjected to two curing conditions: water/normal (N) and wet-dry (W). In the case of wet-dry (W), the cylindrical specimens were watered for 24 hours and then left to dry for 24 hours, while the beam-shaped specimens were watered for 7 days and subsequently dried for 7 days.
The mix proportions for both the substrate and overlay are provided in Table 1.The substrate employed coarse aggregate with a maximum aggregate size of 20 mm, a specific gravity of 2.613, and sand with a specific gravity of 2.783.The water-to-cement ratio for the substrate was 0.52.On the other hand, the overlay composition featured a water-to-cement ratio of 0.44, with coarse aggregate size and specific gravity matching those of the substrate.Additionally, this mixture incorporated a water-reducing agent, a high-range water reducer, and a retarding admixture (Sika Viscocrate 1050).
Bond Strength Tests
This study conducted three different bond strength tests to assess the performance of the bond between overlays and substrates, as shown in Fig. 1.The standard test procedures, specimen creation, and test setups are described as follows.
Slant Shear Test
According to ASTM C882 [11], the slant shear test is performed when the specimen is subjected to a combination of compression and shear.This process involves cutting the substrate at an angle of 30° from the vertical axis and re-casting it with overlay after surface preparation.The cylinders used in this study have a diameter of 11 cm and a height of 22 cm.The calculation of shear stress to determine the value of slant shear can be seen in Eq. 1.
Where P is the failure load (kN), A is the area of the inclined plane (mm²) and α is the angle of the bonded inclined surface, i.e., 30°.
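Eq. 1 itself did not survive extraction. A common way to express the stresses on the inclined bond plane, consistent with the symbols defined above and offered here only as a plausible reconstruction rather than the paper's exact formula, resolves the axial failure load onto the bonded plane:

\tau = \frac{P \cos\alpha}{A}, \qquad \sigma_n = \frac{P \sin\alpha}{A}

where \tau is the shear component and \sigma_n the normal component of the stress acting on the inclined bond plane of area A.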
Splitting Tensile Test
Concrete's split tensile strength is an indicator of its tensile strength. Cylinders were used as test specimens in this simple test procedure, carried out using typical compressive testing equipment. The reference for split tensile testing is ASTM C496 [12]. The cylinder used has a diameter of 11 cm and a height of 22 cm, as in the slant shear test. Eq. 2 shows how to calculate the split tensile strength value.
Where P is the failure load (kN) and A is the area of the bond plane (mm²).
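Eq. 2 was likewise lost in extraction. The standard ASTM C496 splitting tensile expression, written in terms of the bond-plane area A = L·d used above (a reconstruction based on the standard rather than on the paper's own typesetting), is:

f_{ct} = \frac{2P}{\pi L d} = \frac{2P}{\pi A}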
One-Point Flexural Test
The one-point flexural test measures the ability of a concrete beam placed on two supports to withstand a force applied perpendicular to its axis until the specimen breaks, according to ASTM C293 [13]. The beam in this test is 15 by 15 by 55 centimetres. The flexural strength is obtained using Eq. 3.
Where P is the failure load (kN), L is the length of the supported span (mm), and b and d are the width and depth of the specimen section (mm).
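Eq. 3 also did not survive extraction. For center-point (one-point) loading per ASTM C293, the standard modulus-of-rupture expression with the symbols defined above (again a reconstruction from the standard rather than the paper's own equation) is:

R = \frac{3PL}{2bd^{2}}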
Slant Shear Test
The overall slant shear test results are presented in Table 3, Fig. 4 and Fig. 5, including the average compressive strength of the substrate and overlay and the average bond strength. AC demonstrates the highest value, followed by DH, GV, and BA. This outcome is directly related to the compressive strength of both the substrate and the overlay. According to research by L. Courard [14], there are three groups of factors, each with a different level of influence on bond strength, as shown in Fig. 6. The properties of the overlay fall within the category of medium-influence factors. The higher slant shear value of the AC specimens can be attributed to the compressive strength of the overlay. This observation aligns with the approach presented in ACI 546.3R [15], where slant shear testing is significantly affected by the compressive strength. Consequently, the AC specimens in the slant shear tests exhibit notably higher values than the other three surface preparations. On the other hand, when comparing curing conditions, the slant shear results for N and W do not display substantial differences.
Splitting Tensile
The value for GV was the highest among the four surface preparation methods, as shown in Fig. 7 and Table 4. The percentage increases for BA, DH, and GV relative to AC under N conditions were 33.51%, 50.6% and 151.35%, respectively. The percentage gains for DH, BA, and GV under W conditions were 9.19%, 35.46%, and 97.29%, respectively. This indicates that GV has a greater impact than the other three surface preparations in the splitting tensile test. While the curing condition does affect GV and DH, it has a very minimal impact on AC and BA: under the W condition, DH decreased by 33.93%, whereas GV decreased by 28.47%.
Flexural Test
The GV value in this test exhibited a higher bond strength than the other three surface preparations, as seen in Table 5 and Fig. 8. Under N conditions, the DH and GV surface preparations showed percentage increases of 22.46% and 51.68% over AC, respectively, while BA experienced a 70% decrease. Under W conditions, the DH and GV methods saw increases of 33.88% and 52.36%, respectively. However, the BA specimens could not be tested, as the beams broke before the flexural test could be conducted. The curing condition also significantly influenced these specimens: in condition N, the flexural value was higher than in condition W, with decreases under W conditions of 31.28% for the AC approach, 24.87% for DH, and 30.97% for GV. The BA method consistently yielded the lowest results among the surface preparation methods. This finding aligns with other research studies indicating that a bonding agent can weaken the joint by adding an extra layer at the bonded surface. For instance, Al-Ostaz's research [16] showed that bonding agents can lower the slant shear value by up to 40%. Furthermore, when considering the average flexural strength of connection beams, Novitasari's research [17], which utilised the identical bonding agent product, yielded a highest result of 0.77 MPa, whereas the monolithic concrete beam registered 3.27 MPa, a substantial decrease of 76.45 per cent.
Conclusion
The following inferences can be derived from this study.
1. Slant shear, split tensile strength, and flexural strength tests were used to assess the bond strength of concrete.
• The slant shear test results showed that AC had the highest value, followed by DH, GV, and BA. At 28 days, the slant shear test results under normal curing conditions were 4.73 MPa, 3.26 MPa, 3.04 MPa, and 2.65 MPa, respectively. This is due to the significant impact of the overlay's compressive strength; thus, the AC value was significantly larger than those of the other three surface preparations.
• GV had the highest value in the split tensile strength test, followed by DH, BA, and AC, with test values of 2.04 MPa, 1.22 MPa, 1.08 MPa, and 0.81 MPa under curing condition N.
• GV had the highest value in the flexural strength test, followed by DH, AC, and BA, at 4.49 MPa, 3.63 MPa, 2.97 MPa, and 0.89 MPa under curing condition N.
Based on the three tests above, it can be inferred that the GV surface preparation, followed by DH, has the most significant impact on bond strength. While using BA as a repair material remains debatable, this study determined that its use has no discernible impact on bond strength.
2. In both slant shear and splitting tensile tests, the curing condition has minimal impact on bond strength, but in flexural testing it exerts a significant influence. When comparing the W condition to the N condition, there was a notable decrease of 31.28 per cent.
Table 1 .
Mix Proportions of substrate and overlay materials (kg/m³)
Table 2 .
Details of specimens
Table 3 .
Slant shear test result
Table 5 .
Flexure test results | 3,168.2 | 2024-01-01T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
An Efficient and Accurate DDPG-Based Recurrent Attention Model for Object Localization
Using image processing algorithm to localize objects, which lack specific patterns and local features, has always been the research focus in industrial production. Compared with the traditional image processing algorithm, RAM (Recurrent Attention Model) in deep learning not only shows advantages in positioning accuracy and stability, but also has good adaptability in situations such as occlusion. However, RAM contains policy gradient (PG) algorithm, which is unstable in training process and has low convergence efficiency. To overcome this shortcoming, in order to improve the learning efficiency and stability of RAM, this paper proposes DDPG-based RAM. In addition, current random sampling algorithm in DDPG (Deep Deterministic Policy Gradient) does not make full use of the information contained in samples. Some samples are repeatedly learned, which slows down the convergence rate of the neural network model, and even causes the model to converge to the local optimal solution. To solve the above problems, a prioritized experience replay algorithm based on Gaussian sampling method is proposed. By constructing the localization and grasping simulation environment in V-rep, it is shown that compared with the traditional image algorithm, the proposed model algorithm in this paper has a greater improvement in localization accuracy, stability and model convergence speed.
I. INTRODUCTION
With the development of automation, machine vision has gradually become an indispensable part of industrial production. In this regard, the use of vision-guided mobile robots or robotic arms for positioning has become one of the hot research areas. In industrial production, image-based robotic arm positioning and grasping has become one of the measures of the level of automation. Positioning and grasping is a very common problem in industrial production, but most of the existing vision technology is aimed at the recognition of objects with rich surface texture features.
Most industrial parts are smooth and lack specific textures or local features. In addition to the smooth surfaces and lack of texture and local shape features of most parts in industrial production, the occlusion caused by randomly placed parts is also one of the difficult problems to be considered. In view of these problems, this paper proposes to use deep reinforcement learning, given only a pre-defined reward function and the original image, to make the computer learn to recognize the positions of randomly placed parts and complete the subsequent grasping task.
In addition to the comparison with the traditional image processing algorithm for localization, this paper replaces the policy gradient (PG) algorithm in the recurrent attention model (RAM) with the deep deterministic policy gradient (DDPG) algorithm, and improves the experience replay method of DDPG through priority sampling.
The experimental results show that the proposed RAM model has better positioning precision and stability than the traditional localization algorithm, and the priority sampling algorithm further improves the learning efficiency of RAM.
II. RELATED WORK A. RAM IN DEEP LEARNING
In industrial production, parts have various shapes and features. Traditional image processing algorithms are not only complex to design, but also need to be designed according to the characteristics of each object, which is tedious and time-consuming. The emergence of deep learning has greatly simplified this process. As a research direction of machine vision, deep learning has gradually attracted the attention of researchers in various fields in recent years. Strictly speaking, deep learning is not essentially a new research direction; it is partly based on the neural network. A traditional neural network can be divided into an input layer, hidden layers and an output layer [1]-[4]. Because these layers are fully connected, the traditional neural network has the disadvantages of too many parameters and poor convergence. The proposal of convolutional neural networks and recurrent neural networks improved on the traditional neural network in depth and structure, so as to extract more abstract and deeper features of the object and improve the robustness and generality of the model for many applications. In addition, with the improvement of computer performance and graphics processors, training deep models has become possible for ordinary researchers. Deep learning has now been gradually applied in image classification [5]-[7], expression recognition [8]-[10], speech recognition [11]-[14], etc. The attention model is one of the research hot spots among deep learning models. The idea of the attention model comes mainly from the human visual mechanism: when collecting images, human vision does not carefully observe all areas, but makes a quick scan of the global image and then focuses on specific areas. Such a model can therefore output its own region of interest at each step. There are many applications of attention models, such as machine translation [15], image captioning [16], text summarization [17] and so on. There are two kinds of attention models: the soft attention model and the hard attention model. The soft attention model assigns weights over the whole original input, so as to determine the focus of the model and de-emphasize the areas outside the focus [18]. In the hard attention model, the focus position is determined by reinforcement learning, and the out-of-focus area is ignored [19].
There have been many improvements to the attention model, such as self-attention model [20], residual attention model [21], and some module optimization in the attention model, such as GRU [22] and convolution kernel optimization [23]. Although there are some improvements in performance, they are only limited to the fine tuning of the model, and there is no in-depth analysis from the overall structure and operation mode of the model.
For the application studied in this paper, the main purpose of industrial production is to quickly and accurately process image data in real time. Compared with the soft attention model, the hard attention model only intercepts parts of the input image. It can quickly analyze and output it. Therefore, according to the processing mechanism of the hard attention model, this paper optimizes its learning process and proposes a priority sampling algorithm.
B. REINFORCEMENT LEARNING IN RAM
Reinforcement learning is considered one of the main development paths of artificial intelligence [24], [25]. In 2015, deep reinforcement learning was first successfully applied to Atari 2600 games with the Deep Q-Network (DQN) proposed by Mnih [26]. The performance of that algorithm exceeds that of ordinary human players in 40 games; among them, it surpasses professional players in 34 games and surpasses the most advanced intelligent algorithms of the time in at least 28 games. The success of this model lies in two aspects: (1) the target network and the evaluation network; (2) the experience pool. In the training process, sampling from the experience pool is done by random sampling. Its disadvantage is that the information contained in the samples is not fully utilized, which reduces the efficiency of model training.
The agent of reinforcement learning accomplish a certain task through a large number of training. Reinforcement learning algorithms can be divided into model-based learning and model-free learning. The model-based learning algorithms are based on the transition probability model, which is to calculate the probability between the current state and the next state. However, the model-based algorithms are usually applied in simulation environment, which has the low dimension state space and low dimension action space, such as chessboard game and dobby slot machines (multi-armed bandits). The model-free learning algorithms don't use the transition probability model, but only use the samples in the process of task execution. The learning methods of samples are divided into online learning and offline learning. The online policy learning algorithms use the samples from the current policy for training. They does not need to store too many samples, thus they can avoid consuming too much computer memory. But this kind of algorithm is affected by the correlation between samples. The samples generated between two adjacent policies will produce different reward functions due to the influence of noise, which will lead to oscillation of the model learning process and even the failure to converge to the optimal solution. Therefore, off-line policy learning based on non-transition probability model is often adopted. Mnih proposed experience replay in offline policy learning. The off-line policy learning uses the experience pool to store the samples executed by the previous policy and make the agent learn from these samples repeatly. It can not only reduce the correlation between samples, but also improve the model convergence probability. Both deep Q-network and Actor-Critic network consist of the experience replay algorithm. However, the experience replay algorithm uses random sampling, which does not fully utilize the the information of samples, such as TD error, etc. Some samples that have been learned by models are still repeatedly selected for learning. The reverse gradient that these samples can produce is close to zero, which results in almost no updating of model parameters. Therefore, in response to this problem, many researchers have proposed how to use the information of the samples to accelerate the neural network learning process. Zhai jianwei et al. proposed to set the sampling priority based on positive and negative reward value, and use the number of sampling times, so as to take into account the sampling probability of various samples and ensure the diversity of samples in the learning process [27]. However, the disadvantage is that the reward value in the current time does not completely represent the cumulative reward or long-term benefit of the current action, but only indicates that there is a certain reward value for executing the action in this state. Moreover, in many tasks the agent can only get sparse reward, and even get the reward at the end of task. Therefore, this algorithm is not universal. Lin Ming et al. proposed a experience replay algorithm based on fixed length [28]. The main idea is to transform the traditional single-step length sample learning into fixed-length sample training. In his paper, it is proved that increasing training samples of fixed length can improve training efficiency in non-markov environment. However, this kind of model requires a long period of sample storage, which is not efficient. 
It has also been shown that fixed-length replay brings no significant positive effect in Markov environments and sometimes even reduces learning efficiency. Zhu Fei et al. proposed a deep Q-network based on upper-confidence-bound experience sampling [29]. This algorithm is mainly based on the sampling idea of the upper confidence bound and uses the reward value to set the priority. However, it still suffers from an unclear probability distribution of positive and negative reward values and cannot fully guarantee the diversity of samples during sampling.
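For reference, the uniformly sampled experience pool that these improvements start from can be sketched as follows. This is a minimal illustration, not the code of any of the cited works; the class and parameter names are illustrative assumptions.

```python
import random
from collections import deque

# Minimal sketch of uniform-sampling experience replay: transitions are stored in a
# fixed-size pool and minibatches are drawn uniformly at random, regardless of how
# informative each sample still is.
class ReplayBuffer:
    def __init__(self, memory_size=10000):
        self.pool = deque(maxlen=memory_size)

    def store(self, state, action, reward, next_state, done):
        self.pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(self.pool, min(batch_size, len(self.pool)))
```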
RAM integrates the strong nonlinear fitting ability of neural networks with the advantages of model-free reinforcement learning; reinforcement learning is an essential part of RAM. To address the problems above, this paper proposes a prioritized experience replay algorithm based on Gaussian sampling to improve the learning rate and stability of RAM. The proposed algorithm uses the difference between the TD errors calculated under the current model and the previous model to set the priority of each batch of training samples. It then selects samples according to a Gaussian distribution, ensuring sample diversity while making full use of the sample information.
III. RAM BASED ON DDPG AND PRIORITIZED EXPERIENCE REPLAY ALGORITHM
A. RAM BASED ON DDPG
To avoid the hand-crafted feature extraction required by traditional algorithms, this paper proposes to use RAM for object recognition and to guide the robot arm in completing the corresponding grasping task.
Compared with a CNN, RAM better fits the real-time requirements of industrial production because it completes the task by processing only part of the input. The RAM used in this paper is a hard attention model composed of two parts, as shown in Fig. 1. One part is the recurrent neural network, which extracts features and finally locates the object by repeatedly cropping part of the input image; the other part is the reinforcement learning network (e.g., PG in Fig. 1), which outputs the location on which the model will focus at the next time step. As mentioned before, most existing improvements to the attention model modify the structure other than the PG method in Fig. 1, which cannot fundamentally improve the learning efficiency and stability of RAM. PG is one of the most basic policy models in reinforcement learning. It does not need to store samples in memory, but its learning process is unstable and its convergence is poor, which is one of the reasons for the slow learning and occasional failure of RAM. To solve this problem, this paper replaces the original PG method with the more stable and efficient DDPG method, as shown in Fig. 2.
B. PRIORITIZED EXPERIENCE REPLAY ALGORITHM
In addition to improving the policy learning model of RAM, this paper also improves the learning method of DDPG itself and proposes a prioritized experience replay sampling algorithm. To express the proposed algorithm clearly, the notations and definitions involved are given below.
A Markov Decision Process (MDP) is usually used to model reinforcement learning problems [30]. An MDP can be represented by a 4-tuple (s, a, p, r), where s is the state, a is the action performed in the current state, p is the probability of moving to the next state s′ after taking the action, and r is the immediate reward obtained when reaching the next state.
There are two learning methods that do not require the state transition probability model: the Monte Carlo (MC) method and the Temporal Difference (TD) method. Both only need the "experience" gathered in the environment, that is, the states, actions, and reward values.
The state value function V(s) is the average of the cumulative reward obtained after experiencing state s, defined as V(s) = E[return(s)], where return(·) is the cumulative reward after experiencing state s. Similarly, the action value function Q(s, a) is defined as Q(s, a) = E[return(s, a)], where return(·) is the cumulative reward after experiencing state s and action a. The MC method can estimate the V and Q values in two ways: one saves the average cumulative reward over the first visit to state s; the other saves the average over every visit to state s. By definition, the MC method can only update the V and Q values after each episode ends, which is not conducive to real-time updating. The TD method converges to the true values more quickly in time. Unlike MC, TD updates the V values incrementally as
V(s) ← V(s) + α (r + γ V(s′) − V(s)),   (3)
where α is the incremental update coefficient of the value function, indicating how much the agent trusts a new sample, and γ is the discount coefficient of future rewards, representing their importance relative to the current reward. Similarly, the Q value update of the TD method is
Q(s, a) ← Q(s, a) + α (r + γ Q(s′, a′) − Q(s, a)).   (4)
The TD error is the correction term in (3) and (4), e.g. δ = r + γ Q(s′, a′) − Q(s, a), i.e., the difference between the bootstrapped target and the current estimate. The TD error can evaluate how well an agent has learned in the simulation environment: a small TD error indicates that the agent has completed learning for the current state and action.
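As a concrete illustration, a minimal tabular TD(0) update corresponding to (3) can be written as follows; the dictionary-based value table and the default α and γ values are illustrative assumptions.

```python
# Minimal sketch of a tabular TD(0) update as in (3): V(s) <- V(s) + alpha * delta,
# where delta = r + gamma * V(s') - V(s) is the TD error.
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    td_error = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * td_error
    return td_error
```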
Following the idea proposed by Schaul et al. [31], the TD error is used to set the priority by which samples are sorted. However, TD error values saved under the previous policy model only indicate the learning status of the samples under that previous model; they cannot represent the relationship between the previous and the current policy models. Therefore, in this paper we calculate the TD error of each sample under both the previous and the current policy models, denoted TD error1 and TD error2, respectively.
When TD error2 is small, the corresponding V or Q value for the current state and action is close to convergence, so the sample does not need much further learning; otherwise, the sample needs to be learned further. In addition, the difference TD_diff between TD error1 and TD error2 reflects the difference between the previous and current policy models, so it also needs to be taken into consideration. To avoid the same sample being drawn too many times, an attenuation factor β^c on the number of times a sample has been selected is added, where c is the number of times the sample has been drawn and β can be set to 0.95–0.99, so as to decrease the sampling priority of frequently used samples. Finally, a sample with a high reward value indicates that it is close to the optimal policy and should be selected first. The priority therefore combines r_t, the reward value of the sample; f(TD error2), where f(·) can be a squashing function such as the sigmoid that prevents TD error2 from affecting the priority too strongly; the decay β^c; and TD_diff, defined as the absolute value of the TD error difference between the old and new policy models, TD_diff = |TD error1 − TD error2|. Our subsequent experimental data show that selecting samples purely by this priority harms the diversity of samples. Therefore, we propose to use a Gaussian sampling method within each batch of training samples to ensure sample diversity during learning.
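The exact priority formula is not reproduced here; the following sketch combines the ingredients named above (TD_diff, f(TD error2), the reward, and the decay β^c) in one plausible way and should be read as an assumption rather than the paper's definition.

```python
import math

# Hedged sketch of a per-sample priority built from the quantities described above;
# the additive combination below is an assumption, not the paper's exact formula.
def priority(td_error_old, td_error_new, reward, draw_count, beta=0.98):
    td_diff = abs(td_error_old - td_error_new)        # |TD error1 - TD error2|
    f = 1.0 / (1.0 + math.exp(-abs(td_error_new)))    # sigmoid squashing of TD error2
    return (beta ** draw_count) * (td_diff + f + reward)
```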
The prioritized experience replay algorithm based on Gaussian sampling is shown in Table 1. The proposed method is built on the DDPG algorithm, which has two main parts, an Actor network and a Critic network. Each of them has two networks, an evaluation network and a target network; this separation helps ensure the stable convergence of the model. In the table, memory_size is the number of samples stored in the experience pool, M is the total number of episodes, T is the total number of steps in each episode, and N(·) is the Gaussian probability model used for sampling. J and K control how frequently the target networks (θ^µ′ and θ^Q′) are updated from the evaluation networks (θ^µ and θ^Q).
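To illustrate the Gaussian sampling step used in Table 1, the sketch below draws minibatch indices from a half-Gaussian centred on the highest-priority samples so that lower-priority samples are still occasionally chosen; the spread parameter is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: sample indices from a priority-sorted pool using |N(0, sigma)|,
# which favours the front (highest priority) while preserving sample diversity.
def gaussian_sample(sorted_indices, batch_size, sigma_frac=0.3):
    n = len(sorted_indices)
    draws = np.abs(np.random.normal(0.0, sigma_frac * n, size=batch_size)).astype(int)
    draws = np.clip(draws, 0, n - 1)
    return [sorted_indices[i] for i in draws]
```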
IV. EXPERIMENTS AND DATA ANALYSIS
To verify the feasibility of the algorithm, a positioning task for the Dobot robot is carried out in the V-REP simulation environment.
As shown in Fig. 3, the top left corner shows a panoramic view of the entire simulation environment, while the other three images are observed from different angles. The Dobot is placed in the middle of the environment, a camera with a 60-degree field of view and a resolution of 640 × 480 is placed directly above the robot, and a square object is randomly placed within the camera's field of view. The task of the robot is to avoid the flower pot, grasp the object, and place it on the conveyor belt using only the raw image from the camera. The task involves part positioning, occlusion, collision avoidance with the flowerpot, the inverse kinematics solution, and so on.
To make the robot locate the object automatically and actually reach it rather than swinging randomly toward the target position, the reward function is designed as follows. When the end effector of the robot arm is not in the square area of the object, the reward is the negative value of the distance between the end effector and the center of the object. When the end effector is in the square area, the reward is increased by 1, and when it stays in that area continuously, the reward is increased by 10. When accurate positioning has been maintained for 10 time steps, the episode is completed and the next episode starts. An experiment contains at most 600 episodes, and each episode contains 200 steps. If five successive episodes succeed, the robot is considered to have learned the positioning task and the experiment is stopped early. We record the number of episodes used in each experiment and judge the learning efficiency of the proposed reinforcement learning algorithm by the mean number of episodes; we ran 10 experiments for each algorithm to reduce randomness. Compared with discrete action control problems, the continuous action control problem considered here is more difficult to converge, which makes it suitable for verifying the feasibility of the proposed model and algorithm.
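A minimal sketch of this reward design is given below; the function signature, the size of the square region, and the bookkeeping of consecutive steps are illustrative assumptions rather than the exact implementation.

```python
import math

# Hedged sketch of the reward described above: negative distance outside the object's
# square region, +1 on entering it, and +10 for staying in it on consecutive steps.
def reward(end_xy, obj_xy, half_side, steps_inside):
    dx, dy = end_xy[0] - obj_xy[0], end_xy[1] - obj_xy[1]
    if abs(dx) > half_side or abs(dy) > half_side:
        return -math.hypot(dx, dy), 0          # outside: negative distance, reset counter
    r = 1.0                                    # entering / inside the square area
    if steps_inside > 0:
        r += 10.0                              # staying inside on consecutive steps
    return r, steps_inside + 1
```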
The hardware configuration used in this paper is an Intel i7-7700HQ quad-core processor with a 2.7 GHz base frequency, 8 GB of DDR4-2400 memory, and an NVIDIA GeForce 940MX graphics card. The operating system is Ubuntu 16.04, and TensorFlow 1.5 is used to build the neural networks.
A. TRADITIONAL OBJECT RECOGNITION AND LOCALIZATION
The traditional localization algorithms are also compared and analyzed in this paper. In Fig. 4, SIFT (Scale-Invariant Feature Transform) and FLANN (Fast Library for Approximate Nearest Neighbors) are used for feature extraction and feature matching to locate the object.
The blue circles in the figure represent feature points, and the green lines connect the matched features. From the figure, we can see that although SIFT can detect many features, it is difficult to match them correctly. This is due to the smooth surface and the lack of obvious texture features of the measured object.
To overcome the above problems, the color adaptive threshold (CAT) algorithm and the minimum external rectangle (MER) algorithm are used to extract the center position of the part.
Fig. 5(a) shows the detection result of adaptive threshold segmentation. The green area in Fig. 5 is the positioning result, and it shows that the algorithm locates the object accurately. However, under occlusion the algorithm can only locate the visible part, as shown in Fig. 5(b). It can be seen from Fig. 4 that SIFT + FLANN, a traditional feature detection and localization method, requires the detected object to have rich surface features, and it is difficult for it to accurately locate the target in the simulation environment of this paper; therefore, the experimental data of SIFT + FLANN are not listed in Table 2. In addition to Fig. 5, the relative positioning errors and standard deviations of the different algorithms and models are shown in Table 2. The mean value in Table 2 is the relative positioning error of each trained model (or algorithm) when the measured object is placed at different positions, and the variance is calculated over the corresponding mean errors. A reduction of the average positioning error means that the end effector of the robot arm gets closer to the center of the measured object; given a reduced mean error, a reduction of the variance shows that the stability of the algorithm is improved. From the table, we can see that the traditional algorithms adapt poorly to these special cases.
B. RAM AND PRIORITIZED EXPERIENCE REPLAY ALGORITHM
Before outputting the coordinates of the measured object, RAM performs four image-processing steps, each operating on a crop of part of the original image (shown as the red boxes in Fig. 6). The coordinates of the measured object are output only after the information from the four cropped images has been fused. The green curve in Fig. 6 is the path of RAM's four cropping windows. Fig. 6 shows the experimental results of RAM after training; they indicate that RAM predicts the object position through its own repeated learning, whether or not the object is occluded. As shown in Table 2, after model training, DDPG + PER achieves better positioning accuracy and stability than the other two algorithms. As shown in Fig. 7, the four algorithms were each tested 10 times in the simulation environment; the envelope of each curve in the figure is the 95% confidence interval of the algorithm's cumulative reward in each episode. PG, DDPG, and PER denote the RAM using PG, DDPG, and DDPG + PER, respectively. To verify the necessity of the Gaussian sampling step in the proposed PER algorithm, two different sampling methods are compared: one directly selects the second of the 3 batches during training, denoted PER (Center); the other is the Gaussian sampling method (as shown in Table 1), denoted PER (Gaussian).
Most deep neural networks need to be trained in advance with a large amount of time and training samples, which is generally considered acceptable; therefore, the running time of the algorithms is not within the scope of our measurements, and we only compare their convergence behavior. It can be seen from the figure that at the beginning of learning each algorithm shows a certain volatility, especially DDPG and the PER algorithm proposed in this paper; after reaching a peak value, these two algorithms fall back to some extent. From the definition of the reward function, when the end of the robot arm stays stably and continuously in the cube area, the reward value is 10; when the end effector can stay in the area of the measured object for 10 consecutive steps, the episode is completed; and when five episodes in a row are completed successfully, the experiment is terminated early. As can be seen from the figure, the proposed algorithm converges faster and obtains a higher cumulative reward than the other two algorithms. For the PG algorithm used in the original RAM, the influence of noise and the instability of PG make it difficult to converge, which is why its cumulative reward per episode remains almost flat from episode 0 to 600. In contrast, the DDPG and PER algorithms proposed in this paper both had very low cumulative rewards before 50 episodes, then quickly climbed to very high cumulative rewards, and finally fell back to around 0. This is because most of the experience accumulated by these two algorithms at the beginning is wrong, making it difficult to make the right decisions; once correct experience appears, it is repeatedly compared with the previous policy, so training is completed quickly. In addition, the learning curve of PER (Center) is similar to that of PER (Gaussian), but since PER (Center) ignores sample diversity and only selects the middle batch of samples, it is slightly inferior to PER (Gaussian) in convergence speed and cumulative reward. According to the grasping experiment results, 600 episodes are enough for the robot to learn the grasping skill; once the skill has been learned, the experiment is terminated early, which is why the reward value in the second half of each learning curve in Fig. 7 tends to zero. According to the experimental data, the average numbers of episodes to convergence for PG, DDPG, PER (Center), and PER (Gaussian) are 382.4, 296.2, 275.6, and 237.7, respectively. The experimental results therefore show that the prioritized experience replay algorithm proposed in this paper converges to the optimal solution better than the traditional PG and DDPG algorithms.
V. CONCLUSION
In this paper, the deep learning model RAM is used to locate and identify objects placed randomly on the ground, whose surfaces are smooth and lack texture features. We compared the localization performance of the traditional SIFT + FLANN and CAT + MER algorithms with that of RAM; the RAM deep learning model has clear advantages in object recognition and in handling occlusion. We also propose a DDPG-based RAM to solve the poor convergence of the PG algorithm in the original RAM. In addition, for the experience replay used in DDPG, we propose a prioritized experience replay algorithm based on Gaussian sampling. Experimental results show that, compared with the PG algorithm, the DDPG and PER algorithms improve the convergence speed and convergence quality of the original model. The PER algorithm greatly improves how sample learning is prioritized, which increases training efficiency and helps avoid convergence to a local optimum.
The algorithm proposed in this paper has obtained good preliminary results in the Dobot robot simulation environment. Future research will address the following aspects: how to apply the model to real applications without damaging the robot equipment, and how to add a dimension evaluation mechanism for faster training and learning.
"Computer Science"
] |
A dataset of factors influencing intentions for organic farming in Vietnam
This paper presents a survey dataset on factors influencing farmers' intention to practice organic farming in Hanoi, Vietnam. The survey was designed based on the theoretical integration of the theory of planned behavior (TPB) and the norm activation model (NAM), covering 7 factors measured by 33 items inherited from previous studies, plus 5 items capturing respondent characteristics: gender, age, educational qualification, farming experience, and annual farming income. Questionnaires were delivered directly to farmers at their homes or farms in October 2019, and 318 valid questionnaires were collected for the study of intentions for organic farming. The dataset serves as a reference source for later studies on organic farming development and on promoting organic farming intentions and behavior.
Specifications
Subject: Agriculture
Specific subject area: Organic farming, intention of farmer, theory of planned behavior, norm activation model
Type of data:
Value of the Data
• The dataset describes the assessment of farmers in Hanoi, Vietnam of organic farming production based on seven factors from the TPB-NAM integrated model: intention, attitude, subjective norms, perceived behavioral control, personal norm, awareness of consequences, and ascription of responsibility. The dataset also includes 5 characteristics of the respondents: gender, age, educational qualification, farming experience, and annual farming income.
• The dataset explores factors influencing the organic farming intentions of farmers in Hanoi, Vietnam.
• The dataset is a source of reference for state management agencies in making policies to promote organic farming production, contributing to building a sustainable national agriculture in developing countries like Vietnam.
Data Description
Organic farming increases farmers' income while also helping to reduce environmental pollution by avoiding harmful chemicals and fertilizers [12]. Organic farming is an important tool for achieving green production and reducing the negative effects of conventional farming by removing synthetic chemical inputs during production [1]. Therefore, developing countries like Vietnam are trying to devise policies to promote farmers' organic farming intentions. The dataset collected farmers' opinions based on seven factors from the TPB-NAM integration model. TPB has been widely accepted and used in studies aimed at predicting individual intentions and behavior, and empirical studies have shown the relevance of this theory to farmers' intentions and behavior [3, 5, 6, 8, 11]. NAM originates from a pro-social context and has been widely used to explain not only pro-social but also pro-environmental intentions and behavior in a wide range of contexts [2, 4, 7, 9, 14].
The dataset was collected through a two-part survey: the first part explores the respondents' characteristics, including gender, age, educational qualification, farming experience, and annual farming income (Table 1); the second part explores respondents' agreement with statements related to factors affecting the intention to practice organic agriculture (Table 2); Table 3 shows more detailed results for the relationships between the variables. It took a farmer about 20 minutes to complete the entire survey. Valid responses were obtained from 318 questionnaires. The questionnaire and answers are provided in the supplementary files. The dataset includes the respondents' characteristics (Table 1) and seven factors: (1) intention; (2) attitude; (3) subjective norms; (4) perceived behavioral control; (5) awareness of consequences; (6) ascription of responsibility; (7) personal norm (Table 2).
Experimental Design, Materials and Methods
The survey was conducted directly at the farmers' residences or farms in October 2019. The survey team received support from the Department of Science and Technology in Hanoi to list and approach the target farmers. Respondents were farmers practicing conventional farming in Hanoi, Vietnam. Respondents were selected at random while still ensuring representativeness of regions that were promoting conversion to organic farming, such as Soc Son, Chuong My, and Ba Vi in Hanoi. Each farmer participating in the survey received a payment of 2 US dollars after completing all parts of the questionnaire, which was distributed directly and collected by the survey team.
The survey team designed a questionnaire of 38 items, of which 5 concern respondents' characteristics; the remaining 33 items are designed on a 5-point Likert scale (1: Strongly disagree; 2: Disagree; 3: Neutral; 4: Agree; 5: Strongly agree) and focus on 7 factors: intention, attitude, subjective norms, perceived behavioral control, personal norm, awareness of consequences, and ascription of responsibility. All items in the survey are inherited from previous studies [10, 13], and answering every item was mandatory to ensure that the collected data contain no missing values. The questionnaire did not use reverse-coded questions and was administered directly by the survey team, who observed in detail and assisted farmers during the answering process. All responses were entered into Excel before being imported into SPSS 22. Before the analysis, the variables were encoded (Tables 2 and 3) and the data were checked to ensure the validity of each questionnaire. After discarding invalid questionnaires, the final dataset contained 318 questionnaires. Table 3 reports the correlations between the variables and farmers' intention to practice organic farming. Based on the dataset, further studies can examine the relationships between the factors in the TPB-NAM integration model, or treat each theory separately, to find the factors that influence farmers' intentions to practice organic farming.
Ethics Statement
The authors adhered to all ethical requirements during the data gathering process. The authors obtained the consent of the respondents when conducting the surveys and ensured that all information was used for research purposes only and kept strictly confidential.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article.
"Economics"
] |
Survey: Recovering cryptographic keys from partial information, by example
Side-channel attacks targeting cryptography may leak only partial or indirect information about the secret keys. There are a variety of techniques in the literature for recovering secret keys from partial information. In this work, we survey several of the main families of partial key recovery algorithms for RSA, (EC)DSA, and (elliptic curve) Diffie-Hellman, the classical public-key cryptosystems in common use today. We categorize the known techniques by the structure of the information that is learned by the attacker, and give simplified examples for each technique to illustrate the underlying ideas.
Introduction
In a side-channel attack, an attacker exploits side effects from computation or storage to reveal ostensibly secret information.Many side-channel attacks stem from the fact that a computer is a physical object in the real world, and thus computations can take different amounts of time [Koc96], cause changing power consumption [KJJ99], generate electromagnetic radiation [QS01], or produce sound [GST14], light [FH08], or temperature [HS14] fluctuations.The specific character of the information that is leaked depends on the high-and low-level implementation details of the algorithm and often the computer hardware itself: branch conditions, error conditions, memory cache eviction behavior, or the specifics of capacitor discharges.
The first work on side-channel attacks in the published literature did not directly target cryptography [EL85], but since Kocher's work on timing and power analysis in the 90s [Koc96,KJJ99], cryptography has become a popular target for side-channel work.However, it is rare that an attacker will be able to simply read a full cryptographic secret through a side channel.The information revealed by many side channel attacks is often indirect or incomplete, or may contain errors.
Thus in order to fully understand the nature of a given vulnerability, the side-channel analyst often needs to make use of additional cryptanalytic techniques.The main goal for the cryptanalyst in this situation is typically: "I have obtained the following type of incomplete information about the secret key.Does it allow me to efficiently recover the rest of the key?" Unfortunately there is not a one-size-fits-all answer: it depends on the specific algorithm used, and on the nature of the information that has been recovered.
The goal of this work is to collect some of the most useful techniques in this area together in one place, and provide a reasonably comprehensive classification on what is known to be efficient for the most commonly encountered scenarios in practice.That is, this is a non-exhaustive survey and a concrete tutorial with motivational examples.Many of the algorithmic papers in this area give constructions in full generality, which can sometimes obscure the reader's intuition about why a method works.Here, we aim to give minimal working examples to illustrate each algorithm for simple but nontrivial cases.We restrict our focus to public-key cryptography, and in particular, the algorithms that are currently in wide use and thus the most popular targets for attack: RSA, (EC)DSA, and (elliptic curve) Diffie-Hellman.
Throughout this work, we will illustrate the information known for key values with diagrams that show the most significant bits on the left, the least significant bits on the right, and the known bits highlighted.
The organization of this survey is given in Table 1.
Motivation
While this survey is mostly operating at a higher level of mathematical abstraction than the side-channel attacks that we are motivated by, we will give a few examples of how attackers can learn partial information about secrets.
Modular exponentiation. All of the public-key cryptographic algorithms we discuss involve modular exponentiation or elliptic curve scalar operations acting on secret values. For RSA signatures, the victim computes s = m^d mod N, where d is the secret exponent.
For DSA signatures, the victim computes a per-signature secret value k and computes the value r = g^k mod p, where g and p are public parameters. For Diffie-Hellman key exchange, the victim generates a secret exponent a and computes the public key-exchange value A = g^a mod p, where g and p are public parameters. Naive modular exponentiation algorithms like square-and-multiply operate bit by bit over the bits of the exponent: each iteration executes a square operation, and if that bit of the exponent is a 1, also executes a multiply operation. More sophisticated modular exponentiation algorithms precompute a digit representation of the exponent using non-adjacent form (NAF), windowed non-adjacent form (wNAF) [Möl03], sliding windows, or Booth recoding [Boo51] and then operate on the precomputed digit representation [Gor98].
Cache attacks on modular exponentiation.Cache timing attacks are one of the most commonly exploited families of side-channel attacks in the academic literature [Pag02, TTMH02, TSS + 03, Per05, Ber05, OST06].There are many variants of these attacks, but they all share in common that the attacker is able to execute code on a CPU that is co-located with the victim process and shares a CPU cache.While the victim code executes, the attacker measures the amount of time that it takes to load information from locations in the cache, and thus deduces information about the data that the victim process loaded into those cache locations during execution.In the context of the modular exponentiation or scalar addition algorithms discussed above, a cache attack on a vulnerable implementation might reveal whether a multiply operation was executed at a particular bit location if the attacker can detect whether the code to execute the multiply instruction was loaded into the cache.Alternatively, for a pre-computed digit representation of the number, the attacker may be able to use a cache attack to observe the digit values that were accessed [ASK07, AS08,BvSY14].
Other attacks on modular exponentiation.Other families of side channels that have been used to exploit vulnerable modular exponentiation implementations include power analysis and differential power analysis attacks [KJJ99,KJJR11], electromagnetic radiation [QS01], acoustic emanations [GST14], raw timing [Koc96], photonic emission [FH08], and temperature [HS14].These attacks similarly exploit code or circuits whose execution varies based on secrets.Cold boot and memory attacks.An entirely different class of side-channel attacks that can reveal partial information against keys include attacks that may leak the contents of memory.These include cold boot attacks [HSH + 08], DMA (Direct Memory Access), Heartbleed, and Spectre/Meltdown [LSG + 18, KHF + 19].While these attacks may reveal incomplete information, and thus serve as theoretical motivation for some of the algorithms we discuss, most of the vulnerabilities in this family of attacks can simply be used to read arbitrary memory with near-perfect precision, and cryptanalytic algorithms are rarely necessary.
Length-dependent operations.A final vulnerability class is implementations whose behavior depends on the length of a secret value, and thus variations in the behavior may leak information about the number of leading zeros in a secret.A simple example is copying a secret key to a buffer in such a way that it reveals the bit length of a secret key.
In another example, the Raccoon attack observes that TLS versions 1.2 and below strips leading zeros from the Diffie-Hellman shared secret before applying the key derivation function, resulting in a timing difference depending on the number of hash input blocks required for the length of the secret [MBA + 21].
Mathematical background
Lattices and lattice reduction algorithms Several of the algorithms we present make use of lattices and lattice algorithms.
For the purposes of this survey, we will specify a lattice by giving a basis matrix B, which is an n × n matrix of linearly independent row vectors with rational (but in our applications usually integer) entries. The lattice generated by B, written as L(B), consists of all vectors that are integer linear combinations of the row vectors of B. The determinant of a lattice is the absolute value of the determinant of a basis matrix: det L(B) = |det B|. Geometrically, a lattice resembles a discrete, possibly skewed, grid of points in n-dimensional space. This discreteness property ensures that there is a shortest vector in the lattice: there is a non-infinitesimal smallest length of a vector in the lattice, and there is at least one vector v_1 that achieves this length. For a random lattice, the Euclidean length of this vector is approximated using the Gaussian heuristic: |v_1|_2 ≈ sqrt(n/(2πe)) (det L)^(1/n). We rarely need this much precision; for lattices of very small dimension we will often use the approximation |v_1| ≈ (det L)^(1/n). The shortest vector in an arbitrary lattice is NP-hard to compute exactly, but the LLL algorithm [LLL82] will compute an exponential approximation to this shortest vector in polynomial time: in the worst case, it will return a vector b_1 satisfying ||b_1||_2 ≤ 2^((n−1)/4) (det L)^(1/n). In practice, for random lattices, the LLL algorithm obtains a better approximation factor ||b_1||_2 ≤ 1.02^n (det L)^(1/n) [NS06]. In fact, the LLL algorithm will return an entire basis for the lattice whose vectors are good approximations of what are called the successive minima of the lattice; for our purposes the only fact we need is that these vectors will be fairly short, and for a random lattice they will be close to the same length. Current implementations of the LLL algorithm can be run fairly straightforwardly on lattices of dimension from a few hundred to a few thousand [RH23].
To compute a closer approximation to the shortest vector than LLL, one can use the BKZ algorithm [Sch87,SE94].This algorithm runs in time exponential in a block size, which is a parameter to the algorithm that determines the quality of the approximation factor.The theoretical guarantees of this algorithm are complicated to express; for our purposes we only need to know that for lattices of dimension below around 100, one can easily compute the shortest vector in the heuristically random-looking lattices we consider using the BKZ algorithm, and can often find the shortest vector, or a "good enough" approximation to it, by using smaller block sizes.Theoretically, the LLL algorithm is equivalent to using BKZ with block size 2.
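For readers who want to experiment, a minimal sketch of LLL-reducing a small integer basis with the fpylll library (the choice of library and the toy basis are assumptions; any LLL implementation would do) looks like the following.

```python
# Minimal sketch: LLL-reduce a small integer lattice basis with fpylll.
from fpylll import IntegerMatrix, LLL

B = IntegerMatrix.from_matrix([[201, 37],
                               [1648, 297]])
LLL.reduction(B)   # reduces the basis in place
print(B)           # the rows are now short, nearly orthogonal basis vectors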
RSA Preliminaries
Parameter Generation.To generate an RSA key pair, implementations typically start by choosing the public exponent e.By far the most common choice is to simply fix e = 65537.Some implementations use small primes like 3 or 17. Almost no implementations use public exponents larger than 32 bits.This means that attacks that involve brute forcing values less than e are generally feasible in practice.
In the next step, the implementation generates two random primes p and q such that p − 1 and q − 1 are relatively prime to e. The public modulus is N = pq. The private exponent is then computed as d = e^(−1) mod (p − 1)(q − 1).
The public key is the pair (e, N). In theory, the secret key is the pair (d, N), but in practice many implementations store keys in a data structure including much more information. For example, the PKCS#1 private key format includes the fields p, q, d_p = d mod (p − 1), d_q = d mod (q − 1), and q_inv = q^(−1) mod p to speed the private-key operation using the Chinese Remainder Theorem.
Encryption and Signatures.
In textbook RSA, Alice encrypts the message m to Bob by computing c = m e mod N .In practice, the message m is not a "raw" message, but has first been transformed from the content using a padding scheme.The most common encryption padding scheme in network protocols is PKCS#1v1.5,but OAEP [BR95] is also sometimes used or specified in protocols.To decrypt the encrypted ciphertext, Bob computes m = c d mod N and verifies that m has the correct padding.
To generate a digital signature, Bob first hashes and pads the message he wishes to sign using a padding scheme like PKCS#1v1.5 signature padding (most common) or PSS (less common); let m be the hashed and padded message of this form.Then Bob generates the signature as s = m d mod N .Alice can verify the signature by computing the value m ′ = s e mod N and verifying that m ′ is the correct hashed and padded value.
Since encryption and signature verification only use the public key, decryption and signature generation are the operations typically targeted by side-channel attacks.
RSA-CRT.
To speed up decryption, instead of computing c d mod N directly, implementations often use the Chinese remainder theorem (CRT).RSA-CRT splits the exponent d into two parts d p = d mod (p − 1) and d q = d mod (q − 1).
To decrypt using the Chinese remainder theorem, Alice would compute m p = c dp mod p and m q = c dq mod q.The message can be recovered with the help of the pre-computed value q inv = q −1 mod p by computing This is called Garner's formula [Gar59].
Relationships Between RSA Key Elements.For the purpose of secret key recovery, we typically assume that the attacker knows the public key.
RSA keys have a lot of mathematical structure that can be used to relate the different components of the public and private keys together for key recovery algorithms. The RSA public and private keys are related to each other as ed ≡ 1 mod (p − 1)(q − 1). The modular equivalence can be removed by introducing a new variable k to obtain an integer relation ed = 1 + k(p − 1)(q − 1). We know that d < (p − 1)(q − 1), so k < e. The value of k is not known to the attacker, but since generally e ≤ 65537 in practice, it is efficient to brute force over all possible values of k.
For attacks against the CRT coefficients d_p and d_q, we can obtain similar relations: ed_p = 1 + k_p(p − 1) and ed_q = 1 + k_q(q − 1) for some integers k_p < e and k_q < e. Brute forcing over two independent 16-bit values can be burdensome, but we can relate k_p and k_q as follows. Rearranging the two relations, we obtain ed_p − 1 + k_p = k_p p and ed_q − 1 + k_q = k_q q. Multiplying these together, we get (ed_p − 1 + k_p)(ed_q − 1 + k_q) = k_p k_q N. Reducing the above modulo e, we get (k_p − 1)(k_q − 1) ≡ k_p k_q N mod e. Thus given a value for k_p, we can solve for the unique value of k_q mod e, and for applications that require brute forcing values of k_p and k_q we only need to brute force at most e pairs [IGA + 15]. The multiplier k also has a nice relationship to these values: multiplying the relations for d_p and d_q together, noting that k_p(p − 1) ≡ −1 mod e and k_q(q − 1) ≡ −1 mod e, and substituting (p − 1)(q − 1) = (ed − 1)/k before reducing modulo e relates the coefficients as k ≡ −k_p k_q mod e. Any of the secret values p, q, d, d_p, d_q, or q_inv suffices to compute all of the other values when the public key (N, e) is known.
From either p or q, computing the other values is straightforward.
For small e, N can be factored from d as follows. Writing ed = 1 + k(p − 1)(q − 1) = 1 + k(N − (p + q) + 1), the integer multiplier k can be recovered by rounding: k = ⌈(ed − 1)/N⌋. Once k is known, this relation can be rearranged to solve for s = p + q. Once s is known, we have (p + q)^2 = s^2 = p^2 + 2N + q^2 and s^2 − 4N = p^2 − 2N + q^2 = (p − q)^2. Then N can be factored by computing gcd((p + q) − (p − q), N).
When e is small, p can be computed from d_p as p = gcd((ed_p − 1)/k_p + 1, N), where k_p can be brute forced from 1 to e. If k_p is not known and is too large to brute force, then with high probability for a random a, p = gcd(a^(ed_p − 1) − 1 mod N, N).
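The following is a small self-contained sketch of these relationships with toy primes chosen only for illustration (real RSA factors are 1024 bits or more); it assumes Python 3.8+ for the modular inverse via pow.

```python
from math import gcd, isqrt

# Hypothetical toy key: small well-known primes, for illustration only.
p, q, e = 1000000007, 998244353, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# Factor N from d: recover k by rounding (ed - 1)/N, then phi, then s = p + q.
k = (e * d - 1 + N // 2) // N
phi = (e * d - 1) // k
s = N + 1 - phi                      # p + q
p_minus_q = isqrt(s * s - 4 * N)     # since s^2 - 4N = (p - q)^2
print(gcd(s - p_minus_q, N))         # prints one of the prime factors

# Recover p from d_p = d mod (p - 1) without knowing k_p: since e*d_p - 1 is a
# multiple of p - 1, a^(e*d_p - 1) = 1 mod p, so the gcd below equals p with
# high probability for a random base a.
dp = d % (p - 1)
print(gcd(pow(2, e * dp - 1, N) - 1, N))
```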
Factoring from q_inv is more complex. As noted in [HS09], q_inv satisfies q_inv q^2 − q ≡ 0 mod N, and q can be recovered using Coppersmith's method, described below.
RSA Key Recovery with Consecutive bits known
This section covers techniques for recovering RSA private keys when large contiguous portions of the secret keys are known.The main technique used in this case is lattice basis reduction.
For the key recovery problems in this section, we can typically recover a large unknown chunk of bits of an unknown secret key value (p, d mod (p − 1), or d).We typically assume that the attacker has access to the public key (N, e) but does not have any other auxiliary information (about q or d mod (q − 1), for example.Knowledge of large contiguous portions of secret keys is unlikely to arise in side channels that involve noisy measurements, but could arise in scenarios where secrets are being read out of memory that got corrupted in an identifiable region.They can also help make attacks more efficient if a high cost is paid to recover known bits.
Warm-up: Lattice attacks on low-exponent RSA with bad padding.
The main algorithmic technique used for RSA key recovery with contiguous bits is to formulate the problem as finding a small root of a polynomial modulo an integer, and then to use lattice basis reduction to solve this problem.
In order to introduce the main tool of using lattice basis reduction to find roots of polynomials, we will start with an illustrative example for the concrete application of breaking small-exponent RSA with known padding.In later sections we will show how to modify the technique to cover different RSA key recovery scenarios.
The original formulation of this problem is due to Coppersmith [Cop96].Howgrave-Graham [HG97] gave a dual approach that we find easier to explain and easier to implement.May's survey [May10] contains a detailed description of the Coppersmith/Howgrave-Graham algorithm.
To set up the problem, we have an integer N and a polynomial f(x) of degree k that has a root r modulo N, that is, f(r) ≡ 0 mod N. We wish to find r. Finding roots of polynomials can be done efficiently modulo primes [LLL82], so this problem is easy to solve if N is prime or the prime factorization of N is known. The Coppersmith/Howgrave-Graham methods are generally of interest when the prime factorization of N is not known: they give an efficient algorithm for finding all small roots (if they exist) modulo an N of unknown factorization. Cast the problem as finding roots of a polynomial. Let a = 0x01FFFFFFFFFFFFFFFF0000 be the known padding string, offset to the correct byte location. We also know the length of the message; in this case m < 2^16. Thus we have that c = (a + m)^3 mod N for unknown small m. Let f(x) = (a + x)^3 − c; we have set up the problem so that we wish to find a small root m satisfying f(m) ≡ 0 mod N. (The coefficients are reduced modulo N; their numerical values are omitted here.) Construct a lattice. Let the coefficients of f be f(x) = x^3 + f_2 x^2 + f_1 x + f_0, and let M = 2^16 be an upper bound on the size of the root m. We construct the matrix whose rows are the coefficient vectors of the polynomials f(x), N x^2, N x, and N, with the column for the monomial x^j scaled by M^j:
B = [ M^3  f_2 M^2  f_1 M  f_0 ]
    [ 0    N M^2    0      0   ]
    [ 0    0        N M    0   ]
    [ 0    0        0      N   ]
We then apply the LLL lattice basis reduction algorithm to this matrix and take the shortest vector of the reduced basis. Extract a polynomial from the lattice and find its roots. From this shortest vector we construct the corresponding polynomial g by un-scaling each coordinate by the appropriate power of M. The polynomial g has one integer root, 0x42, which is the desired solution for m.
This specific 4 × 4 lattice construction works to find roots up to size N 1/6 .For the small key size we used in our example, this is only 16 bits, but since it scales directly with the modulus size, this same lattice construction would suffice to learn 170 unknown bits of message for a 1024-bit RSA modulus, or 341 bits of message for a 2048-bit RSA modulus.Lattice reduction on a 4 × 4 lattice basis with entries that have a few thousand bits is essentially instantaneous on a modern laptop.More detailed explanation.Why does this work?The rows of this matrix correspond to the coefficient vectors of the polynomials f (x), N x 2 , N x, and N .We know that each of these polynomials evaluated at x = m will be 0 modulo N .Each column is scaled by a power of M , so that the ℓ 1 norm of any vector in this lattice is an upper bound on the value of the corresponding (un-scaled) polynomial evaluated at m.For a vector We have constructed the lattice so that every polynomial g we extract from it has the property that g(m) ≡ 0 mod N .We have also constructed our lattice so that the length of the shortest vector in a reduced basis will be less than N .The only integer multiple of N less than N is 0, so by construction the polynomial corresponding to this short vector satisfies g(m) = 0 over the integers, not just modulo N .Since finding roots of polynomials over the integers, rationals, reals, and complex numbers can be done in polynomial time, we can compute the roots of this polynomial and check which of them is our desired solution.
This method will always work if the lattice is constructed properly.That is, we need to ensure that the reduced basis will contain a vector of length less than N .For this example, det B = M 6 N 3 .Heuristically, the LLL algorithm will find a vector of ℓ 2 norm |v| 2 ≤ 1.02 n (det B) 1/ dim B .We ignore the 1.02 n factor, and the difference between the ℓ 2 and ℓ 1 norms for the moment.Then the condition we wish to satisfy is For our example, we have (det B) 1/ dim L = (M 6 N 3 ) 1/4 < N .Solving for M , this will be satisfied when M < N 1/6 .In this case, N has 96 bits, and m is 16 bits, so the condition is satisfied.
This can be extended to N 1/e , where e is the degree of the polynomial f by using a larger dimension lattice.Howgrave-Graham's dissertation [HG98] and May's survey [May10] give detailed explanations of this method and improvements.
Factorization from consecutive bits of p.
In this section we show how to use lattices to factor the RSA modulus N if a large portion of contiguous bits of one of the factors (without loss of generality p) is known.Coppersmith solves this problem in [Cop96] but we find the reformulation from Howgrave-Graham as "approximate integer common divisors" [HG01] simpler to apply, and will give that construction here.
Problem setup.
Let N = pq be an RSA modulus with equal-sized p and q. Choosing an example with numbers small enough to fit on the page, we take a 240-bit RSA modulus N, which we assume is known (its numerical value is omitted here). Assume we know a large contiguous portion of the most significant bits b of p, so that p = a + r, where we do not know r but do know the value a = 2^ℓ b. Here ℓ = 30 is the number of unknown bits, or equivalently the left shift of the known bits.
Cast the problem as finding the roots of a polynomial. Let f(x) = a + x. We know that there is some value r such that f(r) = p ≡ 0 mod p. We do not know p, but we know that p divides N and we know N. We know that the unknown r is small, and in particular |r| < R for some known bound R. Here, R = 2^30.
Construct a lattice. Using the coefficient vectors of the polynomials x(x + a), x + a, and N on the monomials (x^2, x, 1), with the column for x^j scaled by R^j, we can form the lattice basis
B = [ R^2  Ra  0 ]
    [ 0    R   a ]
    [ 0    0   N ]
We then run the LLL algorithm on this lattice basis and take the shortest vector in the reduced basis. Extract a polynomial and find the roots. From this vector we form a polynomial by un-scaling each coordinate by the appropriate power of R, and we calculate its roots. In this example, the extracted polynomial has one integer root, r = 0x873209.
We can then reconstruct a + r and verify that gcd(a + r, N ) factors N .
This 3×3 lattice construction works for any |r| < p 1/3 , and directly scales as p increases.In our example, we chose p and q so that they have 120 bits, and r has 30 bits.However, this same construction will work to recover 170 bits from a 512-bit factor of a 1024-bit RSA modulus, or 341 bits from a 1024-bit factor of a 2048-bit RSA modulus.More detailed explanation.The rows of this matrix correspond to the coefficient vectors of the polynomials x(x + a), x + a, and N .We know that each of these polynomials evaluated at x = r will be 0 modulo p, and thus every polynomial corresponding to a vector in the lattice has this property.As in the previous example, each column is scaled by a power of R, so that the ℓ 1 norm of any vector in this lattice is an upper bound on the value of the corresponding (un-scaled) polynomial evaluated at r.
If we can find a vector in the lattice of length less than p, then it corresponds to a polynomial g that must satisfy g(r) < p.Since by construction, g(r) = 0 (mod p), this means that g(r) = 0 over the integers.
We compute the determinant of the lattice to verify that it contains a sufficiently small vector. For this example, det B = R^3 N. This means we need (det B)^(1/dim L) = (R^3 N)^(1/3) < p. Solving for R, this gives R < p^(1/3). For an RSA modulus we have p ≈ N^(1/2), or R < N^(1/6). This method works up to R < p^(1/2) at the limit by increasing the dimension of the lattice, which is accomplished by taking higher multiples of f and N. See Howgrave-Graham's dissertation [HG98] and May's survey [May10] for details on how to do this.
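The following is a small end-to-end sketch of this 3 × 3 construction on a toy key. The prime sizes, the use of fpylll for LLL, and the choice of 16 unknown low bits are illustrative assumptions; real keys require the larger parameters discussed above.

```python
# Hedged sketch of the construction above on a toy key; fpylll is assumed available.
from math import gcd, isqrt
from fpylll import IntegerMatrix, LLL

p = 2305843009213693951              # 2^61 - 1 (prime), the secret factor
q = 2147483647                       # 2^31 - 1 (prime)
N = p * q
ell = 16                             # number of unknown low bits of p
R = 1 << ell                         # bound on the unknown part r
a = (p >> ell) << ell                # known most significant bits of p, low bits zeroed
r_true = p - a

# Rows: coefficient vectors of x(x+a), x+a, N on (x^2, x, 1), column j scaled by R^j.
B = IntegerMatrix.from_matrix([[R * R, R * a, 0],
                               [0,     R,     a],
                               [0,     0,     N]])
LLL.reduction(B)

for i in range(3):
    A2, A1, A0 = B[i, 0] // (R * R), B[i, 1] // R, B[i, 2]   # un-scale the coefficients
    roots = []
    if A2 != 0:
        disc = A1 * A1 - 4 * A2 * A0
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            for num in (-A1 + isqrt(disc), -A1 - isqrt(disc)):
                if num % (2 * A2) == 0:
                    roots.append(num // (2 * A2))
    elif A1 != 0 and A0 % A1 == 0:
        roots.append(-A0 // A1)
    for r in roots:
        if 1 < gcd(a + r, N) < N:
            print("recovered r =", hex(r), "correct:", r == r_true)
```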
RSA key recovery from least significant bits of p
It is also straightforward to adapt this method to deal with a contiguous chunk of unknown bits in the least significant bits of p: if the chunk begins at bit position ℓ, the input polynomial will have the form f (x) = 2 ℓ x + a.This can be multiplied by 2 −ℓ mod N and solved exactly as above.
RSA key recovery from middle bits of p
RSA key recovery from middle bits of p is somewhat more complex than the previous examples, because there are two unknown chunks of bits in the most and least significant bits of p. Problem setup.Assume we know a large contiguous portion of the middle bits of p, so that p = a + r ℓ + 2 t r m , where a is an integer representing the known bits of p, r ℓ and r m are unknown integers representing the least and most significant bits of p that we wish to solve for, and t is the starting bit position of the unknown most significant bits.We know that |r ℓ | < R and |r m | < R for some bound R.
As a concrete example, let be the middle bits of one of its factors p; there are 16 unknown bits in the most and least significant bit positions.Thus we know that R = 2 16 in our concrete example.We wish to recover p.
Cast the problem as finding solutions to a polynomial.In the previous examples, we only had one variable to solve for.Here, we have two, so we need to use a bivariate polynomial.We can write down f (x, y) = x + 2 t y + a, so that f (r ℓ , r m ) = p.
In our concrete example, p has 164 bits, so we have f (x, y) = x + 2 148 y + a.We hope to construct two polynomials g 1 (x, y) and g 2 (x, y) satisfying g 1 (r ℓ , r m ) = 0 and g 2 (r ℓ , r m ) = 0 over the integers.Then we can solve the system for the simultaneous roots.
Construct a lattice.
As before, we will use our input polynomial f and the public RSA modulus N to construct a lattice.Unfortunately for the simplicity of our example, the smallest polynomial that is guaranteed to result in a nontrivial bound on the solution size for our desired roots has degree 3, and results in a lattice of dimension 10.
As before, each column corresponds to a monomial that appears in our polynomials, and each row corresponds to a polynomial that evaluates to 0 mod p at our desired solution.
In our example, we will use the polynomials f 3 , f 2 y, f y 2 , y 3 N, f 2 , f y, y 2 N, f, yN , and N ; the monomials in the columns are x 3 , x 2 y, xy 2 , y 3 , x 2 , xy, y 2 , x, y, and 1.Each column is scaled by the appropriate power of R.
We reduce this matrix using the LLL algorithm, and reconstruct the bivariate polynomials corresponding to each row of the reduced basis.Unfortunately, these are too large to fit on a page.
Solve the system of polynomials to find common roots.Heuristically, we would hope to only need two sufficiently short vectors and then compute the resultant of the corresponding polynomials or use a Gröbner basis to find the common roots, but in our example the two shortest vectors are not algebraically independent.In this case it suffices to use the first three vectors.Concretely, we construct an ideal over the ring of bivariate polynomials with integer coefficients whose basis is the polynomials corresponding to the three shortest vectors in the reduced basis for L(B) above, and then call a Gröbner basis algorithm on it.For this example, the Gröbner basis is exactly the polynomials (x − 0x339b, y − 0x5a94), which reveals the desired solutions for x = r ℓ and y = r m .
In this example, the nine shortest vectors all vanish at the desired solution, so we could have constructed our Gröbner basis from other subsets of these short vectors. More detailed explanation. The determinant of our lattice is det B = R^20 N^4, and the lattice has dimension 10. We hope to find two vectors v_1 and v_2 of length approximately (det B)^(1/dim B); this is not guaranteed to be possible, but for random lattices we expect the vectors in a reduced basis to have close to the same lengths. The ℓ_1 norms of the vectors v_1 and v_2 are upper bounds on the magnitude of the corresponding polynomials f_v1(x, y), f_v2(x, y) evaluated at the desired roots r_ℓ, r_m. In order to guarantee that these vanish over the integers, we want the inequality (det B)^(1/dim B) = (R^20 N^4)^(1/10) < p ≈ N^(1/2). Thus the desired condition for success is R < N^(1/20). In our example, N was 326 bits long, and we chose R to have 16 bits.
This attack was applied in [BCC + 13] to recover RSA keys generated by a faulty random number generator that generated primes with predictable sequences of bits.
RSA key recovery from multiple chunks of bits of p
The above idea can be extended to handle more chunks of p at the cost of increasing the dimension of the lattice.Each unknown "chunk" of bits introduces a new variable in the linear equation that will be solved for p.At the limit, the algorithm requires 70% of the bits of p divided into at most log log N blocks [HM08].
Open problem: RSA key recovery from many nonconsecutive bits of p
The above methods scale poorly with the number of chunks of known bits.It is an open problem to develop a subexponential-time method to recover an RSA key or factor the RSA modulus N with more than log log N unknown chunks of bits, if these bits are only known about, say, one factor p of N .If information is known about both p and q or other fields of the RSA private key, then the methods of Section 4.3.1 may be applicable.
Partial recovery of RSA d p
Recovering the CRT coefficient d_p = d mod (p − 1) from a large number of contiguous known bits can be done using the approach given in Sections 4.2.2, 4.2.3, and 4.2.4. We illustrate the method in the case where many contiguous most significant bits of d_p are known.
Problem setup.
Let N be a 240-bit RSA modulus (its numerical value is omitted here). We will use public exponent e = 65537.
In this problem, we are given some of the most significant bits b of d_p, and we want to recover the rest. As before, let ℓ be the number of least significant bits of d_p we need to recover, so that there is some value a = 2^ℓ b with a + r = d_p for some r < 2^ℓ. Cast the problem as finding the roots of a polynomial. We start with the relation ed_p ≡ 1 mod (p − 1) and rewrite it as an integer relation by introducing a new variable k_p: ed_p = 1 + k_p(p − 1). The integer k_p is unknown, but we know that k_p < e since d_p < (p − 1). In our example, and typically in practice, we have e = 65537, so we will run the attack for all possible values of 1 ≤ k_p < 65537. With the correct parameters, we are guaranteed to find a solution for the correct value of k_p. For other, incorrect guesses of k_p, in practice the attack is unlikely to find any solutions, but any spurious solutions that arise can be eliminated because they will not result in a factorization of N.
We can rearrange this relation: substituting d_p = a + r gives e(a + r) = 1 + k_p(p − 1), so (a + r) + (k_p − 1) e^(−1) ≡ 0 mod p, where e^(−1) is computed modulo N. Setting A = a + (k_p − 1) e^(−1) mod N, we wish to find a small root r of the polynomial f(x) = A + x modulo p, where |r| < R.
For our concrete example, we have R = 2^30 and k_p = 23592. Construct a lattice. Since the form of the problem is identical to the previous section, we use the same lattice construction with A in place of a. We apply the LLL algorithm to this basis, take the shortest vector in the reduced basis, and construct the corresponding polynomial. Computing its roots, we discover that r = 0x39d9b141 is among them, and that gcd(A + r, N) = p.
At the limit, this technique can work up to R < p 1/2 [BM03] by increasing the dimension of the lattice with higher degree polynomials and higher multiplicities of the root.
Partial recovery of RSA d from most significant bits is not possible
Partial recovery for d varies somewhat depending on the bits that are known and the size of e. Since e is small in practice, we will focus on that case here. Most significant bits of d. When e is small enough to brute force, the most significant half of the bits of d can be recovered easily with no additional information. This implies that if full key recovery were possible from only the most significant half of bits of d, then small public exponent RSA would be completely broken. Since small public exponent RSA is not known to be insecure in general, this unfortunately means that no such key recovery method is possible for this case.
Consider the RSA equation ed = 1 + k·φ(N) = 1 + k(N − p − q + 1) for some positive integer k < e, so that d = kN/e + (1 + k(1 − p − q))/e. Since p + q ≈ √N, the second term affects only the least significant half of the bits of d, so the value kN/e shares approximately the most significant half of its bits in common with d.
On the positive side, this observation allows the attacker to narrow down possible values for k if the attacker knows any most significant bits of d for certain.See Boneh, Durfee, and Frankel [BDF98] for more details.
Partial recovery of RSA d from least significant bits
For low-exponent RSA, if an adversary knows the least significant t bits of d, then this can be transformed into knowledge of the least significant t bits of p, and then the method of Section 4.2.3 can be applied. This algorithm is due to Boneh, Durfee, and Frankel [BDF98]. Assume the adversary knows the t least significant bits of d; call this value d_0. Then, writing s = p + q, we have e·d_0 ≡ 1 + k(N − s + 1) mod 2^t for some positive integer k < e. The adversary tries all possible values of k, 1 < k < e, to obtain e candidate values for the t least significant bits of s.
Then for each candidate s, the least significant bits of p are solutions to the quadratic equation p^2 − sp + N ≡ 0 mod 2^t. Let a be a candidate solution for the least significant bits of p. Putting this in the context of Section 4.2.3, the attacker wishes to solve f(x) = a + 2^t·x ≡ 0 mod p. This can be multiplied by 2^{-t} mod N and the exact method of Section 4.2.3 can be applied to recover p. Since at the limit the methods of Section 4.2.3 work to recover N^{1/4} bits of p, this method will work when as few as N^{1/4} bits of d are known.
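As a concrete illustration of the two steps above, here is a small Sage-style sketch. It makes the simplifying assumption that the multiplier k is odd (so that it is invertible modulo 2^t); the function name and structure are ours, not from the paper's appendix.

def candidate_p_lsbs(N, e, d0, t):
    # Returns, for each guess of k, a candidate s = p + q mod 2^t and the
    # candidate least significant bits of p obtained by bitwise lifting.
    out = []
    for k in range(1, e):
        if k % 2 == 0:
            continue                     # simplification: only odd k are inverted mod 2^t here
        # e*d0 = 1 + k*(N - s + 1) mod 2^t  =>  s = N + 1 - (e*d0 - 1) * k^(-1) mod 2^t
        s = (N + 1 - (e * d0 - 1) * inverse_mod(k, 2**t)) % 2**t
        # lift solutions of x^2 - s*x + N = 0 modulo 2, 4, ..., 2^t
        sols = [x for x in (0, 1) if (x * x - s * x + N) % 2 == 0]
        for i in range(2, t + 1):
            sols = [x + b * 2**(i - 1) for x in sols for b in (0, 1)
                    if ((x + b * 2**(i - 1))**2 - s * (x + b * 2**(i - 1)) + N) % 2**i == 0]
        out.append((k, s, sols))
    return out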
There are more sophisticated lattice algorithms that involve different tradeoffs, but for very small e, which is typically the case in practice, they require nearly all of the least significant bits of d to be known [BM03].
Non-consecutive bits known with redundancy
This section covers key recovery in the case that many non-consecutive bits of secret values are known or need to be recovered.The lattice methods covered in the previous section can be adapted to recover multiple chunks of unknown key bits, but at a high cost: the lattice dimension increases with the number of chunks, and when a large number of bits is to be recovered, the running time can be exponential in the number of chunks.
In this section, we explore a different technique that allows a different tradeoff.In this case, the attacker has knowledge of many non-contiguous bits of secret key values, and knows these for multiple secret values of the key.The attacker might have learned parts of both p and q, or d mod (p − 1) and d mod (q − 1), for example.We begin by analyzing a case that is less likely to arise in practice, the case of random erasures of bits of p and q, in order to give the main ideas behind the algorithm in the simplest setting.
Random known bits of p and q
The main technique used for these cases is a branch and prune algorithm.The idea behind the branch and prune algorithm is to write down an integer relationship between the elements in the secret key and the public key, and progressively solve for unknown bits of the secret key, starting at the least significant bits.This produces a tree of solutions: every branch corresponds to guesses for one or more unknown bits at a particular solution, and branches are pruned if the guesses result in incorrect relationships to the public key.
This algorithm is presented and analyzed in [HS09].
Problem setup. Let N = 899. Imagine we have learned some bits of p and q in an erasure model: for each bit position, we either know the bit value, or we know that we do not know it. For example, we have p = ⊔11⊔1 and q = ⊔1⊔0⊔, where ⊔ marks an unknown bit.
Defining an integer relation.The integer relation that we will take advantage of for this example is N = pq.
Iteratively solve for each bit. The main idea of the algorithm is to iteratively solve for the bits of the unknowns p and q, starting at the least significant bits. These can then be checked against the known public value of N. At the least significant bit, the value is known for p and is unknown for q. There are two options for the value of q, but only the bit value 1 satisfies the constraint that pq ≡ N mod 2. The algorithm then proceeds to the next step, where the value of the second bit is known for q but not for p. Only the bit value 1 satisfies the constraint pq ≡ N mod 2^2, so the algorithm continues down this branch. Since this generates a tree, the tree can be traversed in depth-first or breadth-first order; depth-first will be more memory efficient. This is illustrated in Figure 10.
Figure 10: The branch and prune tree for our numeric example.The algorithm begins at the right-hand node representing the least significant bits, and iteratively branches and prunes guesses for successive bits moving towards the most significant bits.
The algorithm works because N = pq mod 2 i for all values of i.Additionally, we want some assurance that an incorrect guess for a value at a particular bit location should eventually lead to that branch being pruned.Heuristically, when the ith bits of both p and q are unknown, the tree will branch; when bit i is known for one but not the other, there will be a unique solution; and when the ith bits of both p and q are known, an incorrect solution has around a 50% probability of being pruned.Thus the algorithm is expected to be efficient as long as there are not long runs of simultaneous unknown bits.We assume the length of p and q is known.Once the algorithm has traversed this many bits, the final solution pq = N can be checked without modular constraints.
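The following is a minimal sketch of this branch-and-prune search. The data layout (dictionaries of known bits indexed from the least significant bit) and function name are our own choices, and the final line encodes the small example above.

def branch_and_prune(N, p_known, q_known, nbits):
    # p_known, q_known: dict mapping bit index -> known bit value (index 0 = least significant bit)
    stack = [(0, 0, 0)]                      # (next bit index i, p mod 2^i, q mod 2^i)
    while stack:
        i, p, q = stack.pop()                # depth-first traversal
        if i == nbits:
            if p * q == N and p > 1 and q > 1:
                return p, q
            continue
        for pb in ([p_known[i]] if i in p_known else [0, 1]):
            for qb in ([q_known[i]] if i in q_known else [0, 1]):
                p2, q2 = p | (pb << i), q | (qb << i)
                if (p2 * q2 - N) % (1 << (i + 1)) == 0:   # prune: product must match N mod 2^(i+1)
                    stack.append((i + 1, p2, q2))
    return None

# The example from the text: N = 899 with p = ?11?1 and q = ?1?0? (most significant bit first)
print(branch_and_prune(899, {0: 1, 2: 1, 3: 1}, {1: 0, 3: 1}, 5))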
When random bits are known from p and q, the analysis of [HS09] shows that the tree of generated solutions is expected to have polynomial size when 57% of the bits of p and q are revealed at random.This algorithm can still be efficient if the distribution of bits known is not random, as long as it allows efficient pruning of the tree.An example would be learning 3 out of every 5 bits of p and q, as in [YGH16].
Paterson, Polychroniadou, and Sibborn [PPS12] give an analysis of the required information for different scenarios, and observe that doing a depth-first search is more efficient memory-wise than a breadth-first search.
Random known bits of the Chinese remainder coefficients d mod (p − 1) and d mod (q − 1)
The description in Section 4.3.1 can be extended to recover the Chinese remainder exponents d_p = d mod (p − 1) and d_q = d mod (q − 1) using the same technique as the previous section. This is the most common case encountered in RSA side channel attacks.
Factorization of N = pq given non-consecutive bits of d p , d q .
Problem setup.Let N = 899 be the RSA public modulus, and e = 17 be the public exponent.Imagine that the adversary has recovered some bits of the secret Chinese remainder exponents d p = d mod (p − 1) and d q = d mod (q − 1).
We wish to recover the missing unknown bits of d p and d q , which will allow us to recover the secret key itself.
Define integer relations.
We know that e·d_p ≡ 1 mod (p − 1) and e·d_q ≡ 1 mod (q − 1). We rewrite these as the integer relations e·d_p = 1 + k_p(p − 1) and e·d_q = 1 + k_q(q − 1). We have no information about the values of p and q, but their values are uniquely determined from a guess for d_p or d_q.
We also know that pq = N.
The values k p and k q are unknown, so we must brute force them by running the algorithm for all possible values.We expect it to fail for incorrect guesses, and succeed for the unique correct guess.Equation 2 in Section 4.1 shows that there is a unique value of k q for a given guess for k p .Since k p < e we need to brute force at most e pairs of values for k p and k q .
In our example, we have k p = 13 and k q = 3, although this won't be verified as the correct guesses until the solution is found.
Iteratively solve for each bit. With our integer relations in place, we can then use them to iteratively solve for each bit of the unknowns d_p, d_q, p, and q, starting from the least significant bit. We check guesses for each value against our three integer relations, and at bit i we prune those that do not satisfy the relations mod 2^i. We have three relations and four unknowns, so we generate at most two new branches at each bit.
Figure 11: A sample branch and prune tree for recovering d_p and d_q from known bits, starting from the least significant bits on the right side of the tree. At each bit location, the value of p up to bit i is uniquely determined by the guess for d_p up to bit i, and the value of q up to bit i is uniquely determined by the guess for d_q up to bit i. The red X marks the branches that are pruned by verifying the relation pq ≡ N mod 2^i.
Since the values of p and q up to bit i are uniquely determined by our guess for d_p and d_q up to bit i, the algorithm prunes solutions based on the relation pq ≡ N mod 2^i. The analysis of this case is then identical to the case of learning bits of p and q at random.
For incorrect guesses for the values of k p and k q , we expect the equations to act like random constraints, and thus to quickly become unsatisfiable.Once there are no more possible solutions in a tree, the guess for k p and k q is known to be incorrect.This is illustrated by Figure 11.
Recovering RSA keys from indirect information
For this type of key recovery algorithm, it is not always necessary to have direct knowledge of bits of the secret key values with certainty. It can still be possible to apply the branch-and-prune technique to recover secret keys even if only "implicit" information is known about the secret values, as long as this implicit information implies a relationship that can be checked to prioritize or prune candidate key guesses from the least significant bits. Examples in the literature include [BBG + 17], which computes partial sliding window square-and-multiply sequences for candidate guesses and compares them to the ground truth measurements, and [MVH + 20], which compares the sequence of program branches in a binary GCD algorithm implementation computed over the cryptographic secrets to a ground truth measurement.
Open problem: Random known bits without redundancy
As mentioned in Section 4.2.6, it is an open problem to recover an RSA secret key when many nonconsecutive chunks of bits need to be recovered, and the bits known are from only one secret key field, with no additional information from other values. Applying the branch-and-prune methods discussed in this section to a single secret key value, say a factor p of N, where random bits are known, would result in a tree with exponentially many solutions unless additional information were available to prune the tree.
5 Key recovery methods for DSA and ECDSA
DSA and ECDSA preliminaries
From the perspective of partial key recovery, DSA and ECDSA are very similar, and we will cover them together.We will use slightly nonstandard notation to describe each signature scheme to make them as close as possible, so that we can use the same notation to describe the attacks simultaneously.
DSA
The Digital Signature Algorithm [NIS13] (DSA) is an adaptation of the ElGamal Signature Scheme [EG85] that reduces the amount of computation required and the resulting signature size by using Schnorr groups [Sch90].
Parameter Generation.A DSA public key includes several global parameters specifying the group to work over: a prime p, a subgroup of order n satisfying n | (p − 1), and an integer g that generates a group of order n mod p, where n is typically much smaller than p, for example 256 bits for a 2048-bit p.A single set of group parameters can be shared across many public keys, or individually generated for a given public key.
To generate a long-term private signing key, an implementation starts by choosing the secret key 0 < d < n and computing y = g^d mod p. The public key is the tuple (y, g, p, n) and the private key is (d, g, p, n). Signature Generation. To sign a message m, implementations apply a collision-resistant hash function H to m to obtain a hashed message h = H(m). To generate the signature, the implementation generates an ephemeral secret integer 0 < k < n, and computes the integers r = (g^k mod p) mod n and s = k^{-1}(h + dr) mod n. The signature is the pair (r, s).
ECDSA
The Elliptic Curve Digital Signature Algorithm (ECDSA) is an adaptation of DSA to use elliptic curves instead of Schnorr groups.
Parameter Generation.An ECDSA public key includes global parameters specifying an elliptic curve E over a finite field together with a generator point g of a subgroup over E of order n.
To generate a long-term private signing key, an implementation starts by choosing a secret integer 0 < d < n and computing the elliptic curve point y = dg on E. The public key is the elliptic curve point y together with the global parameters specifying E, g, and n. The private key is the integer d together with these global parameters. Signature Generation. To sign a message m, implementations apply a collision-resistant hash function H to m to obtain a hashed message h = H(m). To generate the signature, the implementation generates an ephemeral secret 0 < k < n. The implementation computes the elliptic curve point kg and sets the value r to be the x-coordinate of kg. The implementation then computes the integer s = k^{-1}(h + dr) mod n. The signature is the pair of integers (r, s).
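For concreteness, here is a minimal Sage sketch of ECDSA key and signature generation over the toy curve used in the examples later in this section. It mirrors the sign() helper in the paper's code appendix; the bounded nonce length klen is there only so that the attacks on partially known nonces can be exercised, and is not how nonces should be generated in practice.

p = 0xffffffffffffd21f
E = EllipticCurve(GF(p), [0, 3])          # the curve y^2 = x^3 + 3 over F_p
n = 0xfffffffefa23f437                    # group order; the generator has x-coordinate 1
G = E.lift_x(1)
d = ZZ.random_element(1, n)               # long-term secret signing key
Y = d * G                                 # public verification key

def ecdsa_sign(h, d, klen=64):
    k = ZZ.random_element(1, 2**klen)     # ephemeral nonce; klen < 64 models known zero MSBs
    r = Integer((k * G).xy()[0]) % n
    s = (inverse_mod(k, n) * (h + d * r)) % n
    return r, s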
Nonce recovery and (EC)DSA security.
The security of (EC)DSA is extremely dependent on the signature nonce k being securely generated, uniformly distributed, and unique for every signature.If the nonce for one or more signatures is generated in a vulnerable manner, then an attacker may be able to efficiently recover the long-term secret signing key.Because of this property, side channel attacks against (EC)DSA almost universally target properties of the signature nonces.
Key recovery from signature nonce. For a DSA or ECDSA key, if the nonce k is known for a single signature, it is simple to compute the long-term private key. Rearranging the expression for s, the secret key d can be recovered as d = r^{-1}(s·k − h) mod n.
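A one-line check of this rearrangement, using Sage's inverse_mod (the function name is ours):

def key_from_known_nonce(r, s, h, k, n):
    # s = k^-1 (h + d*r) mod n  rearranges to  d = r^-1 (s*k - h) mod n
    return (inverse_mod(r, n) * (s * k - h)) % n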
(EC)DSA key recovery from most significant bits of the nonce k
There are two families of techniques for (EC)DSA key recovery from most significant bits of the nonce k.Both techniques require knowing information about the nonce used in multiple signatures from the same secret key.We assume that the attacker knows the long-term public signature verification key, and has access to multiple signatures generated using the corresponding secret signing key.The attacker also needs to know the hash of the messages that the signatures correspond to.The first technique is via lattices.This is generally considered more straightforward to implement, and works well when more nonce bits are known, and information from fewer signatures is available: we would need to know at least two most significant bits from the nonces of dozens to hundreds of signatures.We cover this technique below.
The second technique is via Fourier analysis. This technique can deal with as little as one known most significant bit from the signature nonces, but empirically appears to require an order of magnitude or more signatures than the lattice approach, and as many as 2^32 to 2^35 for record computations [ANT + 20]. We leave a more detailed tutorial on this technique to future work. Nice descriptions of the algorithm can be found in [DHMP13, TTA18].
Lattice attacks
The main idea behind lattice attacks for (EC)DSA key recovery is to formulate the (EC)DSA key recovery problem as an instance of the Hidden Number Problem and then compute the shortest vector of a specially constructed lattice to reveal the solution.
Below we give a simplified example that shows how to recover the key from a small number of signatures when many of the most significant bits of the nonce are zero, and then we will show how to extend the attack to more signatures with fewer bits known from each nonce, and cover the case of arbitrary bits known from the nonce. Problem setup. Let p = 0xffffffffffffd21f be a 64-bit prime, and let E : y^2 = x^3 + 3 be an elliptic curve over F_p. Let g = (1, 2) be our generator point on E, which has order n = 0xfffffffefa23f437.
Cast the problem as a system of equations. Our signatures above satisfy the equivalences s_i ≡ k_i^{-1}(h_i + d·r_i) mod n for i = 1, 2. The values k_1, k_2, and d are unknown; the other values are known. We can eliminate the variable d and rearrange terms to obtain a single linear relation k_1 + t·k_2 + u ≡ 0 mod n, where t = −s_1^{-1}·s_2·r_1·r_2^{-1} mod n and u = s_1^{-1}·r_1·r_2^{-1}·h_2 − s_1^{-1}·h_1 mod n. We wish to solve for k_1 and k_2, and we know that they are both small. Let |k_1|, |k_2| < K.
For our example, we have K = 2^32.
Construct a lattice. We construct a lattice basis B whose rows are (n, 0, 0), (t, 1, 0), and (u, 0, K). The vector (k_1, k_2, K) is in this lattice by construction (up to signs), and we expect it to be particularly short.
Calling the BKZ algorithm on B results in a basis that contains this short vector, v = (−0x270feca3, 0x4dbd2db0, 0x100000000), as the third vector in the reduced basis. We can verify that the value r_1 in our example matches the x-coordinate of k_1·g, and we can use Equation 5 to compute the private key d.
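A compact Sage sketch of this two-signature attack, following the construction above. The function signature and sign handling are our own; LLL is used here because it is sufficient at this dimension, and in practice one should confirm the recovered d against the public key.

def two_signature_hnp(r1, s1, h1, r2, s2, h2, n, K):
    t = (-inverse_mod(s1, n) * s2 * r1 * inverse_mod(r2, n)) % n
    u = (inverse_mod(s1, n) * r1 * inverse_mod(r2, n) * h2 - inverse_mod(s1, n) * h1) % n
    M = matrix(ZZ, [[n, 0, 0], [t, 1, 0], [u, 0, K]])
    for v in M.LLL():
        if abs(v[2]) == K and v[0] != 0:
            k1, k2 = abs(v[0]), abs(v[1])                  # candidate nonces, up to sign
            d = (inverse_mod(r1, n) * (s1 * k1 - h1)) % n  # key from nonce, as in Equation 5
            return k1, k2, d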
More detailed explanation. In our example, we have constructed a lattice that is guaranteed to contain our target vector. In order for this method to work, we hope that it is the shortest vector, or close to the shortest vector in the lattice, and we solve the shortest vector problem in the lattice in order to find it. The vector v = (k_1, k_2, K) has length |v|_2 ≤ √3·K by construction. Our lattice has determinant det B = nK. Ignoring constants for the moment, if our lattice were truly random, we would expect the shortest vector to have length ≈ (det B)^{1/dim B}. Thus if |v|_2 < (det B)^{1/dim B}, we expect it to be the shortest vector in the lattice, and to be found by a sufficiently good approximation to the shortest vector problem.
For our example, we expect this to be satisfied when K < (nK)^{1/3}, or when K < √n. The way we have presented this method may remind the reader of the flavor of the methods in Section 4.2.1. The specific lattice construction used here is a sort of "dual" to the constructions from Section 4.2.1, in that the target vector is the desired solution to our system of equations. However, in contrast to Section 4.2.1, we are not guaranteed to find the solution we desire once we find a sufficiently short vector: this method can fail, with a probability that decreases the shorter our target vector is compared to the expected shortest vector length.
The Hidden Number Problem.The lattice-based algorithms we describe for solving these problems are based on the Hidden Number Problem introduced by Boneh and Venkatesan [BV96].They applied the technique to show that the most significant bits of a Diffie-Hellman shared secret are hardcore.Nguyen and Shparlinski showed how to use this approach to break DSA and ECDSA from information about the nonces [NS02,NS03].
Various extensions of the technique can deal with different numbers of bits known per signature [BvSY14] or errors [DDE + 18].
There is another algorithm to solve this problem using Fourier analysis [Ble98, DHMP13] originally due to Bleichenbacher; it requires more samples than the lattice approach but can handle fewer bits known.
Scaling to many signatures to decrease the number of bits known.
To decrease the number of bits required from each signature, we can incorporate more signatures into the lattice. If we have access to many signatures (r_1, s_1), . . ., (r_m, s_m) on message hashes h_1, . . ., h_m, we use the same method as above to write down the equivalences s_i ≡ k_i^{-1}(h_i + d·r_i) mod n, then as above we rearrange terms and eliminate the variable d to obtain one relation per signature, and we construct the corresponding lattice. In order to solve SVP, we must run an algorithm like BKZ with block size dim L(B) = m + 1. Using BKZ to look for the shortest vector can be done relatively efficiently up to dimension around 100 currently; beyond that it becomes increasingly expensive. In practice, one can often achieve a faster running time for fixed parameters by using more samples to construct a larger dimension lattice, and applying BKZ with a smaller block size to find the target vector. This method can recover a secret key from knowledge of the 4 most significant bits of nonces from 256-bit ECDSA signatures using about 70 samples, and 3 most significant bits using around 95 samples. For fewer bits known, either the Fourier analysis technique or a more powerful application of these lattice techniques is required, along with significantly more computational power.
Known nonzero most significant bits. If the most significant bits of the k_i are nonzero and known, we can write k_i = a_i + b_i, where the a_i are known and the b_i are small, so they satisfy some bound |b_i| < K. Then substituting into Equation 6, we obtain an instance of the same form, and use the same lattice construction as above, with u′_i substituted for u_i. Nonce rebalancing. The signature nonces k_i take values in the range 0 < k_i < n, but the lattice construction bounds the absolute value |k_i|. Thus if we know that 0 < k_i < K for some bound K, we can achieve a tighter bound by renormalizing the signatures. Let k′_i = k_i − K/2, so that |k′_i| ≤ K/2. Then we can rewrite Equations 7 in terms of the k′_i, which gives an equivalent problem with t′_i = t_i, u′_i = (t_i + 1)K/2 + u_i, and K′ = K/2, and can solve as before. This optimization can make a significant difference in practice by reducing the number of required samples.
(EC)DSA key recovery from least significant bits of the nonce k
The attack described in the previous section works just as well for known least significant bits of the (EC)DSA nonce. Problem setup. We input a collection of (EC)DSA signatures (r_i, s_i) on message hashes h_i. For each signature, we know the ℓ least significant bits of the nonce, so the signature nonces k_i satisfy k_i = a_i + 2^ℓ·b_i for known a_i, and b_i unknown but satisfying |b_i| < B. Substituting these into Equations 7 and multiplying through by 2^{-ℓ} mod n, we get an equivalent instance of the problem with B′ = B, and solve as above.
(EC)DSA key recovery from middle bits of the nonce
Recovering an ECDSA key from middle bits of the nonce k is slightly more complex than the methods discussed above, because we have two unknown "chunks" of the nonce to recover per signature. Fortunately, we can deal with these by extending the methods to multiple variables per signature. The method we will use here is similar to the multivariate extension in Section 4.2.4, but this case is simpler.
Problem setup. We will use the same elliptic curve group parameters as above. Let p = 0xffffffffffffd21f be a 64-bit prime, and let E : y^2 = x^3 + 3 be an elliptic curve over F_p. Let g = (1, 2) be our generator point on E, which has order n = 0xfffffffefa23f437.
We have two ECDSA signatures, (r_1, s_1) = (1a4adeb76b4a90e0, eba129bb2f97f7cd) on message hash h_1 = 608932fcfaa7785d and (r_2, s_2) = (c4e5bec792193b51, 0202d6eecb712ae3) on message hash h_2 = e5f8eca48ac2a45c. We know some middle bits of the corresponding nonces. Let a_1 be the middle 34 bits of the signature nonce k_1 used for the first signature above; the first and last 15 bits are unknown. Let a_2 be the middle 34 bits of the signature nonce k_2 used for the second signature above.
Cast the problem as a system of equations. As above, our two signature nonces k_1 and k_2 satisfy a relation of the form k_1 − t·k_2 + u ≡ 0 mod n, where t and u are computed from the public signature values and hashes. Since we know the middle bits of k_1 and k_2 are a_1 and a_2 respectively, we can write k_1 = b_1 + a_1 + 2^49·c_1 and k_2 = b_2 + a_2 + 2^49·c_2, where b_1, c_1, b_2, and c_2 are unknown but small, less than some bound K. In our example, we have K = 2^15. Substituting and rearranging into Equation 8, we obtain a linear equation in the four unknowns, and we wish to find its small solution. Construct a lattice. We construct a lattice basis from the weighted coefficient vectors of this linear polynomial and of n·x_1, n·y_1, n·x_2, and n·y_2. If we call the BKZ algorithm on B, we obtain a basis that contains a short vector v = (0x6589e5fb1823·K, −0x42b0986d3e11·K, . . .); this corresponds to a linear equation in the unknowns. We can do the same for the next three short vectors in the basis, and obtain four linear polynomials in our four unknowns. Solving the system, we obtain the solutions for b_1, c_1, b_2, and c_2. More detailed explanation. The row vectors of the lattice correspond to the weighted coefficient vectors of the linear polynomial f in Equation 9 and of n·x_1, n·y_1, n·x_2, and n·y_2. Each of these linear polynomials vanishes by construction modulo n when evaluated at the desired solution, and thus so does any linear polynomial corresponding to a vector in this lattice. If we can find a lattice vector whose ℓ1 norm is less than n, then the corresponding linear equation vanishes over the integers when evaluated at the desired solution. Since we have four unknowns, if we can find four sufficiently short lattice vectors corresponding to four linearly independent equations, we can solve for our desired unknowns. The determinant of our example lattice is det B = K^4·n^4, and the lattice has dimension 5. Thus, ignoring approximation factors and constants, we expect to find a vector of length (det B)^{1/dim B} = (Kn)^{4/5}. This is less than n when K^4 < n; in our example this is satisfied because we have chosen a 15-bit K and a 64-bit n.
The determinant bounds guarantee that we will find one short lattice vector, but do not guarantee that we will find four short lattice vectors.For that, we rely on the heuristic that the reduced vectors of a random lattice are close to the same length.
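The paper's code appendix contains a Sage routine for this example (ecdsa_middle_bits); the following is a lightly cleaned-up rendering of the same computation, with the group order, hashes, signatures, and nonces taken from the text and appendix. The nonces appear only to derive the "known" middle bits a_1 and a_2.

n = 0xfffffffefa23f437
h1, h2 = 0x608932fcfaa7785d, 0xe5f8eca48ac2a45c
r1, s1 = 0x1a4adeb76b4a90e0, 0xeba129bb2f97f7cd
r2, s2 = 0xc4e5bec792193b51, 0x0202d6eecb712ae3
k1, k2 = 0x734450e2fd5da41c, 0x4de972930ab4a534
a1 = (k1 % 2**49) - (k1 % 2**15)          # known middle 34 bits of k1, in place
a2 = (k2 % 2**49) - (k2 % 2**15)
t = (r1 * inverse_mod(s1, n) * inverse_mod(r2, n) * s2) % n
u = (-inverse_mod(s1, n) * h1 + r1 * inverse_mod(s1, n) * inverse_mod(r2, n) * h2) % n
X = 2**15                                 # bound K on each unknown chunk
M = matrix(ZZ, 5)
M[0] = [X, X * 2**49, -X * t, -X * t * 2**49, a1 - t * a2 + u]
M[1, 1] = n * X; M[2, 2] = n * X; M[3, 3] = n * X; M[4, 4] = n
A = M.LLL()
R.<x1, y1, x2, y2> = ZZ[]
def getf(A, i):
    # unscale row i back into a linear polynomial in the unknown chunks
    return A[i, 0]/X * x1 + A[i, 1]/X * y1 + A[i, 2]/X * x2 + A[i, 3]/X * y2 + A[i, 4]
I = ideal([getf(A, i) for i in range(4)])
print(I.groebner_basis())                 # reveals b1, c1, b2, c2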
(EC)DSA key recovery from many chunks of nonce bits
The above technique can be extended to an arbitrary number of variables. The extension is called the Extended Hidden Number Problem [HR07] and can be used to solve for ECDSA keys when many chunks of signature nonces are known. Each unknown "chunk" of nonce in each signature introduces a new variable, so the resulting lattice will have dimension one larger than the total number of unknowns; if there are m signatures and h unknown chunks of nonce per signature, the lattice will have dimension mh + 1. We expect this technique to find the solution when the parameters are such that the system of equations has a unique solution. If the size of each chunk is K, heuristically this will happen when K^{mh} < n^{m−1}. This technique has been used in practice in [FWC16] and further explored in [DPP20].
6 Key recovery method for the Diffie-Hellman Key Exchange
Finite field and elliptic curve Diffie-Hellman preliminaries
The Diffie-Hellman (DH) key exchange protocol [DH76] allows two parties to create a common secret in a secure manner.We summarize the protocol in the context of finite fields and elliptic curves.
Finite field Diffie-Hellman. Finite-field Diffie-Hellman parameters are specified by a prime p and a group generator g. Common implementation choices are p a safe prime, i.e., q = (p − 1)/2 is prime, in which case g is often equal to 2, 3 or 4, or p chosen such that p − 1 has a 160, 224, or 256-bit prime factor q and g generates a subgroup of F*_p of order q. Key exchange is performed as follows:
1. Alice chooses a random private key a, where 1 ≤ a < q, and computes a public key A = g^a mod p.
2. Bob chooses a random private key b, where 1 ≤ b < q, and computes a public key B = g^b mod p.
3. Alice and Bob exchange the public keys.
4. Alice computes s_A = B^a mod p. Bob computes s_B = A^b mod p.
Because B^a mod p = (g^b)^a mod p = (g^a)^b mod p = A^b mod p, we have s_A = s_B. The latter is the secret that Alice and Bob now share.
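A toy Sage sketch of the exchange above; randint and power_mod are Sage built-ins, and the parameter sizes are for illustration only.

def dh_toy_exchange(p, g, q):
    a = randint(1, q - 1); A = power_mod(g, a, p)    # Alice's key pair
    b = randint(1, q - 1); B = power_mod(g, b, p)    # Bob's key pair
    s_A = power_mod(B, a, p)                         # Alice's view of the shared secret
    s_B = power_mod(A, b, p)                         # Bob's view of the shared secret
    assert s_A == s_B
    return s_A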
Elliptic Curve Diffie-Hellman.The Elliptic Curve Diffie-Hellman (ECDH) protocol is the elliptic curve counterpart of the Diffie-Hellman key exchange protocol.In ECDH, Alice and Bob agree on an elliptic curve E over a finite field and a generator G of order q.
The protocol proceeds as follows: 1. Alice chooses a random private integer a, where 1 ≤ a < q and computes a public key A = aG.
2. Bob chooses a random private integer b, where 1 ≤ b < q and computes a public key B = bG.
3. Alice and Bob exchange the public keys.
4. Alice computes s_A = aB. Bob computes s_B = bA.
The shared secret is s_A = s_B = abG.
Most significant bits of finite field Diffie-Hellman shared secret
The Hidden Number Problem approach we used in the previous section to recover ECDSA or DSA keys from information about the nonces can also be used to recover a Diffie-Hellman shared secret from most significant bits.
Recovering Diffie-Hellman shared secret from most significant bits of s.
Problem setup.Let p = 0xffffffffffffffffffffffffffffc3a7 be a 128-bit prime used for finite field Diffie-Hellman, and let g = 2 be a generator of the multiplicative group modulo p.
Let s be the Diffie-Hellman shared secret between the public keys A = g^a mod p = 0x3526bb85185259cd42b61e5532fe60e0 and B = g^b mod p = 0x564df0b92ea00ea314eb5a246b01ac9c.
We have learned the value of the first 65 bits of s: let r_1 = 0x3330422f6047011b8000000000000000, so we know that s = r_1 + k_1 where k_1 < K = 2^63. Let c = 0x56e112dac14f4a4cc02951414aa43a38. We have also learned the most significant 65 bits of the Diffie-Hellman shared secret between AC = g^{a+c} = g^a·g^c mod p and B. Let r_2 = 0x80097373878e37d20000000000000000.
We know that g^{(a+c)b} = g^{ab}·g^{bc} = s·B^c mod p. Let t = B^c, so s·t ≡ r_2 + k_2 mod p, where k_2 < K. Cast the problem as a system of equations. We have the two relations s = r_1 + k_1 and s·t ≡ r_2 + k_2 mod p, where s, k_1, and k_2 are unknown and k_1, k_2 are small, and r_1, r_2, and t are known. We can eliminate the variable s to obtain the linear equation k_1 − t^{-1}·k_2 + r_1 − t^{-1}·r_2 ≡ 0 mod p. We now have a linear equation in the same form as the Hidden Number Problem we solved in the previous section.
Construct a lattice. We construct the lattice basis M with rows (p, 0, 0), (t^{-1} mod p, 1, 0), and (r_1 − t^{-1}·r_2 mod p, 0, 2^64).
If we call the LLL algorithm on M, we obtain a basis that contains the vector (−0x2ddb23aa673107bd, −0x216afa75f66a39d5, 0x10000000000000000). This corresponds to our desired solution (k_1, k_2, K), up to sign, although if the Diffie-Hellman assumption is true we cannot verify its correctness. More detailed explanation. This method is due to Boneh and Venkatesan [BV96], and was the original motivation for their formulation of the Hidden Number Problem. The Raccoon attack demonstrated an attack scenario using this technique in the context of TLS [MBA + 21].
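This computation also appears in the paper's appendix (dh_msb); a cleaned-up Sage rendering of the lattice step, parameterized on the known values, might look as follows. The function name and sign handling are ours; the recovered value is only a candidate for s, since it cannot be verified without solving the underlying Diffie-Hellman problem.

def dh_msb_lattice(p, t, r1, r2, K):
    tinv = inverse_mod(t, p)
    M = matrix(ZZ, 3)
    M[0, 0] = p
    M[1, 0] = tinv; M[1, 1] = 1
    M[2, 0] = (r1 - tinv * r2) % p; M[2, 2] = 2 * K    # the example uses 2^64 here with K = 2^63
    for v in M.LLL():
        if abs(v[2]) == 2 * K:
            k1, k2 = abs(v[0]), abs(v[1])              # candidate offsets, up to sign
            return r1 + k1                             # candidate shared secret s = r1 + k1
    return None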
This method can be adapted to multiple samples with the same number of bits required as the attacks on ECDSA.Knowing the most significant bits of s is not necessary either; we only need the most significant bits of known multiples t i of s.
Discrete log from contiguous bits of Diffie-Hellman secret exponents
This section addresses the problem of Diffie-Hellman key recovery when the known partial information is part of one or the other of the secret exponents. The technique we apply in this section is Pollard's kangaroo (also known as lambda) algorithm [Pol78]. Unlike the techniques of the previous sections, which are generally efficient when the attacker's knowledge of the key is above a certain threshold, and either inefficient or infeasible when the attacker's knowledge of the key is below this threshold, this algorithm runs in exponential time: its running time is the square root of the size of the interval. Thus it provides a significant benefit over brute force, but in practice is likely limited to 80 bits or fewer of key recovery unless one has access to an unusually large amount of computational resources. The Pollard kangaroo algorithm is a generic discrete logarithm algorithm that is designed to compute discrete logarithms when the discrete logarithm lies in a small known interval. It applies to both elliptic curve and finite field discrete logarithms. We will use finite field discrete logarithms for our examples, but the algorithm is the same in the elliptic curve context.
Known most significant bits of the Diffie-Hellman secret exponent.
Problem Setup. Using the same notation for finite fields as in Section 6.1, let A be a Diffie-Hellman public key, p a prime modulus, and g a generator of a multiplicative group of order q modulo p. These values are all public, and thus we assume that they are known. Imagine that we have obtained a consecutive fraction of the most significant bits of the secret exponent a, and we wish to recover the unknown bits of a to reconstruct the secret. In other words, let a = m + r, where m = 2^ℓ·m′ for some known integers m′ and ℓ, and 0 ≤ r < 2^ℓ is unknown. Let w be the width of the interval that r is contained in; here we have w = 2^ℓ. For our concrete example, let p = 0xfef3 be a 16-bit prime, and let g = 3 be a multiplicative generator of the group of order q = (p − 1)/2 = 0x7f79 modulo p. We know a Diffie-Hellman public key A = 0xa163, and we are given the most significant bits of the secret exponent a, but the 8 least significant bits of a are unknown, corresponding to m = 0x1400, ℓ = 8, and r < 2^8. Take some pseudorandom walks. We define a deterministic pseudorandom walk along values s_0, s_1, . . ., s_i, . . . in our multiplicative group modulo p (and the corresponding exponents x_0, x_1, . . . with s_i = g^{x_i} mod p, when known) by choosing a set of random step lengths for the exponents in [0, √w]. For our example, we pseudorandomly generated the lengths (1, 3, 7, 10).
This is a small sample pseudorandom walk generated to run our small example computation. Each step in the pseudorandom walk is determined by the representation of the previous value as an integer 0 ≤ s_i < p.
We run two random walks. The first random walk, which is called the "tame kangaroo", starts in the middle of the interval of exponents to be searched, at s_0 = g^{m + ⌊w/2⌋} mod p. In our example, we have m = 0x1400 and w = 2^8 = 256, so the tame kangaroo begins at s_0 = g^{0x1480} mod p = 0x9581. We take √w steps along this deterministic pseudorandom path, and store the values s_i together with the exponent x_i that is computed at each step so that g^{x_i} ≡ s_i mod p.
The second random walk is called the "wild kangaroo". It begins at the target s′_0 = A = 0xa163 and follows the same rules as above. We do not know the secret exponent a, but at every step of the walk we know that s′_i = A·g^{x′_i} mod p = g^{a + x′_i} mod p. We take at most √w steps along this deterministic pseudorandom path. If at some point the wild kangaroo's path intersects the tame kangaroo's path, then we are done and can compute the result. Compute the discrete log. We know that s_i = s′_j for some s_i on the tame kangaroo's path and s′_j on the wild kangaroo's path. Thus we have g^{x_i} ≡ g^{a + x′_j} mod p, and so a ≡ x_i − x′_j mod q. In our example, the kangaroos' paths intersected at g^{0x1497} and g^{a + 0x36}; we can thus compute a = 0x1461 and verify that g^{0x1461} ≡ 0xa163 mod p. More detailed explanation. Pollard gave the original version of this algorithm in [Pol78]. Teske gives an alternative random walk in [Tes00] that should provide an advantage in theory, but in practice it seems that no noticeable advantage is gained from it.
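A minimal sketch of this kangaroo search on the toy example above. The step table (1, 3, 7, 10) is taken from the text, but the step-selection rule (indexing by s mod 4) and the number of steps are our own choices, so the walk may not reproduce the exact path in the text; the search is probabilistic and may occasionally need to be rerun with more steps.

from math import isqrt

def kangaroo(p, g, A, m, w):
    steps = [1, 3, 7, 10]                      # step lengths from the example
    def step(s):
        e = steps[s % len(steps)]              # step length chosen by the current group element
        return (s * pow(g, e, p)) % p, e
    tame = {}
    s, x = pow(g, m + w // 2, p), m + w // 2   # tame kangaroo starts mid-interval
    for _ in range(4 * isqrt(w)):
        tame[s] = x
        s, e = step(s); x += e
    s, x = A, 0                                # wild kangaroo starts at the target A = g^a
    for _ in range(4 * isqrt(w)):
        if s in tame:
            return tame[s] - x                 # g^(tame exponent) = A * g^x, so a = tame - x (mod q)
        s, e = step(s); x += e
    return None

a = kangaroo(0xfef3, 3, 0xa163, 0x1400, 2**8)
print(hex(a) if a is not None else "no collision; retry with more steps")   # expected: 0x1461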
We expect this algorithm to reach a collision in O( √ w) steps; this algorithm thus takes O( √ w) time to compute a discrete log in an interval of width w.Thus in principle, the armchair cryptanalyst should be able to compute discrete logarithms within intervals of 64 to 80 bits, and those with more resources should be able to go slightly higher than this.
In order to scale to these larger bit sizes, several changes are necessary.First, one typically uses a random walk with many more subdivisions: 32 might be a typical value.Second, van Oorschot and Wiener [OW99] show how to parallelize the kangaroo algorithm using the method of distinguished points.The idea behind this method is that storing the entire tame kangaroo walk will require too much memory.Instead, one stores a subset of values that satisfy some distinguishing property, such as starting with a certain number of zeros.Then the algorithm launches many wild and tame kangaroo walks, storing distinguished points in a central database.The algorithm is finished when a wild and a tame kangaroo land on the same distinguished point.
Elliptic curves. This algorithm applies equally well to elliptic curve discrete logarithms. One can gain a √2 improvement in the complexity of the algorithm as a by-product of the efficiency of inversion on elliptic curves. Since the points P and −P share the same x-coordinate, one can do a pseudorandom walk on equivalence classes for the relation P ∼ ±P. It is straightforward to extend the kangaroo method to solve for unknown most significant bits of the exponent. As before, we have a known A = g^a mod p for unknown a that we wish to solve for. In the case of unknown most significant bits, we know an m such that a = m + 2^ℓ·r for some unknown r satisfying 0 ≤ r < w. The offset ℓ is known. Then we can reduce to the previous problem by running the kangaroo algorithm on the value A′ = A^{2^{-ℓ}} = g^{2^{-ℓ}·m + r} mod p, where 2^{-ℓ} is computed modulo the group order. The case of recovering a Diffie-Hellman secret key in practice with multiple chunks of unknown bits is still an open problem. In theory, finding the secret key in this particular case can be done using a multi-dimensional variant of the discrete log problem. The latter generalizes the discrete logarithm problem in an interval to the case of multiple intervals; see [Rup10, Chapter 6] for further details. In [Rup10], Ruprai analyzes the multi-dimensional discrete log problem for small dimensions. This approach appears to run into boundary issues for multi-dimensional pseudorandom walks when the dimension is greater than five, suggesting that this approach may not extend to the case of recovering many unknown chunks of a Diffie-Hellman exponent.
Conclusion
This work surveyed key recovery methods with partial information for popular public key cryptographic algorithms.We focused in particular on the most widely-deployed asymmetric primitives: RSA, (EC)DSA and Diffie-Hellman.The motivation for these algorithms arises from a variety of side-channel attacks.
Figure 1: Illustration of low-exponent RSA message recovery attack setup. The attacker knows the public modulus N, a ciphertext c, and the padding a prepended to the unknown message m before encryption. The attacker wishes to recover m.
Figure 2: Factorization of N = pq given contiguous known most significant bits of p.
Figure 3: Factorization of N = pq given contiguous known least significant bits of p.
Figure 4: Factorization of N = pq given contiguous known bits of p in the middle.
Figure 5: Factorization of N = pq given multiple chunks of p.
Figure 6: Efficient factorization of N = pq given many chunks of p and no information about q is an open problem.
Figure 7: For small exponent e, the most significant bits of d do not allow full key recovery.
Figure 8: Recovering RSA p given contiguous least significant bits of d.
Figure 9: Factorization of N = pq given non-consecutive bits of both p and q.
Figure 12: (EC)DSA key recovery from signatures where most significant bits of the nonces are known.
Figure 13: (EC)DSA key recovery from signatures where least significant bits of the nonces are known.
Figure 14: (EC)DSA key recovery from signatures where middle bits of the nonces are known.
(EC)DSA key recovery from signatures where multiple chunks of the nonces are known.
Figure 15: Recovering Diffie-Hellman shared secret with most significant bits of secret exponent.
Figure 16: Recovering Diffie-Hellman shared secret with least significant bits of secret exponent.
Figure 17: Recovering Diffie-Hellman shared secret with multiple chunks of unknown bits.
#
S e c t i o n 4 .2 .4 : RSA Key r e c o v e r y from middle b i t s o f p d e f b i v a r i a t e _ c o p p e r s m i t h ( ) : R.<x , y> = ZZ [ ] #p = random_prime ( 2 ^1 6 4 , 2 ^1 6 3 )#q = random_prime ( 2 ^1 6 4 , 2 ^1 6 3 ) #N = p * q #a = l i f t (mod( p , 2 ^1 4 8 ) ) − l i f t (mod( p , 2 ^1 6 + a + y * 2^148 m o n o m i a l _ l i s t = ( f .monomials ( ) f u n c t i o n _ l i s t = [ f ^3 , f ^2 * y , f * ( y ) ^2 , ( y ) ^3 * N, f ^2 , f * ( y ) , ( y ) ^2 * N, f , ( y ) * N, N] M = matrix ( 1 0 ) f o r i i n r a n g e ( 1 0 ) : M[ i ] = [R( f u n c t i o n _ l i s t [ i ] ) ( x * X, y * Y) .m o n o m i a l _ c o e f f i c i e n t (m) f o r m i n m o n o m i a l _ l i s t ]scaled_monomials = [m( x/X, y/Y) f o r m i n m o n o m i a l _ l i s t ] d e f g e t f (M, i ) : r e t u r n sum ( b * m f o r b ,m i n z i p (M[ i ] , scaled_monomials ) ) A = M. LLL ( ) p r i n t ( "N =" , hex (N) ) p r i n t ( " a =" , hex ( a ) ) I = i d e a l ( g e t f (A, 0 ) , g e t f (A, 1 ) ) p r i n t ( I .g r o e b n e r _ b a s i s ( ) ) I = i d e a l ( g e t f (A, i ) f o r i i n r a n g e ( 3 ) ) p r i n t ( I .g r o e b n e r _ b a s i s ( ) ) I = i d e a l ( g e t f (A, i ) f o r i i n r a n g e ( 9 ) ) p r i n t ( I .g r o e b n e r _ b a s i s ( ) )#p r i n t ( g e t f (A, 0 ) ) #p r i n t ( g e t f (A, 1 ) ) d e f p r i n t _ m a t r i x ( ) : RR.<x , y , T, a , R, N> = ZZ [ ] # S e c t i o n 5 . 2 . 1 : L a t t i c e a t t a c k s d e f l a t t i c e _ a t t a c k s ( ) :p , F , C, n ,G, x = ecdsa_params ( ) p r i n t ( " n " , hex ( n ) ) p r i n t ( " p " , hex ( p ) ) p r i n t ( "G" b f b 4 0 c 9 c 621 ee64e65d1e938 ' r1 , s 1 = [ I n t e g e r ( f , 1 6 ) f o r f i n s i g 1 .s p l i t ( ) ] s i g 2 = ' 3 e a 8 7 2 0 a f a 6 d 0 3 c 2 16 f c 6 a a 6 5 b f 2 4 1 e a ' r2 , s 2 = [ I n t e g e r ( f , 1 6 ) f o r f i n s i g 2 .s p l i t ( ) ] p r i n t ( hex ( r 2 ) ) t = I n t e g e r (−inverse_mod ( s1 , n ) * s 2 * r 1 * inverse_mod ( r2 , n ) ) u = I n t e g e r ( inverse_mod ( s1 , n ) 1 ) r e t u r n " v e c t o r ( k1 , k2 ) " , ( hex ( k1 ) , hex ( k2 ) ) # S e c t i o n 5 . 
2 .3 : (EC)DSA key r e c o v e r y from middle b i t s o f t h e nonce k d e f e c d s a _ m i d dl e _ b i t s ( ) : p , F , C, n ,G, x = ecdsa_params ( ) h1 = 0 x 6 0 8 9 3 2 f c f a a 7 7 8 5 d h2 = 0 x e 5 f 8 e c a 4 8 a c 2 a 4 5 c k1 = 0 x 7 3 4 4 5 0 e 2 f d 5 d a 4 1 c s i g 1 = ' 1 a4adeb76b4a90e0 e b a 1 2 9 b b 2 f 9 7 f 7 c d ' r1 , s 1 = [ I n t e g e r ( f , 1 6 ) f o r f i n s i g 1 .s p l i t ( ) ] k2 = 0 x4de972930ab4a534 s i g 2 = ' c 4 e 5 b e c 7 9 2 1 9 3 b 5 1 0202 d6eecb712ae3 ' r2 , s 2 = [ I n t e g e r ( f , 1 6 ) f o r f i n s i g 2 .s p l i t ( ) ]a1 = l i f t (mod( k1 ,2^(64 −15) ) )− l i f t (mod( k1 , 2 ^1 5 ) ) a2 = l i f t (mod( k2 ,2^(64 −15) ) )− l i f t (mod( k2 , 2 ^1 5 ) )p r i n t ( " a1 =" , hex ( a1 ) ) p r i n t ( " a2 =" , hex ( a2) ) b1 = l i f t (mod( k1 , 2 ^1 5 ) ) b2 = l i f t (mod( k2 , 2 ^1 5 ) ) c1 = 2^( −64+15) * ( k1 − l i f t (mod( k1 ,2^(64 −15) ) ) ) c2 = 2^( −64+15) * ( k2 − l i f t (mod( k2 ,2^(64 −15) ) ) ) t = I n t e g e r ( r 1 * inverse_mod ( s1 , n ) * inverse_mod ( r2 , n ) * s 2 ) u = I n t e g e r (−inverse_mod ( s1 , n ) * h1+r 1 * inverse_mod ( s1 , n ) * inverse_mod ( r2 , n ) * h2 ) p r i n t (mod( b1+c1 * 2^(64 −15)−t * b2−t * c2 * 2^(64 −15)+a1−t * a2+u , n ) ) M = matrix ( 5 ) X = 2^15 M[ 0 ] = [ X, X * 2^(64 −15) , −X * t , −X * t * 2^(64 −15) , a1−t * a2+u ] M[ 1 , 1 ] = n * X M[ 2 , 2 ] = n * X M[ 3 , 3 ] = n * X M[ 4 , 4 ] = n A = M. LLL ( ) R.<x1 , y1 , x2 , y2> = ZZ [ ] d e f g e t f (M, i ) : r e t u r n M[ i , 0 ] /X * x1+M[ i , 1 ] /X * y1+M[ i , 2 ] /X * x2+M[ i , 3 ] /X * y2+M[ i, 4 ] I = i d e a l ( g e t f (A, i ) f o r i i n r a n g e ( 4 ) ) r e t u r n I .g r o e b n e r _ b a s i s ( ) # S e c t i o n 6 . 2 : Most s i g n i f i c a n t b i t s o f f i n i t e f i e l d D i f f i e −Hellman s h a r e d s e c r e t d e f dh_msb ( i f t (mod( g , p ) ^( ( d+r ) * c ) ) a1 = DH1 − l i f t (mod(DH1, 2 ^6 3 ) ) a2 = DH2 − l i f t (mod(DH2, 2 ^6 3 ) ) b1 = DH1−a1 b2 = DH2−a2 t = l i f t (mod( g , p ) ^( c * r ) )M = matrix ( 3 ) M[ 0 , 0 ] = p M[ 1 , 0 ] = inverse_mod ( t , p ) M[ 1 , 1 ] = 1 M[ 2 , 0 ] = a1 − inverse_mod ( t , p ) * a2 M[ 2 , 2 ] = 2^64 N = M. 
LLL ( )p r i n t ( " a1 =" , hex ( a1 ) ) p r i n t ( " a2 =" , hex ( a2 ) ) r e t u r n " s o l u t i o n ( k1 , k2 ) i s g i v e n by " , ( hex ( b1 ) , hex ( b2 ) ) #p r i n t (N) #p r i n t (mod( b1−inverse_mod ( t , p ) * b2+a1−inverse_mod ( t , p ) * a2 , p ) )# Other code : s e t t i n g p a r a m e t e r s and o t h e r d e f gen_curve ( ) :p = 0 x f f f f f f f f f f f f f f c 5 done = Fa l s e i = 1 w h i l e not done : p r i n t ( i ) F = F i n i t e F i e l d ( p ) C = E l l i p t i c C u r v e ( [ F ( 0 ) ,F ( 3 ) ] ) i f is_prime (C .c a r d i n a l i t y ( ) ) : done = True r e t u r n p e l s e : p = p r e v i o u s _ p r i m e ( p ) i += 1 d e f ecdsa_params ( ) :p = 0 x f f f f f f f f f f f f d 2 1 f F = F i n i t e F i e l d ( p ) C = E l l i p t i c C u r v e ( [ F ( 0 ) ,F ( 3 ) ] ) n = 0 x f f f f f f f e f a 2 3 f 4 3 7 G = C .l i f t _ x ( 1 )# ( 1 , 2 ) x = 0 x34aad14 0ec2 c3a3 r e t u r n p , F , C, n , G, x d e f dsa_params ( ) : g = 0 x 1 7 d f d b f 2 b b b a e 7 d 6 c 0 5 2 c 2 f d c 5 d 3 2 8 8 d p = 0 x 8 9 5 2 4 b f c a 9 5 8 c 9 1 6 5 a 0 8 7 c c 4 f 8 8 9 a 0 8 f q = 0 x f f f f f f f f f f f f f f c 5 y = 0 x 2 4 1 0 f 1 5 6 3 4 2 2 2 d 3 3 0 0 e a b e b 4 4 2 2 6 c e a 8 x = 0 x 3 8 d b e f c 0 6 2 c d 4 c f 3 d e f dh_params ( ) : s a f e _ p r i m e ( l =128) : p = p r e v i o u s _ p r i m e (2^l ) done = F a l s e i = 0 w h i l e not done : p r i n t ( i ) i f is_prime ( I n t e g e r ( ( p−1) / 2 ) ) : done = True r e t u r n p e l s e : p = p r e v i o u s _ p r i m e ( p ) i += 1 d e f b t o i ( b ) : r e t u r n i n t .from_bytes ( b , " b i g " ) d e f i t o b ( i , b a s e l e n ) : r e t u r n i n t .to_bytes ( i n t ( i ) , l e n g t h=b a s e l e n , b y t e o r d e r =" b i g " ) d e f s i g n ( h , k l e n =32 , return_k=F a l s e ) : p , F , C, n ,G, x = ecdsa_params ( ) d = x h i = b t o i ( h ) k = ZZ .random_element ( 2 * * k l e n ) r = I n t e g e r ( (G * k ) .xy ( ) [ 0 ] ) s = l i f t ( inverse_mod ( k , n ) * mod( h i + d * r , n ) ) s i g = b y t e s .hex ( i t o b ( r , 8 ) ) +" "+ b y t e s .hex ( i t o b ( s , 8 ) ) i f return_k : r e t u r n k , s i g e l s e : r e t u r n s i g d e f gen_dsa_prime ( ) : p = 2 * q * random_prime ( 2 ^6 4 )+1 i = 1 w h i l e not is_prime ( p ) : p = 2 * q * random_prime ( 2 ^6 4 )+1 i += 1 p r i n t ( i ) r e t u r n p d e f gen_sig ( ) : h = i t o b (ZZ .random_element ( 2 ^6 4 ) , 6 4 / 8 ) r e t u r n b y t e s .hex ( h ) , s i g n ( h ) i = l e n ( p i )+1 c a n d i d a t e _ l i s t = [ ( bp+pi , bq+q i ) f o r ( bp , bq ) i n [ ( ' 0 ' , ' 0 ' ) , ( ' 0 ' , ' 1 ' ) , ( ' 1 ' , ' 0 ' ) , ( ' 1 ' , ' 1 ' ) ] ] f o r new_pi , new_qi i n c a n d i d a t e _ l i s t : i f l e n ( new_pi ) <= l e n ( p ) and p[− i ] != ' ?' and p[− i ] != new_pi[− i ] : 1 ' ) ] ] f o r new_dpi , new_dqi i n dpdq_candidates : i f l e n ( new_dpi ) <= l e n ( dp ) and dp[− i ] != ' ?' and dp [− i ] != new_dpi[− i ] : c o n t i n u e i f l e n ( new_dqi ) <= l e n ( dq ) and dq[− i ] != ' ?' and dq [− i ] != new_dqi[− i ] : c o n t i n u e f o r new_pi , new_qi i n pq_candidates :
Table 1: Visual table of contents for key recovery methods for public-key cryptosystems.
"Computer Science",
"Mathematics"
] |
Go Simple and Pre-Train on Domain-Specific Corpora: On the Role of Training Data for Text Classification
Pre-trained language models provide the foundations for state-of-the-art performance across a wide range of natural language processing tasks, including text classification. However, most classification datasets assume a large amount of labeled data, which is commonly not the case in practical settings. In particular, in this paper we compare the performance of a light-weight linear classifier based on word embeddings, i.e., fastText (Joulin et al., 2017), versus a pre-trained language model, i.e., BERT (Devlin et al., 2019), across a wide range of datasets and classification tasks. In general, results show the importance of domain-specific unlabeled data, both in the form of word embeddings and language models. As for the comparison, BERT outperforms all baselines in standard datasets with large training sets. However, in settings with small training datasets a simple method like fastText coupled with domain-specific word embeddings performs equally well or better than BERT, even when BERT is pre-trained on domain-specific data.
Introduction
Language models pre-trained on large text corpora form the foundation of today's NLP (Gururangan et al., 2020; Rogers et al., 2020). They have been shown to provide state-of-the-art performance on most standard NLP benchmarks (Wang et al., 2019a; Wang et al., 2019b). However, these models require large computational resources that are not always available and have important environmental implications (Strubell et al., 2019). Moreover, there is limited research on the applicability of pre-trained models in classification tasks with small amounts of labelled data. Some related studies (Lee et al., 2020; Nguyen et al., 2020; Alsentzer et al., 2019) investigate whether it is helpful to tailor a pre-trained model to the domain, while others (Sun et al., 2019; Chronopoulou et al., 2019; Radford et al., 2018) analyse methods for fine-tuning BERT to a given task. However, these studies perform evaluation on a limited range of datasets and classification models and do not consider scenarios with limited amounts of training data.
In particular, this paper aims to estimate the role of labeled and unlabeled data for supervised text classification. Our study is similar to Gururangan et al. (2020), who investigate whether it is still helpful to tailor a pre-trained model to the domain of a target task. In this paper, however, we focus our evaluation on text classification and compare different types of classifiers on different domains (social media, news and reviews). Unlike other tasks such as natural language inference or question answering that may require a subtle understanding, feature-based linear models are still considered to be competitive in text classification (Kowsari et al., 2019). However, to the best of our knowledge there has not been an extensive comparison between such methods and newer pre-trained language models. To this end, we compare the light-weight linear classification model fastText, coupled with generic and corpus-specific word embeddings, and the pre-trained language model BERT (Devlin et al., 2019), trained on generic data and domain-specific data. Specifically, we analyze the effect of training size on the performance of the classifiers in settings where such training data is limited, both in few-shot scenarios with a balanced set and keeping the original distributions. In both cases, our results show that a large pre-trained language model may not provide significant gains over a linear model that leverages word embeddings, especially when these belong to the given domain.
Supervised Text Classification
Given a sentence or a document, the task of text classification consists of associating it with a label from a pre-defined set. For example, in a simplified sentiment analysis setting the pre-defined labels could be positive, negative and neutral. In the following we describe standard linear methods and explain recent techniques based on neural models that we compare in our quantitative evaluation.
Supervised machine learning models
Linear models. Linear models such as SVMs or logistic regression coupled with frequency-based handcrafted features have been traditionally used for text classification. Despite their simplicity, they are considered a strong baseline for many text classification tasks (Joachims, 1998;McCallum et al., 1998;Fan et al., 2008), even more recently on noisy corpora such as social media text (Çöltekin and Rama, 2018; Mohammad et al., 2018). In general, however, these methods tend to struggle with OOV (Out-Of-Vocabulary) words, fine-grained distinctions and unbalanced datasets. FastText , which is the model evaluated in this paper, partially addresses these issues by integrating a linear model with a rank constraint, allowing sharing parameters among features and classes, and integrates word embeddings that are then averaged into a text representation.
Neural models. Neural models can learn non-linear and complex relationships which makes them a preferable method for many NLP tasks such as sentiment analysis or question answering (Sun et al., 2019). In particular, LSTMs, sometimes in combination with CNNs for text classification (Xiao and Cho, 2016;Pilehvar et al., 2017), enable capturing long-range dependencies in a sequential manner where data is read from only one direction (referred to as the 'unidirectionality constraint'). Recent state-of-the-art language models, such as BERT (Devlin et al., 2019), overcome the unidirectionality constraint by using transformer-based masked language models to learn pre-trained deep bidirectional representations. These pre-trained models leverage generic knowledge on large unlabeled corpora that can then be fine-tuned on the specific task by using the pre-trained parameters. BERT, which is the pretrained language model tested in this paper, has been proved to provide state-of-the-art results in most standard NLP benchmarks (Wang et al., 2019b), including text classification.
Pre-trained word embeddings and language models
Most state-of-the-art NLP models nowadays use unlabeled data in addition to labeled data to improve generalization (Goldberg, 2016). This comes in the form of word embeddings for fastText and a pretrained language model for BERT.
Word embeddings. Word embeddings represent words in a vector space and are generally learned from shallow neural networks trained on text corpora, with Word2Vec (Mikolov et al., 2013) being one of the most popular and efficient approaches. A more recent model based on the Word2Vec architecture is fastText , where words are additionally represented as the sum of character n-gram vectors. This allowed building vectors for rare words, misspelt words or concatenations of words.
Language models. A limitation to the word embedding models described above is that they produce a single vector of a word despite the context in which it appears. In contrast, contextualized embeddings such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) produce word representations that are dynamically informed by the words around them. The main drawback of these models, however, is that they are computationally very demanding, as they are generally based on large transformer-based language models (Strubell et al., 2019).
…tion (Barbieri et al., 2018), AG News (Zhang et al., 2015), Newsgroups (Lang, 1995) and IMDB (Maas et al., 2011). The main features and statistics of each dataset are summarized in Table 1. Comparison models. As mentioned in Section 2, our evaluation is focused on fastText (Joulin et al., 2017, FT) and BERT (Devlin et al., 2019). For completeness we include a simple baseline based on frequency-based features and a suite of classification algorithms available in the Scikit-Learn library (Pedregosa et al., 2011), namely Gaussian Naive Bayes (GNB), Logistic Regression and Support Vector Machines (SVM). Of the three, the best results were achieved using Logistic Regression, which is the model we include in this paper as a baseline for our experiments.
Training. As pre-trained word embeddings we downloaded 300-dimensional fastText embeddings trained on Common Crawl . In order to learn domain-specific word embedding models we used the corresponding training sets for each dataset, except for the Twitter datasets where we leveraged an existing collection of unlabeled tweets from October 2015 to July 2018 to train 300-dimensional fastText embeddings (Camacho Collados et al., 2020). Word embeddings are then fed as input to a fastText classifier where we used default parameters and softmax as the loss function. As for BERT, we fine-tune it for the classification task using a sequence classifier, a learning rate of 2e-5 and 4 epochs. In particular, we made use of BERT's Hugging Face default transformers implementation for classifying sentences (Wolf et al., 2019) and the hierarchical principles described in Pappagari et al. (2019) for pre-processing long texts before feeding them to BERT. We used the generic base uncased pre-trained BERT model and BERT-Twitter 2 , both from Hugging Face (Wolf et al., 2019).
Evaluation metrics. We report results based on standard micro and macro averaged F1 (Yang, 1999). In our setting, since systems provide outputs for all instances, micro-averaged F1 is equivalent to accuracy.
Analysis
We perform two main types of analysis. First, we look at the effect of training size on the classifiers' performance by randomly sampling different sized subsets from the original labeled datasets (Section 4.1). Then, we perform a few-shot experiment where we compare classifier performance on different sizes of balanced subsets of the training data (Section 4.2). Table 2 shows the results with different sizes of training data randomly extracted from the training set. Surprisingly, classification models based on corpus-trained embeddings achieve higher performance with less labelled data compared to the classifier based on pre-trained contextualised models. However, for cases with more than 5,000 training samples, fine-tuned BERT significantly outperforms the fastText corpus-based classifier, especially when the domain-trained BERT model (i.e., BERT (Twitter)) is used. Further to that, the fine-tuned model's performance improves at a higher rate than the classifier based on corpus-trained embeddings for training sets with more than 2,000 instances. For instance, for the SE-18 dataset, fastText with domain embeddings improves 0.112 micro-F1 points when the entire dataset is used with respect to using only 200 instances, while BERT-Twitter provides a 0.360 absolute improvement. In contrast, fastText with pre-trained embeddings performs similarly to the baseline. This shows the advantage for pre-trained models of being fine-tuned to the given domain and task. Sentences vs. documents. In order to avoid confounds such as the type of input data in each of the experiments, we filter the results by sentences and documents (see Table 1 for the actual split of datasets in each category). Figure 1 shows the results for this experiment. As can be observed, training set size affects both types of input similarly, with BERT being especially sensitive to the training data size.
Few-shot experiment
A few-shot comparison between the performance of classifiers trained on balanced data is shown in Table 3. We balance the dataset for the few-shot experiments to ensure the occurrence of instances for all labels within the training set, even for datasets with 20 labels, when 5-shot and 10-shot experiments are performed. Further, we look at the effect of balanced training data on classifier performance. The results show that balancing the dataset leads to improvements in classification performance with limited training data, especially for BERT. For example, using a subset of 1,000 training instances for the 20 Newsgroups corpus, the macro-F1 for randomly sampled data is 0.42, while the macro-F1 for balanced data (i.e., 50 instances per label) is 0.556. Similarly to the experiments with randomized data samples, fastText based on corpus-trained embeddings is the best performing classification model for very small amounts of balanced labeled data (see Figure 2). However, as the amount of training data increases, the BERT model outperforms fastText on average by 0.0442%. As in the previous experiment, the classification model based on pre-trained embeddings performs poorly compared to the models based on corpus-trained embeddings and those fine-tuned to the task. Further, BERT (Twitter) leads to significant improvements over BERT when only 10 instances per label are used (e.g., for SE-16, BERT (Twitter) has macro-F1 = 0.370, similar to domain-based fastText with macro-F1 = 0.384, versus base BERT with macro-F1 = 0.200).
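A minimal sketch of the balanced (k-shot) sampling step is shown below; the data format and helper name are assumptions made for illustration only.

```python
import random
from collections import defaultdict

def k_shot_sample(examples, k, seed=42):
    """examples: list of (text, label) pairs; returns k instances per label."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    sample = []
    for items in by_label.values():
        sample.extend(rng.sample(items, min(k, len(items))))
    rng.shuffle(sample)
    return sample

# e.g. k = 5, 10 or 50 per label, giving 100, 200 or 1,000 instances for 20 labels
```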
Conclusion and Future Work
In this paper, we analyzed the role of training and unlabeled domain-specific data in supervised text classification. We compared linear models with neural models based on transformer language models.
In settings with small training data, a simple method such as fastText coupled with domain-specific word embeddings appears to be more robust than a more data-consuming model such as BERT, even when BERT is pre-trained on domain-relevant data. However, the same classifier with generic pre-trained word embeddings does not perform consistently better than a traditional frequency-based linear baseline. BERT pre-trained on domain-specific data (i.e., Twitter) leads to improvements over generic BERT, especially in few-shot experiments. For future work, it would be interesting to further delve into the role of unlabeled data in text classification, both in terms of word embeddings (e.g., by making use of meta-embeddings (Yin and Schütze, 2016)) and the data used to train language models (Gururangan et al., 2020). Moreover, this quantitative analysis could be extended to more classification tasks and different models, e.g., larger language models such as RoBERTa (Liu et al., 2019) and GPT-3 (Brown et al., 2020), which appear to be more suited to few-shot experiments. However, the generic domain embeddings tend to fail to represent the meaning of more domain-specific words, which may explain their lower performance. This is confirmed by the nearest neighbour analysis (see Table 5), which showed that the generic domain embeddings do not provide accurate representations of more technical words such as 'Windows' and 'Sun'. In the IMDB reviews, words such as 'Toothless', used within a very specific context, are also not correctly represented by the generic model. Moreover, tweets are rich in abbreviations that have domain-specific meanings, such as 'SF' referring to 'San Francisco'. | 3,109.4 | 2020-12-01T00:00:00.000 | [
"Computer Science"
] |
Accuracy of Dental Models Fabricated Using Recycled Poly-Lactic Acid
Based on the hypothesis that the fabrication of dental models using fused deposition modeling and poly-lactic acid (PLA), followed by recycling and reusing, would reduce industrial waste, we aimed to compare the accuracies of virgin and recycled PLA models. The PLA models were recycled using a crusher and a filament-manufacturing machine. Virgin PLA was labeled R, and the first, second, and third recycles were labeled R1, R2, and R3, respectively. To determine the accuracies of the virgin and reused PLA models, identical provisional crowns were fitted, and marginal fits were obtained using micro-computed tomography. A marginal fit of 120 µm was deemed acceptable based on previous literature. The mesial, distal, buccal, and palatal centers were set as measurement points M, D, B, and P, respectively. The mean value of each measurement point was considered as the result. When comparing the accuracy of R with those of R1, R2, and R3, significant differences were noted between R and R3 at B, between R and both R2 and R3 at P, and between R and R3 at D (p < 0.05). No significant difference was observed at M. This study demonstrates that PLA can be recycled only once owing to accuracy limitations.
Introduction
Over the past few years, dental materials and equipment have evolved remarkably, benefiting both dentists and patients by improving the quality of treatments and reducing treatment times. When creating prosthetics such as crowns and bridges, professionals commonly take impressions after the formation of the abutment tooth or after building the abutment, injecting plaster, and creating a dental model. Impression taking dates back to the 1800s when wax and plaster were the most commonly used materials. However, nonreversible hydrocolloid alginate impression materials extracted from seaweed and reacted with gypsum to form insoluble calcium alginate have been used since the 1900s owing to their low costs and ease of use. These materials still represent the mainstay of dental treatments [1][2][3]. However, the poor dimensional stability of alginate impression materials when used alone for abutment teeth and the difficulty in reproducing margins have led to the applications of union impressions using alginate and agar for abutment teeth [4]. In the late 1900s, a silicone impression material was developed with vinyl polysiloxane as a component. In silicone impression materials, vinyl polysiloxane and polysiloxane hydroxide are additionally polymerized using platinum chloride to create a cross-linked structure and induce hardening [5]. Basapogu et al. [6] reported that the dimensional accuracy of silicone impression materials had an error ranging from 0.6% to 0.2%; however, the dimensional accuracy was better than that of alginate impression materials. Rajendran et al. [7] also performed silicone impressions on implant abutments. The authors reported on the usefulness of silicone impression materials for implant treatments. However, owing to cost and operability issues, impressions using alginate and agar are more commonly used, whereas silicone impression materials are only seldom used [8]. Aroma injection, a paste-type allied alginate impression material, has been developed recently. Chen et al. [9] reported that this material was more consistent than silicone impressions, had a lower contact angle than silicone, was more fluid, and allowed for more seamless impression taking than agar. Plaster models have also been used for dental models since the 1800s. Currently, ordinary gypsum, primarily composed of beta hemihydrate gypsum; hard plaster, primarily composed of alpha hemihydrate gypsum; and ultrahard plaster, are used for various purposes [10,11]. It was also used to record intermaxillary relationships and dental models [12]. For prosthetic dentistry, Taggart introduced the casting method in 1907, which is considered the foundation of current prosthetic treatments [13]. Vojdani et al. [14] reported a marginal fit of 88 ± 11 µm and an internal gap of 77 ± 10 µm for metal crowns cast and fabricated from wax patterns, demonstrating an excellent fit accuracy. Yang et al. reported a good marginal fit for a single metal coping produced by lost wax casting: 93 µm for a Ni-Cr alloy and 52 µm for a noble alloy [15]. Reitemeier et al. [16] reported a 20-year survival rate of 79% in 95 patients with 190 cast single crowns. Thus, dentistry has benefited from advances in materials science. The fabrication of prostheses and models using intraoral scanners (IOSs), computer-aided design/computer-aided manufacturing (CAD/CAM) systems, and 3D printers is now feasible [17]. The first IOS is believed to be the one launched by CEREC in 1985. 
IOSs use confocal, holographic, and shape-from-motion methods to illuminate the surface of an object with a laser, acquire three-dimensional data, and convert the data into polygon information, a set of triangular surfaces. This facilitates the reduced use of plaster casts, less discomfort during impression taking, and digital data storage [18]. It also reduces the risk of errors owing to the absence of plaster expansion and deformation of impression materials in conventional workflows [19]. Di Fiore et al. [20] compared eight IOSs, that is, True Definition, Trios, CEREC Omnicam, 3Dprogress, CS3500, CS3600, Planmeca Emerald, and Dental Wings, with regard to the accuracy of abutments and reported results of 31 ± 8 µm, 32 ± 5 µm, 71 ± 55 µm, 107 ± 28 µm, 61 ± 14 µm, 101 ± 38 µm, 344 ± 121 µm, and 148 ± 64 µm, respectively. In addition, as dentists primarily provide oral care, they are at an increased risk of infection from bodily fluids, aerosols, and droplet infections, such as the currently prevalent COVID-19 [21]. Papi et al. [22] noted that in the traditional workflow, impression materials with blood or saliva and plaster could be sources of infections. Therefore, they reported that the digital workflow, which only requires sterilization of IOS tips, reduces the risk of infection. Furthermore, Joda et al. [23] compared treatment times between IOSs and conventional silicone-based impression taking. They reported that the average working time for a student group was 5 ± 2 min using an IOS and 12 ± 2 min using the conventional method, whereas dentists reported a duration of 5 ± 1 min using an IOS and 10 ± 1 min using the conventional method; both groups had shorter treatment times using IOSs. The widespread use of CAD/CAM has also improved the quality of ceramics and zirconia, allowing for greater precision and a shorter time for crafting dental prosthetics [24,25]. With the advent of digital technology, dental treatments are becoming increasingly effective. Albuha Al-Mussawi et al. [26] mentioned that virtual reality simulators and augmented reality (AR) technology could be applied to dentistry for dental training, education, and the fabrication of technological objects. Furthermore, Ariwa et al. [27] evaluated the accuracy of digital dental models reflected in AR devices, namely head-mounted displays (HMDs) and spatial reality displays (SRDs). They reported that the measurement errors ranged from 0.3 to 2 mm for the HMDs and from 0.02 to 0.6 mm for the SRDs, indicating that the error was significantly higher for the HMDs than for the SRDs. Digitalization in dentistry is expected to accelerate further.
From the perspective of environmental issues, sustainable development goals are attracting attention worldwide. In this study, we focused on one of the targets of Goal 12, "Ensure sustainable consumption and production patterns", which indicates that "by 2030, significantly reduce waste generation through prevention, reduction, recycling, and reuse". Wayman et al. [28] reported that 359 million metric tons (Mt) of plastics were produced in 2018, of which an estimated 14.5 Mt entered the ocean, causing potential harm to host organisms consuming them. Consequently, growing concerns have been raised regarding environmental issues, and attempts are being made worldwide to reduce plastics, for instance, by charging for plastic bags and eliminating plastic straws [29,30]. Research is underway to degrade polyethylene terephthalate and polypropylene food and beverage packaging waste to address the long-term persistence of plastics in the environment [31]. We believe that using IOSs will reduce impression material applications in the future. Plaster models are often replaced by resin models sculpted using stereolithography 3D printers (SLA) and digital light processing (DLP). This is because they are generally considered to exhibit reasonable accuracy. Ishida et al. [32] created a cylindrical pattern mimicking a full crown and compared the material extrusion (MEX) and SLA. They claimed that SLA was more accurate and that MEX had a high surface roughness. They also mentioned the importance of 3D printer performance, as dental 3D printers have better accuracy than private ones. Resin is not recyclable; therefore, resin models can cause industrial waste. However, thermoplastic materials such as those used in MEX are recyclable. Therefore, we used one of the MEXs, fused deposition modeling (FDM) and polylactic acid (PLA) filaments. MEX is applied in medical devices, building structures, automobiles, and aerospace owing to its high printing strength, a wide range of available materials, and low cost per part [33]. However, the use of MEX and PLA to create dental models has not yet been reported in the literature. In a previous study, we reported on the accuracy of fit for PLA, resin, and plaster models [34]: 118 ± 22 µm, 62 ± 16 µm, 50 ± 27 µm for buccal areas; 64 ± 32 µm, 48 ± 24 µm, 76 ± 11 µm for palatal areas; 62 ± 28 µm, 50 ± 17 µm, 78 ± 20 µm for mesial areas; and 86 ± 43 µm, 50 ± 12 µm, and 80 ± 39 µm for distal areas, respectively, suggesting the usefulness of PLA models. PLA is a plant-derived plastic material that is expected to reduce carbon dioxide emissions. It is biodegradable and can dissociate into water and carbon dioxide in a compost environment [35]. PLA filaments can be reused owing to their characteristics [36]. We consider that using MEX and PLA to fabricate dental models, followed by their reuse, would reduce industrial waste. However, assessing the corresponding accuracy for applications in clinical practice is essential.
This study aimed to compare the accuracies of recycled PLA and virgin PLA models.
Materials and Methods
A left upper first molar model (A55A-262, NISSIN, Tokyo, Japan) was attached to a jaw model (Prosthetic Restoration Jaw Model D16FE-500A(GSE)-QF, NISSIN, Tokyo, Japan) as the base model. Impressions of the base models were taken using an IOS (Trios 3®; 3Shape, Copenhagen, Denmark), and resin blocks (ASAHI PMMA DISK TEMP; ASAHI-ROENTGEN IND. CO., LTD., Kyoto, Japan) were machined using CAD/CAM (Exocad®; Exocad, Berlin, Germany; Ceramill Motion 2®; Amann Girrbach, Wien, Austria) based on the recorded stereolithography (STL) data to fabricate provisional crowns. Based on the manufacturer's recommendations, the cement space was set to 0.11 mm, and the margin thickness was set to 0.06 mm. For the PLA model, impressions of the base models were taken using the IOS, and from the data obtained, PLA models were fabricated using 1.75 mm PLA filaments designed for Moment 3D printers (Moment Co., Ltd., Seoul, Republic of Korea) and MEX (Moment M350; Moment Co., Ltd., Seoul, Republic of Korea). Details regarding the filaments and MEX are summarized in Table 1.
In the recycling process, the PLA models were ground using a filament-grinding machine (SHR3D IT; 3devo B.V., Utrecht, The Netherlands), followed by filament production in a filament-making machine (COMPOSER; 3devo B.V.). The manufactured filaments were used to fabricate the PLA models (Figure 1). The model made from virgin PLA was labeled R; PLA was recycled up to three times, and the first, second, and third PLA recycles were labeled as R1, R2, and R3, respectively. Five models for each type were fabricated, amounting to 20 in total. Following the manufacturer's recommendations, the temperature during MEX was set to 225 °C, the lamination pitch was set to 100 µm, and the temperature of the filament-manufacturing machine was set to 170-190 °C. No models were surface treated, and no other materials were added when the filaments were reused.
The marginal fits of the provisional crown and PLA model were used as accuracy measures. A PLA model with a provisional crown was placed perpendicular to the X-ray beam in a micro-computed tomography (CT) tube, and micro-CT (ScanXmate-L080T; Comscantecno Co., Ltd., Kanagawa, Japan) was used for imaging. The same provisional crown was placed on all the models. The occlusal surfaces of the provisional crown and adjacent teeth were fixed using utility wax (GC, Tokyo, Japan). The imaging conditions were as follows: 50 kV, 145 µA, voxel size of 34.5 µm, and magnification of 2.891×. After the images were recorded, the digital imaging and communications in medicine (DICOM) data were obtained for accuracy using a three-dimensional image analysis system volume analyzer (SYNAPSE VINCENT®, FUJIFILM, Tokyo, Japan). The measurement method included loading the DICOM data acquired by micro-CT into SYNAPSE VINCENT®, adjusting the contrast in the 3D viewer, selecting "linear measurement", and determining the marginal fits of the provisional crown and PLA model. In total, four measurement points were set as the mesial center (M), distal center (D), buccal center (B), and palatal center (P) (Figure 2). The average value of each measurement point was used as the result. The accuracy of the model was verified based on Dunnett's test using the bell curve in Excel (Social Survey Research Information Co., Ltd., Tokyo, Japan). Continuous data were expressed as mean ± standard deviation. Differences with a p-value < 0.05 were considered statistically significant.
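For readers who prefer an open-source route, the comparison of each recycled group against the virgin-PLA control can be reproduced with Dunnett's test in SciPy (version 1.11 or later); the study itself used the Excel add-in mentioned above, and the marginal-fit values in the sketch are placeholders, not the measured data.

```python
import numpy as np
from scipy.stats import dunnett  # available from SciPy 1.11 onwards

# Placeholder marginal-fit measurements (µm) for one measurement point.
r  = np.array([ 95, 102,  88, 110,  99])   # virgin PLA (control group R)
r1 = np.array([105, 112,  98, 118, 109])   # first recycle
r2 = np.array([118, 125, 130, 122, 127])   # second recycle
r3 = np.array([135, 142, 128, 150, 139])   # third recycle

# Dunnett's test compares each recycled group against the single control R.
res = dunnett(r1, r2, r3, control=r)
print(res.pvalue)  # one p-value per recycled group; p < 0.05 flags a difference
```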
Discussion
With the widespread use of IOSs and CAD/CAM, the fabrication of prostheses without model creation is now feasible. However, models are still essential for margin, contact, and occlusal adjustments. Numerous reports indicate that the marginal fit discrepancy of CAD/CAM crowns should be less than 120 µm [37][38][39]. However, the results of this study, including standard deviations, exceed 120 µm at all measurement points for R2 and R3.
In MEX, the thermoplastic material is melted and extruded from a hot end to form a printed layer to produce the desired object [40,41]. Alsoufi et al. [42] reported that the shape error of PLA was within 3.00% on each side of a 40 mm (L) × 40 mm (W) × 15 mm (H) specimen, which is excellent accuracy for PLA fabricated by MEX. Only one PLA filament was used in this study. Cicala et al. [43] used MEX and three different commercial filaments to verify the accuracy using the same object. Two filaments that exhibited significant shear-thinning behavior and were correlated with mineral filler formulations printed well, but one had poor accuracy. Cicala et al. reported that differences in additives in the filament manufacturing process led to these differences in accuracy. PLA is hydrolyzed during molding and then degrades into low molecular weight oligomers. The oligomers further decompose into lactide and lactic acid, resulting in the loss of plastic properties. It has also been reported that when PLA is reused, the mechanical properties deteriorate because of hydrolysis and breakage of the reinforcing fibers [44,45]. Agüero et al. reported the following mechanical properties for reused PLA: impact strength (kJ·m−2) of 58 ± 4 for virgin PLA, 56 ± 4 after one recycle, and 36 ± 5 after four recycles. The elongation at break (%) was 10 ± 0.04 for virgin PLA, 9 ± 0.3 after two recycles, and 7 ± 0.9 after four recycles. The authors reported that the material could be recycled up to six times, with a slight degradation in the mechanical properties after one and two cycles but a marked decrease from the fourth cycle [46]. Zhao et al. also reused PLA and reported that the viscosity at 160 °C was approximately 2000 Pa·s for virgin PLA, approximately 750 Pa·s after the first cycle, and approximately 100 Pa·s after the second cycle; moreover, they reported that the viscosity decreased with repeated reuse, and the molecular weight decreased with chain scission, resulting in the degradation of mechanical properties. Therefore, they reported that reuse after the second cycle was difficult [47].
Anderson et al. compared the mechanical properties of virgin PLA and one-time reused PLA. They reported an 11% decrease in the tensile strength, a 7% increase in the shear strength, and a 2% decrease in the hardness of the reused filament, with no differences in the average mechanical properties of one-time reused PLA compared to those of the virgin material. However, they reported an increase in the standard deviation and greater variability in the results for the recycled material [48]. These reports are similar to our results. We believe that the mechanical properties of PLA degrade, and their stability is impaired the more they are reused, resulting in a higher standard deviation. As dental models only tolerate minimal errors in micrometer units, reusing them after the second cycle may be difficult. However, research is underway to add other materials to PLA to compensate for the PLA weaknesses. Beltrán et al. added a chain extender and an organic peroxide to PLA and evaluated its mechanical properties. They discovered that both additives reacted with terminal carboxyl groups in the aged polymer, causing cross-linking, branching, and chain extension reactions. Notably, both additives failed to improve either the viscosity or the thermal stability of the heavily degraded PLA. However, they reported that they could improve the microhardness of the recycled material [49]. Patwa et al. reported that adding 1 wt% crystalline silk nanodisks to a PLA matrix increased the toughness by approximately 65%, elongation by approximately 40%, and tensile strength by approximately 10% [50]. López et al. reported that mixing virgin PLA with 30 wt% recycled PLA and adding an epoxy-based chain extender and microcrystalline cellulose as reinforcements improved the tensile strength by up to 88%, modulus by 127%, and Izod impact strength by 11% [51]. Other studies have focused on adding materials such as metals, carbon, and fibers to PLA to maintain and improve its mechanical properties [52,53]. Furthermore, some studies involve reusing PLA with other materials [54,55]. Thus, research on reusing PLA and adding additives to maintain or improve its mechanical properties is progressing worldwide. The decrease in accuracy after the second recycle in this study could be attributed to the fact that the mechanical properties of PLA are known to deteriorate when reused.
Although minimal progress has been achieved in maintaining the biodegradability and mechanical properties of PLA, we believe it is possible to increase the number of recycling times for PLA, with improvements in the future. To the best of our knowledge, this study is the first to consider the reuse of PLA in dentistry.
PLA is widely used in the medical field, and numerous reports on its good biocompatibility can be found in the literature [56][57][58].
Concerning the use of PLA in dentistry, Benli et al. compared the marginal gaps of PLA, polymethyl methacrylate, and polyetheretherketone as provisional crowns. The results for PLA, polymethyl methacrylate, and polyetheretherketone were 60.40 ± 2.85 µm, 61 ± 4 µm, and 56 ± 5 µm, respectively, demonstrating the usefulness of PLA crowns [59]. Molinello-Mourelle et al. reported similarly on the usefulness of provisional crowns fabricated using PLA [60]. Crenn et al. examined the mechanical properties of PLA to verify its feasibility for use in provisional crowns. The elastic modulus of PLA is E = 3784 ± 99 MPa, that of nanoparticulate bisacryl resin is E = 3977 ± 878 MPa, and that of acrylic resin is E = 2382 ± 226 MPa. The flexural strength of PLA is Rm = 116 ± 2 MPa, that of nanoparticulate bisacryl resin is Rm = 86 ± 6 MPa, and that of acrylic resin is Rm = 115 ± 21 MPa, indicating mechanical property problems compared to the other two materials [61]. Relatively few reports have been presented on the application of PLA in dentistry, and most focus on its use in provisional crowns. However, the glass transition temperature of PLA is known to be 50-80 °C [62,63]. Improving the crystallinity of PLA increases its heat resistance. Notably, methods adopted to improve crystallinity include plasticizing modification and adding nucleating agents [64,65]. Among these, plasticizing modification is the most effective approach to improve crystallinity. However, the approach is also reported to lower the glass transition temperature [66]. Xu et al. reported that adding ethylene butyl methacrylate glycidyl methacrylate terpolymer and talc as nucleating agents for PLA increased the heat deformation temperature from 58 °C to 139 °C, while the glass transition temperature remained almost unchanged [67]. Various other heat resistance analyses have been conducted. However, no straightforward method has been identified to definitively improve the glass transition temperature [68,69]. Additionally, when improving heat resistance in the future, it must be ensured that the additives used to achieve heat resistance do not impair the biodegradability of PLA [70]. Placing PLA crowns in the oral cavity is therefore challenging due to heat resistance issues. Instead, we consider PLA more effective when used for models.
PLA models are typically created using MEX. However, MEX is known to release volatile organic compounds (VOCs) during the molding process [71,72]. Ding et al. reported that the mass yields of VOCs emitted during MEX for PLA, acrylonitrile butadiene styrene, and polyvinyl alcohol were 0.03%, 0.21%, and 2%, respectively, at 220 °C [73]. Wojtyła et al. reported that the main VOC emitted from PLA was methyl methacrylate, which accounted for 44% of the total emissions. Thus, it is essential to keep laboratory rooms unoccupied and ventilated during molding and to restrict the simultaneous use of several MEX processes [74]. Notably, filaments left in an environment with 60-70% humidity for two weeks will show degraded printing quality. Suharjanto et al. reported that filament storage using medium-density boards prevents and reduces moisture absorption by the PLA filament and extends filament life, helping to maintain the printing system. Note that the accuracy after molding varies depending on the storage method [75].
In this study, we measured the marginal fit between the provisional crown and the model. However, it is necessary to measure the accuracy of the entire model in the future. Liu et al. examined the geometric accuracy of monkey tooth roots. After scanning the monkey's maxilla with cone-beam CT and segmentation of the incisor roots, titanium implants were fabricated using laser powder bed fusion (PBF), a metal additive manufacturing method. The extracted teeth and 3D-printed implants were scanned with a micro-CT and compared with the original segmented STL data. Results were reported as 91 ± 5% for the segmented versus printed tooth and 67 ± 11% for the segmented versus actual tooth. They found that monkey denticles are small and difficult to segment with high precision and that irregular shapes, surfaces, and technical challenges make it difficult to delineate regions of interest and cause deviation errors [76]. In the future, measuring the overall accuracy of the base model and the PLA model after molding will be necessary. This study has some limitations: the mechanical properties of PLA could not be verified, and PLA could not be investigated with additives. These will be the topics of future research.
Conclusions
Sustainable development goals are attracting global attention in terms of environmental issues. Digital technology has led to improved accuracy in prosthetic treatment and shorter treatment times. However, SLA and DLP are widely employed in dentistry, and the resulting models are considered industrial waste. Therefore, we used MEX and PLA to reduce industrial waste in dentistry. Notably, PLA is a plant-derived plastic material that is expected to reduce carbon dioxide emissions. Further, biodegradable PLA filaments break down into water and carbon dioxide in a composting environment, and their properties allow them to be reused. This study examined the accuracy of MEX and PLA models in dentistry and the system for their reuse. The results show that PLA models made with MEX are within the acceptable range of 120 µm up to the first cycle and can be reused for up to one cycle. PLA may be considered the new material of choice in dentistry. The accuracy of MEX could be improved, and additives could be added to filaments to promote their reusability. This may reduce the industrial waste generated by dentistry.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical issues. | 5,713.4 | 2023-03-25T00:00:00.000 | [
"Materials Science"
] |
Evaluation of Cavity Formation and the Use of Cut-off Wall to Reduce the Risk of Washing Subsurface Fine Material
This study presents the results of mapping numerous cavities and distress features detected in the Qassim area, Saudi Arabia. The phenomenon was observed near a school building and a residential area and became a serious risk to occupants and residents. The survey was carried out using geotechnical techniques, which included advancing rotary boreholes to depths of 23 m to 30 m with sampling and testing. The evaluation process also included resistivity imaging profiles using 2D electrical resistivity measurements. Results obtained from this research showed a thick top layer of silty clayey sand soil rich in gypsum and carbonate, representing a hazardous and high-risk soil type. The percentage of fines likely to be washed out as a result of chemical disintegration and exposure to a significant hydraulic gradient was of great concern. Assessment was made using a combined geotechnical and geophysical approach, in addition to chemical tests. Based on the data collected and the analysis of test results, a practical solution was suggested to solve this problem. The use of a "cut-off wall" to reduce the level of subsurface scour and cavity formation was found appropriate. The depth of the cut-off wall was determined based on the subsurface geological profile. Advantages of this approach, and concerns that need to be considered in adopting typical solutions, are presented.
Introduction
This study was conducted to investigate the cause of cracks and subsoil collapse that appeared recently at a residential area in Authal Center, Al Qassim Region, Central Saudi Arabia. The site investigation was conducted for locations in close vicinity to coordinates N 26° 31' 17.8'', E 43° 41' 22.2'' (see Figure 1).
Figure 1. Study area map (after [1]).
The area was reported to have unusual subsidence and near-surface cavities. The cause of these faults and collapses was evidently not related to human activities and was expected to be a natural phenomenon. The investigation program included geotechnical engineering and geological assessment. In addition to these methods, a 2D geophysical resistivity imaging system was used. The following goals were planned to be achieved: 1) Evaluating the thickness of the surface soil layer.
2) Evaluating the subsurface geological features.
3) Evaluating the depth of groundwater and surface water, if present.
4) Evaluating the seriousness of current collapse and void formation hazards, at present and in the future. A 2D resistivity imaging system was used, being one of the best-known systems currently available for producing cross-sections in different directions in investigated areas and environmental explorations. A geotechnical engineering study was conducted, and both field and laboratory works were carried out. Results obtained through geophysical and geotechnical methods showed that the entire problem is related to the nature of the near-surface soil layer. Loose spots of clayey sand were formed due to the loss of fine material being washed away. The subsurface soil was subjected to a great hydraulic head difference created by topography and rainfall. When the soils get wet and liquefy, interstitial pores increase, and hollow tiny voids and vugs develop into fissures, cracks, and large voids that then collapse over time (Figure 2).
Chemical disintegration of the subsoil material can play a significant role in cavity formation, especially when calcium salts, including carbonates and sulfates, are present.
Calcium carbonate (CaCO3) reacts with CO2 and water to form soluble calcium bicarbonate, Ca(HCO3)2. This causes solid material to dissolve and wash away with water.
This soluble compound is then washed away with the rainwater. This form of weathering is called 'carbonation'.
Calcium oxide can also react with acidic oxides such as sulfur dioxide to give calcium sulfite:
CaO + SO2 → CaSO3
Geology Setting
The Arabian Peninsula is formed of two main structures: the first is the Arabian Shield, which covers nearly 40% of the Arabian Peninsula in the west, and the second is the sedimentary formation that covers the remaining parts of the Kingdom, located dominantly towards the east. The Arabian Shield consists of solid basement rocks of Proterozoic Eon (Precambrian) age [2], which are overlain by rocks dating back to the Paleozoic, Mesozoic, and Cenozoic eras, forming the rocks of the sedimentary basin in the sedimentary cover.
The study area lies on recent deposits of Quaternary age, directly above the Sara Formation of Early Silurian age. The Sara Formation is characterized and strongly influenced by uplifting tectonic movements. It is formed of different constituents, which include two main components: 1) Shale and silt, exposed at Jabal Khanasir Sara in the Al Qassim Region.
2) Tillite sandstone, exposed at Jabal as Zarqa, east of Hail. These rocks show an abundance of cross-bedding structures.
The local site geology of the study area indicated the presence of Quaternary deposits of sand, clay, and silt, followed by weakly cemented sandstone. The sandstone cementation improves with depth, and the rock becomes sound and intact beyond 10 m below ground level.
The area under investigation lies on recent deposits of an active khabra of Quaternary age (Figure 3). It is located in a low area compared to the surrounding topography. During the rainy season, the khabra is crossed by surface runoff that drains rainfall and water flowing from the upstream drainage basin, in which the silt and clay carried by wadis settle and natural vegetation flourishes.
The khabra deposits are typically silty and clayey, with a small amount of eolian sand and no pebbles or gravel. They resemble sand sheets of different sizes.
Geotechnical Tools
The field work included advancing four boreholes ranging in depth from 23 to 30 m below the ground surface, as well as a few open test pits excavated to a depth of 2 to 3 m below grade level. The boreholes and test pits were distributed in a way that covered the area next to the school, which showed many cavities and sinkholes. The term sinkhole is used here to denote a near-surface cavity in which the arching has collapsed, leaving an open hole. Figure 4 presents the drilling rig used, an Acker AD II, mounted on an International 4-wheel-drive truck.
2D Resistivity Imaging System
The Syscal R1 system was used in the field work. It is considered one of the most widely used systems at present for producing cross-sections in two different directions, and it is mainly used for water and subsurface exploration. The Syscal R1 is a multi-node resistivity imaging system featuring an internal switching board for 72 electrodes. The system is designed to automatically perform pre-defined sets of resistivity measurements with roll-along capability, using a 3-meter electrode spacing.
This instrument is provided with three special programs that help in designing imaging surveys and in transferring data between the instrument and a personal computer or laptop. They are used for initial analysis of the data output in the field, before using the dedicated resistivity imaging inversion program (Res2Dinv). The latter is one of the most advanced programs for processing and analyzing resistivity output data.
Boring Works
Drilling within the upper layers was done by the wash boring method; when a hard soil layer was encountered, rotary drilling was continued using a tri-cone bit, with casing, to help cross the hard layer. When firm rock was reached, drilling continued using a double-tube core barrel of two-inch size (T2 76). This core barrel helps in extracting good-quality core samples.
Four (4) open test pits were excavated, 2-3 meters in depth, in order to study the constituents of the subsurface soil, to observe possible cavities, to provide a geologic and geotechnical description, and to supply the laboratory with some undisturbed samples for testing (Figures 5 and 6).
A Standard Penetration Test (SPT) was also conducted. It is a common testing method used to estimate the relative density of soils and approximate shear strength parameters. The test uses a thick-walled sample tube with an outside diameter of 50 mm, an inside diameter of 35 mm, and a length of around 650 mm, ending in a driving shoe. The tube is driven into the ground at the bottom of a borehole by blows from a slide hammer with a weight of 63.5 kg falling through a distance of 760 mm.
The sample tube is first driven 150 mm into the ground, and then the number of blows needed for the tube to penetrate each further 150 mm, up to a depth of 450 mm, is recorded. The sum of the number of blows required for the second and third 150 mm (6 in.) increments of penetration is termed the "standard penetration resistance" or the "N-value". In cases where 50 blows are insufficient to advance the tube through a 150 mm interval, the penetration achieved after 50 blows is recorded. The blow count provides an indication of the density of the ground and is used in many empirical geotechnical engineering formulae.
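The N-value bookkeeping described above can be summarised in a few lines; the input format and refusal handling below are illustrative assumptions, not part of the original field procedure.

```python
def spt_n_value(blows_per_increment):
    """blows_per_increment: blow counts for the three 150 mm drive increments."""
    seating, second, third = blows_per_increment
    if max(blows_per_increment) >= 50:
        # Refusal: the penetration achieved after 50 blows is recorded instead.
        return None
    # The seating drive is discarded; N is the sum of the last two increments.
    return second + third

print(spt_n_value([6, 9, 12]))   # -> 21 (standard penetration resistance)
```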
Resistivity Imaging Works
In order to choose the proper plans and methods for the field work, the study area was surveyed by locating the sites of the risk phenomena using the Global Positioning System (GPS). Due to the distribution of the sinkholes, the survey line directions were assigned to extend from west to east, perpendicular to the main lines joining these holes. The electrical resistivity survey lines were designed as 2D lines, each 177 m long with 3 m electrode spacing. Nine parallel electrical survey lines, spaced 9 meters apart, were acquired, such that the first survey line lies to the north of the site and the last one extends towards the south, as shown in Figure 5. The 2D electrical resistivity profile No. 2, shown in Figure 7, was selected to extend along a direction that crosses boreholes No. 2 and No. 3. Borehole No. 1 is situated 10 meters from the start point of this profile, and the distance between the two wells is 157 meters.
These results match well with those obtained in the geotechnical engineering work. The nature of the subsurface soil, according to what has been reported from the test borings, appears to consist of extremely dense sandy soil layers with argillaceous, calcareous, and gypsiferous cementation. The density of the sandy soil increases with depth, and the soil gradually becomes sandstone. Scanning electron microscope (SEM) photos show the presence of cement between the sand particles and wide interstitial spaces between quartz grains filled by soluble gypsum (Figure 8). No groundwater was detected in the range of the tested soil layers. However, the soil layers might have been water-bearing during pluvial intervals long ago, as the site lies in a low depression at the foothill of nearby mountains. This study suggests that the problem is related to the nature of the soil layers near the earth's surface: when it rains, the layer undergoes loose-sand liquefaction, sometimes called "running sand", causing further loosening of the sand, widening of the interstitial spaces, and formation of vugs that extend gradually into cracks and voids, causing sudden collapses (Figure 9).
The two borehole locations connected by the electrical section were chosen to allow a comparison between the two methods of investigation. This enables a double check and comparison between two different methods. The electrical resistivity lines can give information on layers at depths far greater than those reached by the boreholes. The test locations were selected in such a way as to cover the area of high risk near the school. Cavities were scattered at different places within the site. The works of [3], [4] and [5] provide a good guide for evaluating the electrical resistivity of soils.
Results of Electrical Resistivity Imaging and Geotechnical Engineering
Results of the electrical imaging tests showed that the resistivity was low, in the order of 5 Ω·m, in a vertical section extending from the earth's surface to a depth of 6 meters. This low resistivity is believed to be a result of loose sand that is moist or wet. The resistivity then increases from about 7 meters depth to the bottom of the profile section, indicating more cohesive sandy layers. The evaluation used the approaches and guides of [6]. A collapse potential test was carried out for samples extracted from the test pits in accordance with ASTM D 5333-9 [7]. The collapse index measured at a 300 kPa stress was found to be 6.3% (Figure 10).
The chemical test results presented in Table 1 provide good estimates of the anions and cations. The percentages of gypsum and carbonate content are significant.
Due to the chemical nature of the subsurface material and the flow of water driven by a significant hydraulic gradient, it was decided to recommend a solution that would reduce the risk to the existing school.
One of the solutions suggested, based on the detailed field study, was to dig a trench with a depth ranging between 2.5 and 3 meters and a width of 60-80 centimeters around the building walls, line it with an imbricated waterproof insulator, and then fill the empty space with a low-permeability soil, e.g., a mixture of sand and bentonite or any appropriate clayey soil that does not contain gravel or angular grains that could damage the waterproof layer. This method is known as a "cut-off wall". It is effective in preventing the hydraulic rush of subsurface water that would cause soil washing and pore space formation (Figure 11).
Conclusion and Recommendation
This study investigated the cause of near-surface cavity formation close to the foundation level at the site of a school building in the central part of Saudi Arabia. It was found that the site was rich in soluble salts, including gypsum and calcareous material. The site was also subjected to surface and subsurface water flow with a rather high hydraulic gradient. A comprehensive study using geotechnical and geophysical methods, in addition to physical and chemical laboratory tests, was performed.
A practical solution was suggested to solve the problem of near-surface cavity formation using the "cut-off wall" method. The trench, with a depth of 2.5-3.0 meters and a width of 60-80 centimetres around the outside school walls, filled with a sand-bentonite slurry mixture, is expected to intercept the flow of water and reduce the energy of flow. The wall can be extended to a satisfactory depth where no cavities were reported. A waterproof insulator or geotextile membrane can be used as a liner to protect fine material from being washed away. Construction of the cut-off wall all around the school will provide satisfactory protection. The rise and fall of groundwater under the school will not cause any serious problems, as the fines will be trapped at the bottom sandstone layer.
Figure 2. Typical sinkhole close to the ground surface.
Figure 3. A geologic map of the study area within recent khabra deposits.
Figure 4. Acker AD II drilling rig operating on site.
Figure 5. Satellite image showing the location of the 2D electrical resistivity profiles and borehole locations.
Figure 6. Digging an open test pit at the study area.
Figure 8. View of the expansive soil as seen in an SEM at 2000× magnification.
Figure 9. Typical soil profile as obtained for borehole 3.
Figure 11. Schematic diagram for the proposed cut-off wall. | 3,477.6 | 2013-04-30T00:00:00.000 | [
"Geology"
] |
Family ownership concentration and real earnings management: Empirical evidence from an emerging market
Abstract The paper examines the effect of family ownership concentration (FMOC) on real earnings management (REM) in manufacturing firms listed on Bursa Malaysia (formerly known as Kuala Lumpur Stock Exchange). Data are gathered from 1,056 firm-year observations for the four-year period from 2013 to 2016. The feasible generalised least square estimation is used to examine the relationships. The results show that FMOC is negatively and significantly associated with REM. This evidence supports the alignment hypothesis that FMOC mitigates managerial earnings management by preventing real activities manipulation. However, the finding of the current study is contrary to the claim that family-controlled firms have lower earnings quality. This study extends previous empirical research by examining the effect of different levels of family control on REM in an emerging market and provides evidence that family firms have less incentive to engage in REM practices. The findings imply that earnings reported in the financial statements of Malaysian manufacturing family firms are more reliable as these firms do not manipulate earnings through real business activities. Policymakers may consider the results of the current study that show family-controlled firms have the motivation to self-monitor their business and avoid earnings manipulation activities. Investors may benefit from this evidence and invest in family firms. Future studies may extend the sample to cover other sectors to check the consistency of the findings. In addition, the paper uses data from Malaysia, a country characterised as a family-controlled market. Thus, the findings may not be similar to those of countries with lower FMOC.
ABOUT THE AUTHOR Belal Ali Abdulraheem Ghaleb is an Assistant Professor of Accounting and Auditing at Hodeidah University, Yemen. His research interests include financial accounting and reporting, corporate governance, earnings management, and auditing. Hasnah Kamardin is an Associate Professor of Accounting at Tunku Puteri Intan Safinaz School of Accountancy, College of Business, Universiti Utara Malaysia, Kedah, Malaysia. Her research interests include firm performance, corporate governance, financial reporting quality, and auditing. Mosab I. Tabash is currently working as MBA Director at the College of Business, Al Ain University, UAE. He is also a Risk Management Supervisor at the university level. His research interests include Islamic banking, monetary policies, financial performance, risk management, and investments.
PUBLIC INTEREST STATEMENT
This paper examines the effect of family ownership concentration on real earnings management in the Malaysian market. Family ownership concentration is a unique feature of the Malaysian market, where earnings management is more pervasive. Inconsistent empirical results regarding the effect of family control on financial reporting quality motivated conducting this research in Malaysia. Thus, the study aims to check whether the alignment theory or the entrenchment theory fits the Malaysian market. The study uses secondary panel data that were hand-collected from annual reports and Thomson Reuters Datastream. The results reveal that family ownership concentration is negatively and significantly associated with REM, supporting the alignment theory. The results are the same at different levels of family ownership concentration and under alternative regression techniques. The findings of the current study may benefit regulators and policymakers as well as investors interested in family businesses.
Introduction
Quality of financial reporting in family businesses is a prominent issue in recent research, and earnings management (EM) is considered an important trait of financial reporting quality (Cohen et al., 2008; Lin & Shen, 2015; Zang, 2012). Researchers classify earnings management into accrual earnings management (AEM) and real earnings management (REM). Recent empirical evidence shows that firms prefer to engage in REM rather than AEM because REM is less detectable, even though it is more costly than AEM (Cohen et al., 2008; Ipino & Parbonetti, 2017). Although several studies have examined the effect of family ownership concentration (FMOC) on EM through discretionary accruals (Ali et al., 2007; Hashmi et al., 2018; Wang, 2006), recent work is now looking at the effect of family control on the practice of REM and AEM (Achleitner et al., 2014; Razzaque et al., 2016), and reports mixed results. The differences in findings are explained by the different institutional settings, which may play a substantial role in the monitoring function and the earnings reporting process. It is suggested that findings in developed countries may not be readily generalised to developing countries due to differences in the degree of ownership concentration and institutional environment (Chi et al., 2015; Fan & Wong, 2002).
Malaysia is an interesting setting for research on family businesses and financial reporting quality due to its high levels of ownership concentration in the hands of families, individuals, and the state; family firms make up about 70% of Malaysian companies (Amran & Che Ahmad, 2009; Claessens et al., 2000). Malaysia ranks seventh globally in terms of the number of family firms (CSRI, 2017). Thus, the agency problem arising from the conflict of interest between managers and shareholders (type I) may not be prevalent in Malaysia. Instead, the agency problem arising from the conflict of interest between minority and majority shareholders (type II) could be serious (Claessens et al., 2002). Chi et al. (2015) reveal that ownership concentration, low transparency, ineffective corporate governance, and weak legal systems in East Asia provide greater incentives for controlling shareholders to manipulate earnings. Enomoto et al. (2015) provide evidence that REM is more prevalent among Malaysian firms than among those in other markets. Therefore, family ownership concentration is expected to significantly influence financial reporting quality in Malaysia.
Even though Abdullah and Wan Hussin (2015) report that family ownership constrains the opportunistic behaviour of managers and also reduces the positive association between related party transactions and REM in the Malaysian market, the current study differs by investigating the direct effect of FMOC on REM. It employs data from a large sample over four years to answer the research question. It also investigates the effect of FMOC on REM at different levels of concentration. Few studies have investigated the effect of family control on REM as an alternative technique of earnings manipulation, especially in Malaysia. Thus, the current study extends previous studies and investigates whether family-controlled firms in the Malaysian market mitigate or exacerbate REM, and at which level of ownership concentration.
The results show that FMOC is negatively and significantly associated with REM, suggesting that family firms are less likely to engage in REM in Malaysia. The paper provides evidence that family firms report "better quality" earnings than their non-family counterparts. In the additional analyses, the results remain the same at different percentages of family ownership concentration, using individual REM measurements and alternative regression estimations. The findings are in line with the results of recent studies that provide evidence for the positive role of family ownership in reducing EM and producing high-quality financial reporting (Achleitner et al., 2014; Boonlert-U-Thai & Sen, 2019; Hashmi et al., 2018; Mohammad & Wasiuzzaman, 2020). However, they contradict the findings of previous studies which suggest that family firms are associated with higher REM (Eng et al., 2019; Razzaque et al., 2016). The results of the current study imply that earnings reported in the financial statements of Malaysian manufacturing family firms are more reliable, as these firms do not manipulate earnings through real business activities. Investors may benefit from this evidence in making informed investment decisions. Policymakers may consider the results of the current study, which show that family-controlled firms have the motivation to self-monitor their business and avoid earnings manipulation activities.
The rest of this paper is structured as follows. In the next section, we discuss several issues associated with REM. Next, we discuss the theoretical background of family ownership concentration, followed by the development of hypotheses. Then, we describe the research design and discuss the results. The paper concludes with the implications of the results and recommendations for further work.
Real Earnings Management (REM)
The accounting literature splits EM into two groups: accruals-based earnings management (AEM) and REM (Graham et al., 2005;Healy & Wahlen, 1999;Roychowdhury, 2006). AEM occurs when managers use the discretion allowed under generally accepted accounting principles (GAAP) to affect reported earnings (Healy & Wahlen, 1999). However, REM occurs when they use real business activities to manage reported earnings (Graham et al., 2005;Roychowdhury, 2006). Roychowdhury (2006) defines REM as "departures from normal operational practices, motivated by managers' desire to mislead at least some stakeholders into believing certain financial reporting goals have been met in the normal course of operations". REM has received considerable attention recently due to evidence that firms shift their EM practices from AEM to REM (Cohen et al., 2008;Graham et al., 2005;Ipino & Parbonetti, 2017).
Recent studies suggest that firms have, for various reasons, shifted their EM practices from AEM to REM. These reasons include regulatory changes (e.g., Sarbanes-Oxley) that restrict the use of AEM (Cohen et al., 2008; Zang, 2012), tighter accounting standards and the adoption of IFRS (Ewert & Wagenhofer, 2005; Ferentinou & Anagnostopoulou, 2016; Ipino & Parbonetti, 2017), and higher levels of audit quality (Burnett et al., 2012; Chi et al., 2011). However, some studies find that firms use both REM and AEM complementarily (Chen et al., 2012; Das et al., 2017). Researchers claim that REM can be costly to firms and ultimately to shareholders. Several studies find evidence that REM has a negative impact on future cash flows as well as long-term firm value and performance (Roychowdhury, 2006). Thus, this paper investigates the use of REM, particularly in an emerging country where firms' ownership structure is highly concentrated.
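REM in this literature is commonly proxied by abnormal levels of real activities, such as abnormal cash flow from operations estimated from a cross-sectional model in the spirit of Roychowdhury (2006); the sketch below illustrates one such estimation, with the column names and the industry-year grouping introduced as assumptions, since the excerpt does not reproduce the authors' exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

def abnormal_cfo(group: pd.DataFrame) -> pd.Series:
    """group: one industry-year slice with columns cfo, sales, lag_sales, lag_assets."""
    d = group.assign(
        y=group["cfo"] / group["lag_assets"],
        inv_a=1.0 / group["lag_assets"],
        s=group["sales"] / group["lag_assets"],
        ds=(group["sales"] - group["lag_sales"]) / group["lag_assets"],
    )
    fit = smf.ols("y ~ inv_a + s + ds", data=d).fit()
    return fit.resid  # residuals serve as the abnormal (unexpected) CFO proxy

# Typically estimated separately for each industry-year group, e.g.:
# rem_cfo = panel.groupby(["industry", "year"], group_keys=False).apply(abnormal_cfo)
```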
Family ownership concentration and REM
Family firms are characterised by a concentrated ownership structure (Srinidhi et al., 2014), and recent studies show that this concentration has an influence on the quality of financial reporting (Durendez & Madrid-Guijarro, 2018). Two competing hypotheses might explain the relationship between EM and family ownership: alignment and entrenchment hypotheses (Wang, 2006). According to the alignment hypothesis, the presence of larger family shareholdings (represented by the managers) could align the interests of family managers and other principals (shareholders), discouraging management from manipulating earnings. Family firms have more effective monitoring mechanisms and achieve superior performance over non-family firms (Anderson et al., 2003).
Family management members have less incentive to practise EM, and they care about the firm's value and reputation (Alzoubi, 2016;Martinez-Ferrero et al., 2016;Tsao et al., 2019). Further, family firms have superior earnings quality to non-family firms (Boonlert-U-Thai & Sen, 2019; Hashmi et al., 2018). Following the alignment hypothesis, recent empirical studies report a positive role for family control in mitigating REM, as family firms practise less REM than non-family firms (Achleitner et al., 2014;T. Chen et al., 2015;Tian et al., 2018).
However, according to the entrenchment hypothesis, controlling shareholders could expropriate the interests of non-controlling shareholders to increase their wealth, thus encouraging earnings manipulation (Abdullah & Ismail, 2016). Based on the entrenchment effect, managers in family firms have both the incentive and the opportunity to manage earnings; this is because of the traditional notion that family firms are less efficient due to the conflict between controlling shareholders and other shareholders (Fama & Jensen, 1983;Shleifer & Vishny, 1997;Wang, 2006). Wang (2006) argued that family members usually hold significant positions in top management as well as on the board of directors. Thus, due to the weakness in corporate monitoring, these family members are able to manage revenues and expropriate the interests of other shareholders through real business activities. Empirically, Yang (2010) finds an association between larger insider ownership and EM in Taiwanese family firms.
Similarly, Chi et al. (2015) document a positive relationship between family ownership and EM, even though this positive relationship is reduced when independent directors are present on the board. Teh et al. (2017) report that family-controlled firms practise more EM in the Malaysian market through their power and authority over decision making. Razzaque et al. (2016) find that family firms practise more REM than non-family firms in Bangladesh. In a recent study in the US and Chinese markets, the findings show that REM is higher in family firms than in non-family counterparts (Eng et al., 2019).
In sum, greater monitoring by family management is likely to reduce opportunistic managerial behaviour through real business activities. However, previous studies show mixed results about the effect of family ownership concentration on REM. The current study therefore predicts that family ownership concentration significantly affects REM, and the following non-directional hypothesis is proposed: H1: Family ownership concentration significantly affects REM in the Malaysian market.
Sample and data collection
The sample of this study comprises all manufacturing companies listed on the Main Market of Bursa Malaysia over the period 2013 to 2016. We extract manufacturing companies from the Emerging Markets Information Service (EMIS) database, which provides details about each company's sector and main activities 1 . The companies' annual reports available on the Bursa Malaysia website are the primary reference for extracting information on ownership concentration, because data related to ownership and other corporate governance structures in Malaysia are not directly available in online or electronic form. Abdul Rahman (2001) posits that annual reports are an important information source for Malaysian companies. Thus, data related to family ownership concentration and corporate governance variables are extracted from companies' annual reports, while data for other variables are downloaded from Thomson Reuters Datastream. Companies that changed their financial year end during the sample period were excluded. Newly listed and delisted companies during the study period were also excluded because of insufficient data. Companies with missing data during the period from 2011 to 2016 were also excluded, as the REM measurement requires sales data for the years 2011 and 2012. The final sample of 264 companies (1,056 observations) is used as a reference to extract companies' financial data from their annual reports and Datastream. Table 1 summarises the sample selection criteria.
The reason for focusing on the manufacturing sector is that REM appears to be more pronounced in this sector (Brown et al., 2015;Ge & Kim, 2014;Roychowdhury, 2006). Further, overproduction, which is one of the REM strategies, is only available to manufacturing firms (Chen et al., 2014;Jarvinen & Myllymaki, 2016). Furthermore, the manufacturing sector plays a significant role in Malaysian economic growth. According to reports of the International Monetary Fund (2016) and Bank Negara Malaysia (2015), the manufacturing sector contributed 23 per cent of Malaysia's Gross Domestic Product (GDP) and about 80 per cent of Malaysia's exports in 2015. Yatim et al. (2016) report an increase in foreign investments from developed countries such as the US and Germany, particularly in the manufacturing sector. Finally, it was reported that FDI in the manufacturing sector in 2015 was 44.8 per cent of total FDI in Malaysia (Bank Negara Malaysia, 2015). Therefore, it is crucial to study REM in manufacturing companies.
Real earnings management measurement
According to Roychowdhury (2006), firms manage earnings through real business activities by changing the timing or structure of three types of activity: operating, investing and financing activities. The most common measurements of REM are three proxies: discretionary expenses, sales manipulation and overproduction (Cohen et al., 2008;Roychowdhury, 2006). These measurements should be estimated by year and industry. Each industry-year group should contain at least 15 observations to ensure that there are adequate data for estimating the levels of REM (Roychowdhury, 2006). The current study follows Roychowdhury (2006) and uses Standard Industrial Classification (SIC) codes to classify manufacturing firms into two-digit industry groups (SIC 20-39) based on firms' main activities. The classification results in eleven industry groups. Thus, this paper considers these three measurements of REM estimated cross-sectionally for each year and industry group by employing the following models. The normal level of cash flow from operations is estimated as:
CFO_t / Assets_{t-1} = α_0 + α_1 (1 / Assets_{t-1}) + β_1 (Sales_t / Assets_{t-1}) + β_2 (ΔSales_t / Assets_{t-1}) + ε_t   (1)
where CFO_t is the cash flow from operations in period t, Assets_{t-1} is the lagged total assets, Sales_t is the annual sales, and ΔSales_t is the change in sales in year t relative to the sales in year t-1. The abnormal cash flow from operations (ACFO) is the difference between the actual value and the normal level of the cash flow from operations, calculated as the residual from Equation (1), with a smaller ACFO indicating higher REM.
The normal level of production costs is estimated as:
PRC_t / Assets_{t-1} = α_0 + α_1 (1 / Assets_{t-1}) + β_1 (Sales_t / Assets_{t-1}) + β_2 (ΔSales_t / Assets_{t-1}) + β_3 (ΔSales_{t-1} / Assets_{t-1}) + ε_t   (2)
where PRC_t is the sum of the cost of goods sold (COGS_t) and the change in inventory (ΔINV) during the year, and ΔSales_{t-1} is the change in last year's sales relative to the sales of the year before last. The abnormal level of production (APRC) is the difference between the actual value and the normal level of production costs, calculated as the residual from Equation (2), with a larger APRC indicating higher REM.
The normal level of discretionary expenses is estimated as:
DIE_t / Assets_{t-1} = α_0 + α_1 (1 / Assets_{t-1}) + β_1 (Sales_{t-1} / Assets_{t-1}) + ε_t   (3)
where DIE_t refers to the discretionary expenses during period t, defined as the sum of advertising expenses, selling, general and administrative (SG&A) expenses and research and development (R&D) expenses (Cohen & Zarowin, 2010;Roychowdhury, 2006) 2 . Abnormal discretionary expenses (ADIE) are the difference between the actual value and the normal level of discretionary expenses, calculated as the residual from Equation (3), with a smaller ADIE indicating higher REM.
Although Roychowdhury (2006) measured REM using the three proxies mentioned above, recent studies have measured it using an aggregate approach (N. M. Abdullah & Wan Hussin, 2015;Cohen et al., 2008;Eng et al., 2019). According to Cohen et al. (2008), a comprehensive measurement helps to capture the effect of overall REM by computing a single REM variable from all three variables. W. Chi et al. (2011) claim that the three individual REM variables provide richer information, but the REM aggregate indicates the level of overall REM. Further, Eng et al. (2019) argue that the aggregate measure would better capture REM activity than any single measure. Thus, we generate an aggregate measure of REM by multiplying the standardised residuals from the level of cash flow from operations in Equation (1) and discretionary expenses in Equation (3) by −1 and adding them to the standardised residuals of the production costs from Equation (2) (Cohen et al., 2008;Eng et al., 2019). Hence, the overall REM is calculated by the following equation:
REM = (−1) × ACFO* + APRC* + (−1) × ADIE*
where the asterisk denotes the standardised residuals from Equations (1)-(3).
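For illustration only (not the authors' code), the aggregate measure can be obtained by estimating Equations (1)-(3) cross-sectionally within each industry-year group by OLS, standardising the residuals, and combining them as above. The following minimal Python sketch assumes a pandas DataFrame with hypothetical column names in which all variables are already scaled by lagged total assets.

import pandas as pd
import statsmodels.api as sm

def _abnormal(group, y, xcols):
    # Estimate one Roychowdhury-type model by OLS within an industry-year group
    # (the study requires at least 15 observations per group) and return residuals.
    X = sm.add_constant(group[xcols])
    return sm.OLS(group[y], X).fit().resid

def rem_aggregate(df):
    # df: one row per firm-year with hypothetical columns: industry, year,
    # cfo, prod, disexp (dependent variables), inv_assets (1/Assets_{t-1}),
    # sales, dsales, dsales_lag, sales_lag.
    out = df.copy()
    by = df.groupby(["industry", "year"], group_keys=False)
    out["acfo"] = by.apply(_abnormal, y="cfo",
                           xcols=["inv_assets", "sales", "dsales"])
    out["aprc"] = by.apply(_abnormal, y="prod",
                           xcols=["inv_assets", "sales", "dsales", "dsales_lag"])
    out["adie"] = by.apply(_abnormal, y="disexp",
                           xcols=["inv_assets", "sales_lag"])
    # Standardise each residual series and combine: REM = APRC* - ACFO* - ADIE*.
    for c in ["acfo", "aprc", "adie"]:
        out[c + "_std"] = (out[c] - out[c].mean()) / out[c].std()
    out["rem"] = out["aprc_std"] - out["acfo_std"] - out["adie_std"]
    return out

The sketch standardises residuals over the full sample; the paper does not state the exact standardisation level, so this detail is an assumption.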
Family ownership concentration measurement
Family ownership refers to the ratio of shares held by family members over the firm's shares issued (Anderson et al., 2003;Villalonga & Amit, 2006). The Malaysian Companies Act 2016 (section 197, 2A, p. 208) defines family members as "spouse, parent, child, including adopted child and stepchild, brother, sister and the spouse of the director's child, brother or sister". Information related to family members is available in Malaysian firms' annual reports under the section on profiles of directors.
Research model and control variables
This study employs a panel data methodology to test the hypothesis. This methodology addresses potential unobserved firm-level heterogeneity, can handle variability in the data, permits more degrees of freedom, and produces more efficient and consistent results (Baltagi, 2005;Fraile & Fradejas, 2014;Gujarati & Porter, 2009). The panel data methodology has been adopted in previous corporate governance and financial reporting quality studies (S. N. Abdullah & Ismail, 2016;Razzaque et al., 2016). The Hausman specification test is used to select between random effects and fixed effects models; based on the test results, the random effects model was chosen. To detect possible autocorrelation, we employed the Durbin-Watson test. The test value is 1.19, which indicates that there is an autocorrelation problem in the dataset. The Breusch-Pagan/Cook-Weisberg test was also conducted to check for heteroscedasticity, and the results confirmed the presence of this problem in the research models. Since both heteroscedasticity and autocorrelation problems exist in our research model, we employ the Feasible Generalised Least Squares (FGLS) estimation method to correct them (Kouaib & Jarboui, 2016;Mohammad et al., 2016). To this end, we use the following regression model to test the study hypothesis (Table 2 summarises the measurements of variables):
REM_it = β_0 + β_1 FMOC_it + β_2 BIND_it + β_3 ACFE_it + β_4 BIG4_it + β_5 ABDA_it + β_6 SIZE_it + β_7 LEV_it + β_8 ROA_it + β_9 MTBV_it + Industry and Year dummies + ε_it
where REM_it is the aggregate measure of the standardised residuals of the three REM measurements of firm i and year t; FMOC_it is one of two proxies of family ownership concentration: FMOC is the percentage of equity shares held by family members in firm ownership, not less than 20%, and the FMOC dummy equals "1" when family ownership concentration is present in the firm and "0" otherwise. To capture the effect of governance monitoring on REM, three governance monitoring mechanisms are considered: board independence (BIND), audit committee financial expertise (ACFE), and audit quality (BIG4). We include BIND because previous studies have shown that firms with a high proportion of independent directors have lower EM (Garven, 2015;Jaggi et al., 2009;Kang & Kim, 2012). The literature also provides sufficient evidence that firms with a high proportion of financial experts on audit committees engage in lower EM (N. M. Abdullah & Wan Hussin, 2015; J. W. Lin & Hwang, 2010). BIG4 measures whether the firm's auditor is one of the Big Four audit firms and is included in the research model to control for the possible effect of this variable on EM practices. Recent studies state that a trade-off exists between REM and AEM (Cohen et al., 2008), although others indicate that firms use both REM and AEM (Alhadab et al., 2015). Thus, we include AEM as a control variable, represented by the absolute value of discretionary accruals (ABDA) as measured by the Jones (1991) model.
We also control for firm characteristics. Specifically, we control for the effect of firm size (SIZE) because previous empirical studies show that firm size is an important element affecting REM (Roychowdhury, 2006). To control for the possible effect of the firm's performance on REM, we include return on assets (ROA) in the model. Previous studies also find that leveraged firms engage in REM (Anagnostopoulou & Tsekrekos, 2016); we therefore include firm leverage (LEV) in our model. Dechow et al. (2011) find that firms involved in managing earnings have an abnormal market-to-book ratio (MTBV); we include MTBV as a control variable. The model also includes industry and year dummies to control for time and industry effects (Petersen, 2009).
Table 3 presents the descriptive statistics for the variables and univariate tests for comparisons between family and non-family firms to determine the potential mean differences over the sample period 2013-2016. The mean value of REM is 0.000. This is the same as the mean value found in Cohen et al. (2008), which measured REM as a total of the standardised residuals of ACFO, APRC and ADIE. Unlike accrual earnings management, which is usually measured by the absolute values of the residuals, REM is calculated for each industry and year with actual values (positive and negative). In addition, the mean and median values of the REM proxies are represented by the residuals of Ordinary Least Squares (OLS) regressions. Therefore, the mean value of combined REM is almost zero, indicating that manufacturing listed companies in Malaysia practise both upward and downward REM.
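As a rough illustration of the specification diagnostics described above, a minimal Python sketch is given below. The data are synthetic, the variable names are hypothetical, and the paper's own FGLS estimation is not reproduced here; this only shows how the Durbin-Watson and Breusch-Pagan checks can be run on a pooled OLS fit.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan

# Synthetic firm-year data (same size as the study sample, but random numbers).
rng = np.random.default_rng(0)
cols = ["FMOC", "BIND", "ACFE", "BIG4", "ABDA", "SIZE", "LEV", "ROA", "MTBV"]
df = pd.DataFrame(rng.normal(size=(1056, len(cols))), columns=cols)
df["REM"] = rng.normal(size=len(df))

X = sm.add_constant(df[cols])
pooled = sm.OLS(df["REM"], X).fit()

# Durbin-Watson statistic: values well below 2 point to positive autocorrelation.
# (For a proper panel check, residuals should be ordered within firms over time.)
dw = durbin_watson(pooled.resid)

# Breusch-Pagan / Cook-Weisberg test for heteroscedasticity.
lm_stat, lm_pval, f_stat, f_pval = het_breuschpagan(pooled.resid, X)
print(f"Durbin-Watson = {dw:.2f}; Breusch-Pagan p-value = {lm_pval:.3f}")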
Univariate analysis
Family firms display lower mean values of REM than non-family firms, with a significant difference according to an independent t-test. Additionally, family firms differ significantly in terms of leverage (LEV) and profitability (ROA), as these variables are higher for family than non-family firms. Family firms also have lower levels of growth opportunities (MTBV), board independence (BIND), and discretionary accruals (ABDA) than non-family firms, and these differences are statistically significant. However, there is no significant difference in SIZE and ACFE across family and non-family firms. Descriptive data related to dichotomous variables are reported in Table 3.
Table 4 documents the correlation coefficients between variables. The correlation results show that FMOC is significantly and negatively correlated with REM, suggesting that the relationship between FMOC and REM is negative. Importantly, Table 4 shows that multicollinearity is not a serious problem among the variables, as the coefficients are not greater than 0.90 (Hair et al., 2014). Further, variance inflation factors (VIFs) are also calculated to test for multicollinearity; multicollinearity exists when the VIF value is more than 10 (Hair et al., 2014). In the current study, all the VIF values are below 3, as reported in Table 4.
Multivariate analysis
Table 5 shows the results of the regressions used to test the study hypothesis (H1), which predicts that family ownership concentration (FMOC) significantly affects REM in Malaysian manufacturing firms. The regression results reported in Table 5 are based on FGLS, which corrects the heteroscedasticity and serial correlation problems in the research model (Kouaib & Jarboui, 2016;Mohammad et al., 2016). The Wald chi-square value is strongly significant, showing that the model is valid. Consistent with H1, we find that FMOC (percentage) and FMOC (dummy) are negatively and significantly associated with REM (p < 0.01) in both models, which suggests that higher family ownership concentration is associated with lower REM. This result is in line with the alignment hypothesis, which suggests that the presence of more family shareholders aligns their interests with those of other shareholders and discourages managers from manipulating earnings. The results are also consistent with those of recent empirical studies finding that family firms have less incentive to practise EM and report better quality earnings than non-family firms (Achleitner et al., 2014;Alzoubi, 2016;Boonlert-U-Thai & Sen, 2019;Chen et al., 2015;Hashmi et al., 2018;Martinez-Ferrero et al., 2016;Tian et al., 2018;Tsao et al., 2019).
Regarding control variables, the results show that firm size is not associated with REM, which is consistent with the findings of Abdullah and Ismail (2016) that firm size is not significantly associated with EM. Table 5 also shows that LEV is positively and significantly associated with REM, which is in line with the findings of previous studies (Anagnostopoulou & Tsekrekos, 2016;Jie et al., 2017). This could be because firms aim to avoid the violation of debt covenants (Koh, 2003). In addition, Table 5 shows that ROA is negatively and significantly associated with REM, suggesting that firms with good performance are less likely to engage in EM (Abdul Rahman & Ali, 2006). Similarly, the study shows that MTBV is negatively and significantly associated with REM. This result is consistent with the results found in Liu and Tsai (2015), suggesting that firms with high growth opportunities are less motivated to practise REM to avoid the adverse effect of any surprise earnings (Abdul Rahman & Ali, 2006).
Notes to Table 5: *** p < 0.01, ** p < 0.05, * p < 0.1. REM = real earnings management measured as an aggregate value of the standardised ACFO (−1), standardised APRC, and standardised ADIE (−1); FMOC = family ownership concentration (percentage and dummy); SIZE = natural log of total assets; LEV = ratio of total debt to total assets; ROA = return on assets; MTBV = market-to-book value ratio; BIND = proportion of independent directors on the board; ACFE = proportion of audit committee members with financial expertise; BIG4 = a dummy variable equal to 1 if the firm's auditor is from one of the Big Four audit firms, zero otherwise; ABDA = absolute value of discretionary accruals (accrual earnings management).
Importantly, corporate governance variables do not appear to influence REM practice in manufacturing firms. For example, BIND does not significantly affect REM. Similarly, ACFE does not support the prediction that an audit committee with a high proportion of financial expertise would improve the quality of financial reporting, including the detection of REM. Audit quality, as measured by BIG4, is also not effective in mitigating REM. These insignificant results suggest that governance monitoring mechanisms are not effective in mitigating earnings manipulation, particularly through real business activities. This could be because ownership concentration limits the corporate governance role in an emerging market. The results also reveal that discretionary accruals (ABDA) are positively associated with REM, suggesting that manufacturing listed firms in the Malaysian market practise both EM types, consistent with evidence that firms use both AEM and REM to manipulate earnings (X. Chen et al., 2013;Roychowdhury, 2006). However, the result does not support the findings documented by some studies that firms shift their EM practice from AEM to REM (Chi et al., 2011;Cohen et al., 2008;Ferentinou & Anagnostopoulou, 2016;Ho et al., 2015;Ipino & Parbonetti, 2017).
Alternative measurements of family ownership concentration
The accounting literature reports different measurements of family ownership concentration (FMOC). As mentioned above, this study measures FMOC by the proportion of shares in the hands of family members, not less than 20 percent, following previous studies (Khan et al., 2015;Setia-Atmaja et al., 2011). We also measure FMOC as a dummy variable that equals "1" if family members own at least 20 percent and "0" otherwise (Abdullah & Ismail, 2016). However, some studies measure FMOC by the percentage of shares owned by family members, not less than 5 percent (Gonzalez & Garcia-Meca, 2014). Martinez-Ferrero et al. (2016) measure family firms as a dummy variable taking "1" if the shareholders are family members or an individual with more than 10 per cent and "0" otherwise. Durendez and Madrid-Guijarro (2018) consider a business to be a family firm if the family holds more than 50 per cent of the capital and family members are present in the management.
These differences in family ownership concentration (FMOC) thresholds may have different effects on the quality of financial reporting. Management with a high percentage of FMOC has potential controlling power over other parties, and thus a greater desire to expropriate other shareholders' wealth through earnings manipulation. Indeed, Razzaque et al. (2016) suggest that different thresholds of family ownership have different effects on REM. Thus, the current study re-examines the regression model with different thresholds for FMOC (5%, 10%, 30%, 40% and 50%) to provide further evidence of the effect of different levels of family ownership concentration on REM. We find that, similar to the main findings, FMOC mitigates REM at all levels of ownership concentration (results are reported in Table 6).
Notes to Table 6: *** p < 0.01, ** p < 0.05, * p < 0.1. Standard errors in parentheses. FMOC ≥5% = family members own 5 per cent or more of the firm's shares; FMOC ≥10% = family members own 10 per cent or more; FMOC ≥20% = family members own 20 per cent or more; FMOC ≥30% = family members own 30 per cent or more; FMOC ≥40% = family members own 40 per cent or more; FMOC ≥50% = family members own 50 per cent or more of the firm's shares.
REM individual measurements
The current paper follows previous studies and measures REM as an aggregate of the three REM measurements proposed by Roychowdhury (2006) (e.g., Chi et al., 2011; Cohen et al., 2008; Eng et al., 2019; Ferentinou & Anagnostopoulou, 2016; Guo et al., 2015; Jie et al., 2017; Kim & Park, 2014; Li et al., 2016). However, Chi et al. (2011) claim that although the REM aggregate indicates the level of overall REM, the three individual REM variables provide richer information. Thus, we re-examine the regression model for each individual measure of REM (ACFO, APRC, and ADIE) after multiplying ACFO and ADIE by −1 to be consistent with APRC. The results are the same as those reported in the main analysis, suggesting that FMOC plays a significant role in reducing REM through its three individual proxies as well as through the aggregate measurement.
Alternative regression approach
Although we employ the FGLS estimation approach in the main analysis, we further employ panel-corrected standard error (PCSE) regression to strengthen our findings. Researchers claim that the PCSE regression approach corrects for autocorrelation and heteroscedasticity problems (Bailey & Katz, 2011). The PCSE regression results confirm the main findings that FMOC significantly constrains REM 4 . The results suggest that FMOC (measured as a percentage and as a dummy) is a significant variable for monitoring managerial behaviour.
Summary and conclusion
The study provides evidence that family ownership concentration has a significant effect on the level of real earnings management (REM) in the Malaysian market. Family-controlled firms are found to practise lower REM than non-family controlled firms. This evidence is consistent across different levels of family ownership concentration, different measures of REM and an alternative regression estimation. The findings support the alignment hypothesis that family members align their interests with those of minority shareholders. The results support the notion that family firms report higher quality earnings and do not manipulate earnings through real activities. Family firms appear to have stronger incentives to reduce information asymmetry, monitor managerial decisions, and avoid a subsequent loss of reputation (Martinez-Ferrero et al., 2016). Policymakers may consider the results of the current study, which show that family-controlled firms are motivated to self-monitor their business and avoid earnings manipulation. Investors may benefit from these results and invest in family firms, as these firms produce reliable earnings that reflect real business outcomes. The findings of the current study are subject to two limitations. First, our sample covers only manufacturing firms; thus, the results may not reflect the situation in other sectors. Nevertheless, the results could be generalised to manufacturing firms in similar emerging markets that share Malaysia's features, especially in Asia. Secondly, our sample is from Malaysia, a country characterised by a family-controlled market; the findings may not be applicable to markets with less family ownership concentration.
4. The results of panel-corrected standard error (PCSE) regression are available on request | 7,946.4 | 2020-01-01T00:00:00.000 | [
"Economics",
"Business"
] |
African pathway to achieve inclusive growth: COMESA case study
Purpose – The relationship between economic growth performance and achieving inclusive growth, especially concerning the poverty rate, is a subject of continuous debate in the economic literature. Some argue that this relationship is deterministic, i.e. achieving economic growth will necessarily reduce poverty and enhance inclusive growth, while others believe that the relationship between growth and poverty is conditional, depending mainly on the state of income distribution in the country, i.e. growth reduces poverty only if it is combined with a significant improvement in distribution. Design/methodology/approach – Africa is a clear example of the nexus between economic growth and poverty reduction. Although many African countries have managed to achieve relatively high growth rates, reaching double digits in some cases, during the last decades, poverty is still widespread in those countries. Of the 30 poorest countries in the world, 24 are African, and about 50% of African people still live below the poverty line. The Common Market for Eastern and Southern Africa (COMESA), which could be considered one of the fastest growing regions in Africa, is not an exception; although the region achieves relatively high growth rates, poverty and inequality are still among the region's main development challenges. Findings – This paper finds that the economic growth achieved in COMESA countries cannot be considered inclusive growth, as it is not combined with adequate enhancement in inclusiveness indicators, and that the structural characteristics of these countries' economies and their inelasticity are the main reasons behind this inefficiency. Originality/value – In this context, this paper aims to evaluate the effectiveness of the economic growth achieved in COMESA countries in delivering inclusive growth and to identify the main factors affecting this relationship by using two-step data envelopment analysis. Although this method was originally developed to evaluate relative economic efficiencies, the main contribution of this paper is the adaptation of data envelopment analysis to evaluate the efficiency of the economic growth achieved in COMESA countries in enhancing inclusive growth dimensions such as poverty, inequality, unemployment, education and health, and then to identify in its second step the main indicators that could explain the variation in efficiency scores.
Introduction
The Common Market for Eastern and Southern Africa (COMESA) is one of the Regional Economic Communities (RECs) in Africa, consisting of 19 countries: Egypt, Burundi, Zimbabwe, Comoros, Congo D.R., Zambia, Djibouti, Seychelles, Eritrea, Swaziland, Ethiopia, Kenya, Libya, Madagascar, Mauritius, Malawi, Rwanda, Uganda and Sudan. It was formed in 1994 to enhance intra-regional trade among its members.
COMESA's origins date back to the mid-1960s, when Eastern and Southern African countries initiated a process to create an Eastern and Southern African economic community. In 1981, the treaty establishing the Preferential Trade Area for Eastern and Southern Africa was signed, entering into force in 1982. The COMESA establishment treaty was signed in 1993 in Kampala, Uganda. It turned into a free trade area in 2000, and in 2009 the COMESA Customs Union was launched in Harare, Zimbabwe.
Although a significant number of COMESA countries manage to achieve relatively high economic growth rates, poverty and inequality are still among the main challenges facing these countries.
This paper tries to evaluate the inclusiveness of the economic growth achieved in COMESA countries, that is, to identify whether this growth is combined with adequate improvement in inclusiveness indicators such as poverty reduction, less inequality and more job opportunities. In this context, the paper is divided into four sections. Section 1 presents the introduction and the definition of inclusive growth, whereas Section 2 presents the literature review of inclusive growth. Section 3 presents the evaluation of the efficiency of COMESA growth rates using two-step data envelopment analysis (DEA). Finally, the conclusion and policy implications of the study are presented in Section 4.
Inclusive growth definition and literature review
Inclusive growth is economic growth that generates significant and sustainable improvement in welfare, and whose fruits are distributed fairly among individuals and groups. In other words, inclusive growth is growth with low and declining inequality, economic and political participation of the poor in the growth process, and benefit-sharing from that process. It is growth that creates economic opportunities and ensures equal access to these opportunities for all groups of society, with equity in the provision of public services, particularly education, health and employment opportunities.
Inclusive growth became a central concern in development literature. However, it still has multiple different definitions. The United Nations Development Program (UNDP) defines inclusive growth as: the growth with low and declining inequality; economic and political participation of the poor in the growth process; and the benefit-sharing from that process (Ali and Son, 2007).
The World Bank defines inclusive growth as growth that is rapidly paced, broad-based across all sectors and inclusive of a large part of the labour force. In this way, the definition includes both macro and micro determinants of economic growth. This implies that inclusive growth means a growth strategy that involves equity, equality of opportunities and social protection. Thus, while short-run inclusiveness is based on income redistribution, in the long run it is based on enhancing labour productivity. So, according to the World Bank approach, inclusive growth is labour-absorbing growth that depends on increasing labour force productivity. The Asian Development Bank adds to this definition of inclusiveness the aspects of gender, ethnic and racial equality (Ngepah, 2017).
In the same context, the Organisation for Economic Co-operation and Development defines inclusive growth as a multi-dimensional concept that goes beyond gross domestic product (GDP) growth to include welfare and other dimensions of people's well-being that allow them to participate productively in the economy and society. It also includes the policy instruments (fiscal and monetary) needed to achieve this inclusiveness (Ngepah, 2017).
While the African Development Bank (AFDB) defines inclusive growth as: [. . .] economic growth that results in a wider access to sustainable socio-economic opportunities for a broader number of people, regions or countries, while protecting the vulnerable, all being done in an environment of fairness, equal justice, and political plurality (AFDB, 2012).
Accordingly, the AFDB strategy to achieve inclusive growth is based on four pillars: economic, social, spatial and political inclusion (Ngepah, 2017). In this context, inclusive growth is a very wide expression that includes not only poverty reduction and income inequality but also a wide range of indicators. Those indicators can be classified into two groups: access indicators, which measure the access or opportunities to participate in growth, such as health, education, governmental efficiency and infrastructure endowment; and distribution indicators, such as poverty and inequality reduction, unemployment reduction and gender equality enhancement.
Although the term "inclusive" can be traced back to the beginning of the 2000s, it was first introduced to highlight the content of pro-poor growth, that is, growth that enables the poor to actively participate in it and benefit from the growth process. Inclusive growth includes both poverty and inequality reduction. However, the idea of inclusiveness, or mainly the triangular relationship between poverty, inequality and economic growth, was discussed earlier than the concept of inclusive growth itself.
For a better understanding of this concept, inclusive growth should be defined in line with the development of economic thought, mainly concerning the relationship between economic growth on one side and poverty reduction and income inequality on the other. Economic thinking in the early years after the Second World War concentrated on how to achieve high rates of economic growth, on the assumption that such growth would improve the standard of living of the whole population and reduce the poverty rate. This understanding changed as these promises were not fulfilled, and problems such as poverty and inequality started to be seen as major challenges facing countries, even those achieving promising growth rates.
The relationship between economic growth, poverty reduction and income inequality can be traced back to the early neoclassical literature. According to the neoclassical model, economic growth is a comprehensive process in which the growth of a certain sector pushes other economic sectors to grow through forward and backward linkages. However, the first attempt to study this relationship directly is what is known as the "Kuznets hypothesis".
According to this hypothesis, economic growth is related to income inequality, and thus poverty reduction, in an inverted U-shaped relationship in which income inequality increases in the early stages of growth, since savings are concentrated in high-income groups, and then improves as growth continues. This shape of the relationship can be explained by two factors: the concentration of savings in high-income groups, which are then transformed into new capital investments that lead to more growth; and the economic structural transformation that accompanies growth, as the traditional economy turns into a more modernised, industrialised one in which labour moves from the less productive sector (agriculture) to the more productive sector (industry), leading to an improvement in standards of living (Kanbour, 2000).
There is also the Lewis (1954) model. According to this model, under the assumption that the economy consists of only two sectors, namely a traditional agricultural sector and an industrial sector, the relationship between economic growth and income inequality goes through two stages. The first stage is characterised by high income inequality, as wages remain unchanged while profits accelerate. Inequality decreases in the second stage, as wages increase because of the reduction of labour supply in the traditional sector. However, this stage will not last forever, because technological changes alter the income distribution again, causing an increase in profits even with higher wages, so the growth rate increases again and improvements in poverty rates and inequality are achieved again (Kanbour, 2000).
On the other hand, many studies argue that the main factor determining the nature of the relationship between economic growth and poverty reduction is the structure of this growth. The effect of any sector's growth depends on its contribution to the economy as a whole, its labour share, and the elasticity of labour to move from the lagging sector to the growing one, thereby enhancing income equality and reducing poverty (Suryahadi et al., 2009).
There is also the trickle-down theory that dominated the development literature in the 1950s and 1960s. According to this theory, countries in their attempts to achieve development have to adopt growth models that transfer the fruits of growth from high-income groups to lower-income groups by increasing the investment expenditure of the groups that benefit directly from growth. Correspondingly, the relationship between growth and poverty is not a direct one; if such a transfer mechanism does not exist, growth will lead to higher income inequality and a wider income gap (Bhagwati, 1988).
With increasing international concern about poverty, theories of pro-poor growth appeared. This concept refers to the type of growth that enhances poor people's standard of living and benefits the poor more than higher-income groups; in other words, growth that is accompanied by better income distribution. Pro-poor growth focusses on the improvement of the income of the poor relative to that of the rich, rather than on the absolute improvement of the poor's income (Ranieri and Almeida Ramos, 2013).
Recently, there has been a common understanding that sustainable development cannot be achieved by economic growth alone and, thus, that even a high growth rate will not reduce poverty unless it is accompanied by an improvement in income distribution. Accordingly, a more comprehensive concept of growth started to be used in the literature: inclusive growth. The scope of inclusive growth is not limited to poverty reduction and income inequality but goes further to cover other dimensions of inequality and discrimination, economically, politically and socially, as mentioned previously in this paper. This growth model is used today as the targeted growth model to achieve sustainable development (Lundstorm, 2009).
Empirically, in the absence of a commonly agreed definition of inclusive growth and of clear or direct indicators for measuring it, there is a lack of empirical studies identifying its status or determinants or explaining variations in countries' performance in achieving inclusive growth. Empirical studies of inclusive growth usually concentrate on the relationship between economic growth on one hand and poverty reduction and income distribution on the other, as in the study of Andree (2017), which investigates the relationship between economic growth and income inequality in 46 sub-Saharan African countries between 2005 and 2015 to measure to what extent economic growth could be considered inclusive and how political and societal factors could explain the variation in inclusive growth. The paper finds that social programmes directed to lower-income groups are the main factor that helps in achieving inclusive growth.
Aslam (2016) uses a vector error correction model to analyse the long-run and short-run effects of education, health, trade openness, inflation, GDP per capita and institutional indicators on inclusive growth in less developed and middle-income Asian countries. The paper concludes that those countries could achieve inclusive growth in the long run but not in the short run, and that the main factors determining the ability of countries to achieve long-run inclusive growth are education, initial GDP growth and institutional quality.
There is also the study of Jalles (2019), which investigates the determinants of inclusive growth episodes between 1980 and 2013 for 78 countries using logit and multinomial probit estimations; it shows that human capital accumulation, redistribution policies, productivity improvements, labour force participation, trade openness and institutional quality are the main determinants of inclusive growth.
Anyanwu (2013) examined factors affecting poverty rates, and thus inclusive growth, in African countries using regression analysis of data for 43 African countries over the period from 1980 to 2011. The study shows that higher levels of income inequality, primary education, mineral rents, inflation and high population growth rates increase the poverty rate in African countries and thus hinder inclusive growth, while real GDP per capita, net official development assistance and secondary education reduce poverty and support inclusive growth.
On the other hand, regarding DEA, there are relatively few studies that aim to evaluate the inclusiveness or sustainability of economic growth. One of these is the study of Burja (2018), which uses DEA to investigate the efficiency of sustainable development in the new member states of the European Union (EU), using economic growth as an input and the Global Competitiveness Index, the Human Development Index and the Environmental Performance Index as outputs. The paper concludes that Romania did not achieve sufficient efficiency in achieving sustainable development using its economic, social and environmental resources, and thus that reducing the gaps between Romania's economy and other EU countries could lead to better harmonisation of the economic, social and environmental components of sustainable development.
There is also the study of Halkos (2015), in which two-stage DEA is adopted using a panel of 20 developed countries for the period 1990-2011 to measure the sustainability of those economies. Production efficiency is first calculated using capital stock and total labour force as inputs and GDP as output; in the second stage, eco-efficiency is calculated using GDP as input and different gas emissions as outputs. The results show that there is large variation between the case studies in the environmental dimension of sustainability and less in overall performance, and that high production efficiency does not imply eco-efficiency in all cases.
There is also the study of Santana (2014), which evaluates the efficiency of the BRICS countries in transforming productive resources and technological innovation into sustainable development using the DEA method. The inputs used were gross fixed capital accumulation, employed population and R&D expenditure, whereas the outputs were GDP, CO2 emissions and life expectancy. The analysis showed that Brazil had the highest economic and social efficiencies, whereas China had the lowest environmental efficiency.
Evaluating the efficiency of achieving inclusive growth in Common Market for Eastern and Southern Africa (COMESA) countries
Method and data description
Although DEA was mainly developed to measure the efficiency of non-profit organisations, such as public educational and health entities, it is also used to measure the efficiency of complex entities with diverse inputs and outputs, where traditional methods of measuring efficiency cannot be applied.
DEA is defined as a "data-oriented" approach for evaluating the performance of a set of peer entities called decision-making units (DMUs), which convert multiple inputs into multiple outputs (Cooper, 2011, Handbook on Data Envelopment Analysis). DEA was introduced for the first time in 1978 by Charnes, Cooper and Rhodes, who developed its basic model, known by their names as the CCR model (Cooper, 2007), assuming that DMUs work under constant returns to scale.
DEA can be interpreted with either input-oriented or output-oriented approaches. The output-oriented approach focusses on how much output can be maximised with the same amount of resources. The output-oriented approach is the appropriate one for inclusive growth efficiency because the principle of cost minimisation does not apply under the market conditions considered here (Cooper, 2007). The original CCR (ratio) model for n DMUs, where DMU_j (j = 1, ..., n) produces outputs y_rj (r = 1, ..., s) using inputs x_ij (i = 1, ..., m), takes the following form (Cooper, 2011):
max h_o = Σ_{r=1}^{s} u_r y_ro / Σ_{i=1}^{m} v_i x_io
subject to:
Σ_{r=1}^{s} u_r y_rj / Σ_{i=1}^{m} v_i x_ij ≤ 1, j = 1, ..., n
u_r ≥ 0, v_i ≥ 0 for r = 1, ..., s and i = 1, ..., m
In this paper, the output-oriented CCR model is used, with one input and the outputs listed below, to evaluate the pathway to achieving inclusive growth in COMESA countries. In choosing the inputs and outputs, the AFDB approach to inclusive growth is adopted; thus, inclusive growth indicators are not limited to the inequality and poverty dimensions but extend to other economic and social dimensions. The input used in this analysis is the real GDP per capita growth rate. The outputs used to represent inclusive growth are: the poverty rate (percentage of population under the national poverty line); the youth unemployment rate (the share of the labour force aged 15-24 without work but available for and seeking employment); and the inequality-adjusted human development index (the human development index adjusted for inequalities in the three basic dimensions of human development).
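For illustration only (this is not the software used in the paper, which relied on the online tool at www.deaos.com), an output-oriented CCR problem can be solved for each DMU as a small linear programme in its envelopment form. The following Python sketch uses scipy and arbitrary toy data; the arrays, their orientation (outputs coded so that larger is better) and the country count are assumptions.

import numpy as np
from scipy.optimize import linprog

def ccr_output_oriented(X, Y, o):
    # X: (m, n) inputs, Y: (s, n) outputs, o: index of the DMU under evaluation.
    # Envelopment form: max phi s.t. sum_j lam_j x_ij <= x_io,
    # sum_j lam_j y_rj >= phi * y_ro, lam_j >= 0.
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [phi, lam_1, ..., lam_n]; linprog minimises, so use -phi.
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_in = np.hstack([np.zeros((m, 1)), X])      # input constraints
    b_in = X[:, o]
    A_out = np.hstack([Y[:, [o]], -Y])           # phi*y_ro - sum_j lam_j y_rj <= 0
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    phi = -res.fun
    return 1.0 / phi  # efficiency score in (0, 1]; 1 means efficient

# Toy example: 1 input (growth rate) and 2 outputs for 4 hypothetical DMUs.
X = np.array([[3.0, 1.5, 6.0, 4.0]])
Y = np.array([[0.50, 0.90, 0.40, 0.60],
              [0.45, 0.42, 0.52, 0.48]])
scores = [ccr_output_oriented(X, Y, j) for j in range(X.shape[1])]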
The data used in this paper are obtained from the African Statistical Yearbook 2019, the World Bank online database and the UNDP online data bank. Table 2 shows the data for the selected output/input indicators. There is a noticeable variation in the economic performance of the COMESA countries, as can be seen from the variation in real GDP growth rates, which range from 1.4% in Burundi to 8.6% in Rwanda. In general, however, most countries of the region have achieved relatively high growth rates: 10 of the 16 countries included in this analysis have a growth rate higher than 4%, and only 2 countries have a growth rate below 2%. The average real growth rate in the region is 4.8%, which can be considered relatively high, especially when compared with the African average growth rate of 4%, the emerging market and developing countries' average of 3.9%, or the world growth rate of 3% [1]. Poverty is still a severe problem in the majority of COMESA countries: the poverty rate exceeds 50% in 7 countries in this sample and is even higher than 70% in two countries, Madagascar and Zimbabwe. On the other hand, only three countries have a poverty rate below 25%, namely Egypt, Uganda and Mauritius, as shown in Figure 1. In general, the region has a moderate level of human development as measured by the human development index (HDI); the lowest value is for Burundi (0.417), while the highest is for Seychelles (0.797). The picture changes when moving to the inequality-adjusted human development index (IHDI): almost all COMESA countries lose a significant part of their HDI level, reflecting the existence of an inequality problem in all dimensions of the HDI. Except for Mauritius, all COMESA countries included in this analysis have an IHDI below 0.450, as shown in Figure 2 (Table 1).
Figure 3 shows the efficiency scores calculated for the input-oriented model with constant returns to scale, using the DEA online software available at: www.deaos.com. The model is applied to 15 COMESA countries owing to the availability of data (Figure 3). Only two countries, the Kingdom of Eswatini and Burundi, achieve a 100% efficiency score, meaning that only in these two countries is the growth rate really commensurate with the country's inclusive growth indicators, while the other COMESA countries included in this analysis achieve low efficiency scores, below 50% (except Comoros, at 56%). Given that the efficiency score is high only in the two countries that actually have the lowest economic growth rates, one can conclude that the efficiency of COMESA countries in transforming economic growth into inclusive growth is very low. This result implies that, even though many COMESA countries achieve relatively good growth rates, this growth cannot be considered inclusive, since it does not translate into adequate poverty reduction, more employment or reduced inequality.
Results and discussion
To analyse the main factors explaining the variation of the efficiency scores in the COMESA region, several indicators are used, mainly to capture both the economic and the institutional factors that may affect the achievement of inclusive growth. In this paper, the global competitiveness index is used to reflect the overall economic performance of the countries, while the corruption perception index is used to reflect the main institutional factor that hinders development and inclusiveness in Africa. The last factor added to the model is industrial value added as a percentage of GDP, which reflects the economic structure of the country. This is done by adopting a two-limit censored regression analysis (Tobit regression) of the following form, where the first-stage efficiency score is the dependent variable and the variables used in the model are identified in Tables 3 and 4:
Eff_i = β_0 + β_1 GCI_i + β_2 CPI_i + β_3 IND_i + ε_i, with Eff_i censored between 0 and 1
where Eff_i is the DEA efficiency score of country i, GCI is the global competitiveness index, CPI is the corruption perception index and IND is industrial value added as a percentage of GDP. As can be noticed from the estimated Tobit model, the economic structure is the only significant indicator that affects the efficiency score or can be used to explain its variation: the more industrialised the economic structure, the better the country can transform its growth into inclusive growth. This can be explained by the fact that the economic transformation to a more industrial economy implies an enhancement in labour productivity relative to the traditional agricultural economy, and thus long-run inclusive growth can be achieved, as assumed by the World Bank approach to inclusive growth.
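Because a two-limit Tobit is not built into the common Python statistics libraries, a minimal maximum-likelihood sketch is given below for illustration. The data are synthetic, the variable names and the censoring limits (0 and 1 for an efficiency score) are assumptions, and this is not the estimation code used in the paper.

import numpy as np
from scipy import optimize, stats

def two_limit_tobit(y, X, lower=0.0, upper=1.0):
    # Fit y* = X beta + e, e ~ N(0, sigma^2), with y observed censored at [lower, upper].
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    k = X.shape[1]

    def negloglik(params):
        beta, log_sigma = params[:k], params[k]
        sigma = np.exp(log_sigma)
        xb = X @ beta
        at_low = y <= lower
        at_up = y >= upper
        mid = ~(at_low | at_up)
        ll = stats.norm.logcdf((lower - xb[at_low]) / sigma).sum()
        ll += stats.norm.logsf((upper - xb[at_up]) / sigma).sum()
        ll += (stats.norm.logpdf((y[mid] - xb[mid]) / sigma) - np.log(sigma)).sum()
        return -ll

    start = np.concatenate([np.zeros(k), [0.0]])
    res = optimize.minimize(negloglik, start, method="BFGS")
    return res.x[:k], np.exp(res.x[k])  # beta estimates (incl. intercept), sigma

# Toy usage with synthetic country-level regressors (e.g. GCI, CPI, industry share).
rng = np.random.default_rng(1)
X = rng.normal(size=(15, 3))
y = np.clip(0.4 + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=15), 0.0, 1.0)
beta, sigma = two_limit_tobit(y, X)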
Conclusion
Although DEA is commonly used in measuring and evaluating economic efficiency, it can also be used to evaluate the macroeconomic performance of countries, to measure environmental performance and to assess progress in poverty reduction.
The results of the analysis show that, even though many COMESA countries are achieving relatively good growth rates, this growth cannot be considered inclusive, as it does not translate into adequate poverty reduction, more employment or reduced inequality.
However, the second stage of the analysis shows that the main indicator that could explain the variation in efficiency levels across COMESA countries is the economic structure of these economies: as the share of the industrial or modern economic sector in a country's GDP increases, the ability of the country to transform its growth into inclusive growth increases. In this context, COMESA countries need to focus not only on achieving high economic growth rates but also on how to structurally change their economies in a way that enhances the industrial sector. This can only be done by giving priority to the industrial sector through a comprehensive economic strategy that targets all factors affecting industry, or barriers that may hinder its development, such as improving technical education or providing more financial and non-financial incentives to attract more industrial investment, both local and foreign. 2. Since the sample size is relatively small, the Shapiro-Wilk test is used to check the normality of the residuals. | 5,531.4 | 2020-11-11T00:00:00.000 | [
"Economics"
] |
Feasibility study of chromium electroplating process in stamping tooling
Due to the great need to reduce production costs, increase productivity and improve product quality, this study began with the development of a surface treatment process for stamping dies, in which the first step was a market study of the available treatment types and their applications. These treatments are intended to stabilise the production process so that there are no variations in production, and to increase the useful life of the dies and their associated tools. It was determined through analysis that certain parts had problems in die drawing which influenced productivity, quality and cost. It was realised that those parts had similar problems and that the treatments could generally minimise such problems. The following step was to apply the treatment to the dies and tools; the results achieved certain goals, managing to stabilise the stamping process for those parts.
INTRODUCTION
General Motors Corporation (GM) is the largest vehicle manufacturer in the world and has been the global industry sales leader since 1931. It designs, builds and sells cars and trucks around the world. Established in 1908, GM today employs about 325,000 people around the world, operates in 32 countries and sells its vehicles in 200 countries. In 2004, it sold nearly 9 million cars and trucks. GM's global headquarters is located in Detroit, USA.
General Motors of Brazil (GMB) is the company's largest facility in South America and its second largest operation outside the United States. On 26th January 2014, it completed 90 years of activities in the country. In 1925, GM came to Brazil and settled in warehouses at Ipiranga, in the city of Sao Paulo. In 1930, it was transferred to Sao Caetano do Sul, in Sao Paulo state. Over the years, it achieved several milestones and became a reference not only in Brazil but also worldwide, through high standards and innovative procedures.
This work was done in the automotive industrial complex at Sao Jose dos Campos, in the stamping area. The complex currently has approximately 7,000 employees working in three shifts, two of which assemble cars, with a daily average of 780 cars. GMB's production at Sao Jose dos Campos is intended to supply the domestic market and to export all over the world. Several problems were found and observed, and tools to receive hard chromium coatings were then selected for this study. Chromium electroplated coatings are widely used in industry for protecting mechanical components against corrosion and wear, for the dimensional recovery of worn components, and in applications where their repellence is required, as in stamping tools and rubber and plastic extruders.
In industry, electroplated chromium coatings with a thickness greater than 10 μm are mainly used; these are called hard chromium to differentiate them from the chromium used as a decorative coating, which typically has a layer thickness between 0.2 and 10 μm. Hard chromium coatings have been used for more than 70 years in industry, and their excellent cost/benefit ratio is proven. The application of chromium as a coating arises from the need to increase the service life of tools, given the high cost of replacing components. Chromium is used as a coating when corrosion resistance is to be combined with lower wear rates. Electrodeposited chromium presents high surface hardness, which can facilitate surface cracking and provoke superficial degradation.
For this reason, this paper proposes the use of an electrochemical technique to determine and evaluate the behaviour of this coating. Electroplating is one of the most widely used methods for obtaining metallic coatings, as it allows the control of important parameters of the deposits, such as chemical composition, phase composition, microstructure and layer thickness. Few works are found in the literature on the electroplating of chromium and the stamping process, especially related to the automotive sector. Deqing et al. (2005) reported their work on the effects of temperature, current density and time on the thickness of nickel, copper and hard chromium coatings produced by a multiple electroplating process. Svenson (2006) listed the properties of chromium and its application in plating.
In Abdel Gawad et al. (2006), PAN-based (polyacrylonitrile) carbon fibre was electrolytically coated with a chromium layer, which was transformed into chromium carbide using an in-situ process. The influence of plating parameters such as current density and plating time on the thickness of the deposited chromium layer was investigated. Alternating pulsed electrolysis was investigated for the surface modification of carbon steel substrates with carbon contents of 0.2, 0.6 and 0.8 mass% (Yagi et al., 2008).
Bin Sobhi (2008) investigated and analysed scrap reduction in automotive parts manufacturing, especially in car door production, where stamping is the main process used. Kumar et al. (2010) developed structural models for effluent treatment systems for electroplating, identifying benefits to electroplaters and to end users. Mandich and Snyder (2010) described the properties, features and applications of electroplated chromium, noting that its deposits rank among the most important plated metals and that it is used almost exclusively as the final deposit on parts. Lin and Hsieh (2011) studied the strength of relationships with partners in supply networks in the automotive industry and their influence on raw material quality. Khodadad and Lei (2014) reported work in which trivalent chromium coatings were deposited on a pure aluminium substrate using a thin zincate interlayer.
Problem statement
The use of stamped parts in the automotive sector is extremely important, so General Motors of Brazil has invested in its stamping units at the Sao Caetano do Sul, Sao Jose dos Campos, and Gravatai facilities to eliminate waste and productivity losses and to continuously improve product quality. In the GMB stamping unit at Sao Jose dos Campos, losses occur during the production process due to problems caused by the stamping tools used in part manufacturing. The stamping tools present wear, gripping, dirt, weld marks, and breakage. These deficiencies generate low availability of stamping tools, a high rate of waste parts, rework, and overuse of stamping oil. Figure 1 shows some of these deficiencies.
Chroming process
According to Newby (2000), chromium plating is produced from a chromic acid solution that contains one or more catalytic anions. The anions have a great influence on chromium deposition, mainly the sulphate found in commercial chromic acid, which may not exceed a certain amount (0.1 mg/m³ of air) in the ratio of chromic acid to sulphate ion. Therefore, it is essential to use free chromic acid and to take into account the content of these anions, which should be low in the chromic acid. A maximum anion content in chromic acid of 0.2% sulphate ion is usually admitted (Weiner, 1973).
According to Silman (1955), the individual concentrations of chromic acid and sulphuric acid in the bath are of secondary importance compared with the main factor, the ratio of chromic acid to sulphuric acid, which needs to be maintained around 100:1. The chromic acid concentration for hard chroming varies from 250 to 350 g/L, with extremely high concentrations of up to 500 g/L used in special cases. The properties of a chromium layer, however, do not depend only on the chromic acid concentration in the electrolyte; they depend, above all, on the catalyst and on the working conditions of the electrolysis, for example current density, temperature, and deposition time (Panossian, 1997).
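As a small worked example of the 100:1 rule, the sketch below converts a chosen chromic acid concentration into the sulphate content it implies; the concentrations used are simply the typical hard-chroming range quoted above, not measured bath values from the suppliers in this study.

```python
def target_sulphate(chromic_acid_g_per_l, ratio=100.0):
    """Sulphate (catalyst) content in g/L implied by a chromic acid : sulphate ratio."""
    return chromic_acid_g_per_l / ratio

# Typical hard-chroming concentrations cited above (250-350 g/L of chromic acid).
for cro3 in (250, 300, 350):
    print(f"{cro3} g/L chromic acid -> about {target_sulphate(cro3):.1f} g/L sulphate")
```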
The first working electrodes were built with a coating, that is, the substrate coated with chromium, together with uncoated electrodes (substrate only). The material used in the manufacture of the uncoated electrodes was 1020 carbon steel (ABNT, 2000). After that, the steps listed below were determined.
MATERIALS AND METHODS
Here, the research classification and the preparation steps of this work are explained. An exploratory-descriptive research methodology was chosen, in which field research and data collection were performed through company data files and through demonstration of improvements in the performance of metallurgical equipment (tools), especially in the process of applying chromium plating. To start the implementation of the chromium plating process on the tools, a survey of company operational data was carried out, covering: (i) the index of losses with waste and scrap relative to the total produced; (ii) the number of returns from the client during the year; (iii) the amount of material released for the experiment; and (iv) the most defective and critical products. Some stamping companies were surveyed in order to obtain documented procedures and to collect data with the suppliers Torata Chromium Plating and Cascadura Coatings, which were evaluated throughout the development of the work.
Development of experiments performed at GMB
Analysis of the company's current process showed that the requirements of internal customers had been only partially met, since faults such as cracks, wrinkles, and tool marks were found on parts during the manufacturing process steps. Based on practices already adopted by companies in the automotive sector, proposals were developed to modify the manufacturing process by changing some of its parameters and evaluating the variables after this parameter reset. All responses generated by the process after the suggested changes were analysed. Recommendations and suggestions for process changes were then given in order to reach the results expected by the company and by the internal clients.
Once the approximate borders of the problem situation were identified, the techniques to be adopted for the full study were also defined, along with the decisions requiring consideration of the findings obtained in the preliminary exploration of chromium plating applied to tooling. It was then possible to define the main phases of the project, briefly described hereafter. To maximise the benefits and minimise the disadvantages of the selected data collection instrument, Torata Chromium Plating and Cascadura Coatings recommended the procedures they had adopted: (i) Focus group - formed by production process engineers, materials engineers, and the leaders of the metalwork and CKD export departments, who have an interest in the issues under study, as well as direct clients of the services provided; (ii) Pre-test - conducted following the standards and guidelines established by Torata Chromium Plating and Cascadura Coatings to establish the clarity, acceptability, and comprehensiveness of the product used (hard chromium); (iii) In loco observation - this observation captured views, information, and product quality characteristics.
Treatment application
Monitoring and analysis were carried out during the production process to develop the treatments to be applied to the stamping surfaces. The experiments indicated which surface treatment applications would result in the best possible efficiency. Therefore, for a more effective study, hard chromium was applied to the entry radius of the draw matrix of the door-opening structure part (Figure 2) in order to analyse the treatment.
After that, the hard chromium layer was removed from the tool and the plating treatment was applied within the matrix radius, according to Figure 3. Another study was also carried out, applying the treatment to the draw stamping of the inner panel part of the trunk cover (Figure 4).
Survey with suppliers
A survey was conducted via the internet and through contacts with other GMB plants to find out which surface treatment companies were prepared to receive large tools. Because the average stamping tool weighs 10 tonnes, it is difficult for many companies to perform this work, so only two companies could be registered: (i) Torata, located in the city of Porto Feliz, in the Brazilian state of Sao Paulo, which performs hard chromium surface treatment on dies with a maximum weight of 16 tonnes; (ii) Cascadura, located in the city of Sorocaba, in the Brazilian state of Sao Paulo, which performs hard chromium plating and tool metallization with a maximum weight of 10 tonnes. These two suppliers provided budgets for the surface treatment to be performed and the execution time.
The stamping setting
Tool adjustment is the most important step in this process, because after the surface treatment it is not possible to modify the stamp without damaging the treatment. The following adjustment procedure was determined before treatment: (i) Punch and matrix: setting across the whole surface to eliminate deformations caused by wear, cracks due to heat treatment, welding, and polishing marks in general; (ii) Press-plate: ring setting, copying the shape of the matrix, setting a controlled material flow, and determining the press-plate equalizers.
Experiments at GMB
The stamping tools to receive surface treatment were defined through monthly quality and productivity graphs indicating the critical parts with the highest rate of problems. The tools were made of GM 238 nodular cast iron (G3500, GMDDS standard method, section 85), with graphite Types I and II and a pearlitic/ferritic matrix structure obtained by heat treatment. It has high mechanical properties, good hardenability, and good surface finish. The material has tensile strength and yield strength behaviour similar to hot-rolled SAE 1040 steel (AISI, 2013) in the as-cast condition. It consists of graphite in the form of nodules (spheres), forms I and II, sizes 6 to 8, according to ASTM A247-10 (ASTM, 2014). The matrix is a pearlitic/ferritic structure with approximately 50% pearlite and a maximum of 5% dispersed carbides. The carbon content ranges are specified for each gauge group in order to control the type and size of the graphite; the variation within a given range is about 0.20%. Magnesium is added to favour the formation of spheroidal graphite. The pieces were stamped on semi-automatic Schüler ES4 mechanical presses with five operations (Figure 5), which have the following specifications: (i) head area: 4,572 × 2,500 mm; (ii) strokes per minute: 7 to 14; (iii) press capacity: 2,000 tonnes; (iv) standard tool height: 1,220 mm; (v) mobile table: 4,500 × 3,000 mm.
Product validation and decision-making
After the parts were stamped, a visual assessment was performed following the internal procedures used at GM for the evaluation of normal production parts. This evaluation took into account surface aspects (deformations, marks, tearing of material, etc.), structural aspects (cracks, remounting, strains), and dimensional and form aspects (number of holes, wrinkles, lack of material). A 100% quality control of all manufactured parts is accomplished by visual assessment during the production process, with the aim of observing any faults, for example pits, cracks, wrinkles, overlaps, etc. The results obtained from this quality control showed a significant contrast between the materials. The use of phosphatised material generated a great reduction in the total number of defects, especially pits, with a reduction from 3.2 to 0.8% of lumps in the total production of parts over the same period of time. It was observed that the lumps are due to tearing of the galvanized coating layer during the stamping process. The images obtained by scanning electron microscopy (MEV) clearly show the low adhesion between the galvanized coating and the base metal.
The use of phosphatised materials, such as the BH 180, 210, 260, and 280 grades used in precision metal stamping, showed greater efficiency, especially with regard to lumps, improving the adhesion of the coating to the base metal compared with materials using only the electro-galvanized layer. It is observed from this study and these tests that the use of phosphatised-layer materials becomes feasible for parts used in external vehicle panels, where quality control is more stringent than for internal parts. The chromium plating applied to the tools has a number of advantages for both the stamped parts and the tools employed in these experiments. The chromed surfaces showed a surface hardness of approximately 900 HV (Vickers), giving the tool high wear resistance and durability.
RESULTS
The results obtained after the chroming processes of 14 tools from the three vehicles selected for the study can now be observed; two study cases are presented here. The external side panel part LD of the Montana car model had low productivity due to interference in the production process caused by the tool, which required intervention to eliminate rework from gripping (Figure 6). After the plating treatment of the matrix entry radii had been carried out, productivity gained 112 parts per hour because the tooling department no longer had to interfere in the production process. Table 1 shows the other gains from the plating surface treatment. The external panel on the right side of the Corsa car model featured a high number of wastes, totalling 37 parts per month (Figure 7). After the chroming surface treatment was applied to the press-plate and to the matrix of the draw stamping, a decrease of 35 parts per month in waste was obtained. Table 2 shows the gains from the chroming surface treatment. The increase in the productivity of the parts studied, compared with the productivity before the surface treatments and after the completion of the work, is then shown (Figure 8).
It can be seen in Table 2 that the productivity of the parts studied in the three vehicles increased by 87 parts per hour, indicated by the letter P in Table 3, which corresponds to a 26.4% gain in productivity. Table 3 presents the legend adopted in this work to indicate the parts studied.
With the work performed, the number of scrapped parts among the parts studied decreased from 354 to 14 parts per month, a 96% reduction. Figure 9 shows this reduction in the number of scraps, and Table 4 shows the legend adopted in this work to indicate the parts studied in Figure 9. Through the surface treatments carried out, the rework of the studied parts was reduced from 712 to 29 parts per month, which is equivalent to a 96% gain; Figure 10 corresponds to the gains for all parts studied. Figure 11 shows the saving achieved by eliminating the lubricating oil known as Green Rust, of which around 800 L/month were used in processing the parts, generating a monthly cost of R$ 7,848.00 (US$ 3,246.06 on February 4th, 2014) at a unit value of 9.81 R$/L (4.06 US$/L on February 4th, 2014).
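The headline figures in this section can be cross-checked with simple arithmetic; the snippet below merely recomputes the reductions and the monthly oil cost from the values reported above and is not based on the underlying company data files.

```python
# Scrap and rework reductions reported for the studied parts
scrap_before, scrap_after = 354, 14
rework_before, rework_after = 712, 29
print(f"scrap reduction:  {100 * (scrap_before - scrap_after) / scrap_before:.1f}%")     # ~96%
print(f"rework reduction: {100 * (rework_before - rework_after) / rework_before:.1f}%")  # ~96%

# Monthly cost of the Green Rust lubricating oil before its elimination
litres_per_month = 800
price_brl_per_litre = 9.81
print(f"oil cost: R$ {litres_per_month * price_brl_per_litre:,.2f} per month")           # R$ 7,848.00
```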
DISCUSSION
It was possible to observe that chromium has a low coefficient of friction, allowing the reduction or even elimination of lubricants and therefore of operating costs; on a chromed surface, due to its high repellence, there is no particle adhesion, which eliminates the risk of fouling and even avoids tool rework. When stamping galvanized sheet metal, the chromium coating does not allow zinc particles to adhere to the tool surface, avoiding the need for polishing and improving the surface quality of the stamped parts. The chromium coating enabled greater stamping efficiency while using cheaper cast irons. The use of cast irons also brings advantages in machining, given the lower hardness of the material, which facilitates its processing, increases the service life of cutting tools, and makes machine use more efficient. Other benefits were observed: (i) reduction or elimination of the need for lubrication in the stamping process, also reducing environmental problems; (ii) reduction of downtime in the stamping process, after reducing the need for polishing during the process due to the low adherence of material, and reduction of faults or problems occurring during stamping, resulting in quality and productivity gains; (iii) increase in tool life and greater ease of tool recovery, with only the chromium layer needing replacement.
Conclusions
The hard chromium surface treatment applied in this work represents an advantage in the stamping of critical parts, achieving stability in production.
From the study conducted on the selected parts, a procedure was defined that helps in deciding which treatment to apply. This procedure makes it possible to apply the surface treatment so as to achieve the best efficiency, increasing the life of the stamping tools and achieving greater process stability. The procedure distinguishes between the two types of treatment used in this project: (i) Plating: used when the part shows constant, localized gripping caused by the radii of the matrices and punch; (ii) Chroming: used when the part shows gripping at the press-plate, scratches on the surface of the punch and matrix, or cracks and wear on the tooling.
It was concluded that all these results and the previously cited gains come from the properties that the chromium plating gave to the tools used in this work, which, as analysed and described above, presented high performance during the process at General Motors. A major efficiency factor in this project lies in the use of high-strength steel sheets, which provide high mechanical properties to the vehicle in smaller thicknesses for the manufacture of structural elements and automotive panels, resulting in vehicles that are more resistant and at the same time lighter and more economical. In order to form these sheets, given their characteristics and properties, it was necessary to develop new techniques designed to ensure efficient production of automotive elements, which would not be possible with conventional techniques. Looking at the production losses of parts due to quality problems, it can be seen that this indicator has increasingly strategic importance in the production chain of the automotive group of which General Motors is part.
This strategic importance is due to the fact that the quality of the final product must be guaranteed, since the stamping unit supplies products not only directly to internal clients such as the metalwork area, but also to external customers of other productive areas of the group, such as its dealers, export parts, and other plants of the corporation in Brazil and South America.
For the years 2011 and 2012, production loss goals were established taking into account the history of the equipment and the process, of 25 and 20% respectively. Production loss is a percentage; in this case a tolerance range was adopted, since this is a measure of the reliability of the production equipment (tools). The actual production and process loss values in the years cited were 30.15 and 22.20% respectively, which demonstrates a performance improvement in product quality and in the process from one year to the next, although reaching the goal remains a challenge.
(i) Choice of the substrate passivation solution; (ii) ideal concentration of the passivation solution; (iii) ideal scanning speed for passivation; (iv) selection of a potential at which the chromium undergoes no chemical reaction and the substrate is passivated; (v) determination of the charge density of substrate passivation for coated and uncoated electrodes, for the calculation of porosity; (vi) determination of the coating thickness; (vii) manual polishing (sandpaper of 600 and 400 grit) for the uncoated electrodes.
Figure 4. Matrix of the trunk back cover (courtesy: General Motors).
Figure 8. Productivity of the parts studied.
Figure 10. Rework of the parts studied.
Figure 11. Representation of the cost of oil consumption.
Table 1. Monitoring of the part gains: external side panel LD.
Table 2. Monitoring of the external panel LD gains. | 5,272.4 | 2014-08-15T00:00:00.000 | [
"Materials Science"
] |
AVERROES’ “EPISTLE ON DIVINE KNOWLEDGE” AS A DIALECTICAL WORK: BETWEEN FORBIDDEN INTERPRETATION AND PHILOSOPHICAL TRAINING
Abstract Averroes’ “Epistle on Divine Knowledge” presents four different dialogues on two textual levels. These dialogues, the syllogistic structure of the arguments in them, and their use of contradictories indicate that the “Epistle on Divine Knowledge” is structured nearly entirely in accordance with the descriptions of dialectic we find in Averroes’ commentaries on Aristotle's Topica. Accordingly, Averroes’ solution to the question of how God can have universal knowledge of particular things is a dialectical account of the distinction between Divine and human knowledge. Moreover, at a crucial point in the “Epistle on Divine Knowledge” Averroes refers to Aristotle, Metaphysics Β, which he considers to be a dialectical exposition of questions on metaphysics. This reference suggests that Averroes sees the “Epistle on Divine Knowledge” as a kind of dialectical inquiry aimed at answering questions that arise at the outset of studying metaphysics. So, while it is possible to view the “Epistle on Divine Knowledge” as a dialectical interpretation of Quran 67:14, its primary purpose is to introduce its readers to metaphysical speculation. Thus it does not violate Averroes’ legal prohibition given in the Decisive Treatise against declaring dialectical interpretations in books available to the general public.
INTRODUCTION
In his Decisive Treatise, Averroes decrees that interpretations (al-taʾwīlāt), of the Law, "ought not to be declared to the multitude (al-ǧamhūr) nor established in rhetorical or dialectical books." 1 Shortly thereafter, Averroes goes so far as to associate declaring "interpretations to those not adept in them" with heresy (al-kufr) on the grounds that it leads to damnation (halāk) in this world and the next. 2 Indeed, so against public dialectic is Averroes that the ideal state Averroes describes in his Commentary on Plato's Republic is one without public dialectic or dialecticians. 3Still, Averroes himself employs dialectical methods not only in scientific works intended for an audience adept in such argumentation, but also in more general works such as Tahāfut al-tahāfut and even in the Decisive Treatise itself.Dialectic is also present throughout Averroes' "Epistle on Divine Knowledge," 4 and 1 Averroes, Decisive Treatise and Epistle Dedicatory, trans.Charles Butterworth (Provo, Utah: Brigham Young University Press, 2001), p. 26, para.45.On the legal form of the Decisive Treatise as a fatwā, see Daniel Heller-Roazen "Philosophy before the Law: Averroes's Decisive Treatise," Critical Inquiry, vol.32 (2006), p. 412-442. 2 Averroes, Decisive Treatise, p. 27, para.47. 3 Yehuda Halper, "Expelling Dialectics from the Ideal State: Making the World Safe for Philosophy in Averroes' Commentary on Plato's Republic," in Alexander Orwin (ed.), Plato's Republic in the Islamic Context: New Perspectives on Averroes' Commentary (University of Rochester Press, 2022), p. 69-86. 4The first editor of this work, Marcus Joseph Müller, gave this treatise the Arabic title, Ḍamīma, "Appendix" in Philosophie und Theologie von Averroes (Munich, 1859).The scribe of the manuscript used by Müller referred to the text as "The question which Abū al-Walīd (may God be pleased with him) mentioned in the Decisive Treatise" (preserved as a subtitle in Müller's edition, p. 128: في الوليد ٔبو ا ذكرها التي المسئلة عنه الله رضى المقال .)فصل Muhsin Mahdi notes that this "is not a formal title and does not form part of the work as written or dictated by Averroes; it is a scribe's explanation."Moreover, Mahdi notes that this work is specifically addressed to "one of his companions," and argues that this companion is in fact the Almohad Caliph Abū Yaʿqūb Yūsuf.Accordingly, he says, that this treatise "was not meant to have a title: it is an epistle dedicatory."See Muhsin Mahdi, "Averroes on Divine Law and Human Wisdom," in Joseph Cropsey (ed.), Ancients and Moderns: Essays on the Tradition of Political Philosophy in Honor of Leo Strauss, (New York: Basic Books, 1984), 114-131, esp.117-118.Charles Butterworth takes up Mahdi's suggestion and gives the work the title "Epistle Dedicatory" in his edition and translation, Averroes, Decisive Treatise and Epistle Dedicatory.The two Hebrew translations of this work, one by Ṭodros Ṭodrosi of Arles and the other anonymous, give their own titles to the work.Ṭodros' title appears as "Treatise on Eternal Knowledge" הקדמון( במדע מאמר in two manuscripts, and בקדום מדעת … מאמר in another manuscript).The anonymous translation, which survives in only one manuscript gives as a title "Epistle on the indeed, Averroes ends the short work by quoting Quran 67:14 ( َق ل ,)خَ which can be translated "Does he (God) not know, he who created, since he is perspicacious and informed?" 
5 In either case, Averroes suggests that the entire "Epistle on Divine Knowledge" and its question about God's knowledge of generated things is in a sense an interpretation of this verse.The "Epistle on Divine Knowledge," then, is a dialectical interpretation of the Quran and as such would seem to be explicitly prohibited from being written down and presented to the multitude according to Averroes' Decisive Treatise.Does Averroes' "Epistle on Divine Knowledge" go against the legal ruling Averroes laid down in the Decisive Treatise?
It is, of course, possible to answer this question using Averroes' own justification for discussing the connection between wisdom and Law and interpretation, viz.that such issues and questions have gained a status of being widely held among people (šuhra … ʿinda al-nās). 6This, indeed, would explain Averroes' use of dialectical arguments in the Decisive Treatise, and perhaps in the Exposition and Incoherence as well.Yet, while these works use some dialectical arguments, they are not thoroughly dialectical in the way of the "Epistle on Divine Knowledge," which we shall see is structured nearly entirely according to the descriptions of dialectic we find in Averroes' commentaries on Aristotle's Topica.That is, the "Epistle on Divine Knowledge" is fundamentally dialectical in a way we do not see in Averroes' other writings and so we may ask why he wrote in this way here and why, moreover, writing such a thoroughly dialectical work is permitted?
Before answering this question, we shall examine the dialectical character of the "Epistle on Divine Knowledge" in light of Averroes' own descriptions of dialectic in his Short and Middle Commentaries on Aristo-Meaning of the Doubt attendant on the Eternal's Knowledge (May He be Exalted)" ית( הקדמון בידיעת הקורה הספק בענין … .)אגרת See Silvia Di Donato, "La tradizione ebraica dell'opuscolo di Averroè sulla scienza divina," in Irene Kajon, Luise Valente, and Francesca Gorgoni (ed.), Philosophical Translations in Late Antiquity and in the Middle Ages.In Memory of Mauro Zonta (Rome: Aracne, 2022), p. 161 and 164, and the discussion on p. 149-150.Both Hebrew translations use the term "Eternal" to mean Divine.In a course I attended on Averroes' Decisive Treatise at the University of Chicago taught by Joel Kraemer and Ralph Lerner in 2003, Prof. Kraemer suggested using the title, "Treatise on Divine Knowledge," relying, as I recall, on Ṭodros' title.Here I have adopted the title, "Epistle on Divine Knowledge," in an attempt to combine these approaches.tle's Topica.Even though the Middle Commentary was probably written after the "Epistle on Divine Knowledge" it is Averroes' most detailed work on dialectic and probably presents his views best, even if they were in less developed form at the time he wrote the "Epistle on Divine Knowledge."Then we shall follow Averroes' comparison of solving the difficulty of the "Epistle on Divine Knowledge" to untying a knot to its source in Aristotle's Metaphysics Β and examine what Averroes has to say about dialectic in his Middle Commentary there.This will allow us to suggest an explanation of the role of dialectic in Averroes' "Epistle on Divine Knowledge" that is consistent with Averroes' philosophical project.
DIALECTIC IN THE "EPISTLE ON DIVINE KNOWLEDGE"
One cannot escape the dialogical structure of the "Epistle on Divine Knowledge," which presents four different dialogues on two textual levels.First, there is an apparent frame dialogue between the author, viz.Averroes, and an unnamed interlocutor, whom some have supposed to be the Caliph Abū Yaʿqūb Yūsuf. 7Nested within that frame, are three other short dialogues.One between a first person plural "us" and someone known as "the adversary" (al-ḫaṣm), another between the first-person plural "us" and the mutakallimūn (para.5), 8 and the third betweem "us" and al-Ġazālī (para.6-7).Each of these dialogues is between two people and each includes a questioner and a respondent.So, even though the term al-ḫaṣm is more frequently used in the context of rhetoric, it is clear that there is no audience here and the adversarial contexts of dialogues 2 and 3 are dialectical, rather than rhetorical. 9Moreover, all four 7 This is suggested by Muhsin Mahdi in "Averroes on Divine Law and Human Wisdom," p. 118-119.This suggestion is repeated by Charles Butterworth in Decisive Treatise, p. xl-xli. 8In fact, para.5 is careful to use the passive voice and the sense that one of the interlocutors is "us" is supplied from context, including from the fact that the "us" عندنا shows up again at the opening of paragraph 7, despite the use of the passive in para.6. 9 Glossarium graeco-arabicum lists ḫaṣm as a frequent translation of ἀντίδικος in the rhetoric and the verbal form, ḫaṣama as translating ἀμφισβητέω in the rhetoric.Note that both Hebrew translations of the "Epistle on Divine Knowledge" render ḫaṣm by baʿal rib (though Ṭodros Ṭodrosi adds the definite article).See Silvia Di Donato, "La tradizione ebraica dell'opuscolo di Averroè sulla scienza divina," p. 141-169.While the Hebrew term, baʿal rib, is often used in the context of rhetoric, it also appears in Samuel ben Judah of Marseilles' Hebrew translation of Averroes' Short Commentary on Aristotle's Topica.See Averroes, Short Commentary on Aristotle's Logical Organon: Topica.Trans.Jacob ben Makhir Ibn Tibbon, revised by Samuel ben Judah of Marseilles, ed.Yehuda Halper (Mahadurot: Modular Hebrew Digitally Ren-dialogues concern a single "doubt," šakk, about God's eternal knowledge of created, i. e., generating things.
A dialectical problem is an inquiry that leads either to choice and avoidance or to truth and cognizance … About [this problem] either people's opinions go any way, or the opinions of the many are opposite those of the wise, or the opinions of the wise are opposite those of the many, or each (sc.wise and many) go opposite with themselves.
Al-Damašqī apparently translates Aristotle's "dialectical problem" (πρόβλημα διαλεκτικὸν) as al-masʾala al-manṭiqiyya, "logical questioning," though ʿAbd al-Raḥman Badawī points to a note above the line that reads al-maḥāwiriyya al-ǧadiliyya, meaning something like "dialectical pivots." 10 Neither reading is clearly relevant for our purposes.Yet, in his Middle Commentary, Averroes restates what is apparently the same passage as follows.
ما شك يلحقه بل لمشهور بحسب بنفسه صدقه معلوما يكن لم ما فهو الجدلى المطلوب المشهور. في
The object of dialectical inquiry is that whose truth is not known in itself according to what is widely-held, but that which is attended by doubt with respect to what is widely-held. 11 Averroes goes on to give examples related to choice, such as whether or under what circumstances wealth or poverty is to be preferred, and examples related to truth and knowledge, such as whether the world is eternal or created, both favorite examples of Aristotle's. Averroes also makes special mention of doubts that occur to believers regarding what is widely-held in their religions. Averroes, then, takes "doubt," šakk, to be central to dialectic, even though it does not play so clearly prominent a role in Aristotle's text, even in Arabic translation. "Doubt" is clearly a framing subject of the "Epistle on Divine Knowledge." Indeed the word šakk appears 13 times in the short, eleven-paragraph text. Moreover, the inquiry of the "Epistle on Divine Knowledge" is centered around this doubt. Indeed, the text seems to be divided fairly evenly into two parts: (1) the determination of the doubt (taqrīr hāḏa al-šakk, paragraphs 1-5) and (2) the solution to the doubt (ḥall hāḏa al-šakk, paragraphs 6-11). Even if we follow Charles Butterworth's division of the text into three parts (in addition to what he sets as introductory and concluding paragraphs), 12 we take paragraphs 8-10 as "Consequences" of the solution of the doubt in paragraphs 6-7. That is to say, the treatise is clearly an inquiry into "that which is attended by doubt." The doubt in question, whether and how God knows created things, is moreover one about something "whose truth is not known in itself according to what is widely-held." What makes something "widely-held" (mašhūr)? This concept is loosely connected to Aristotle's notion of ἔνδοξα as developed in the Nicomachean Ethics and Topica. In the Topica Aristotle connects it to what all or some people believe, especially the wise. 13 Averroes follows Aristotle in this in both his Middle Commentary (para. 21) and his Short Commentary (para. 13), while giving a more systematic breakdown into the kinds of wise people (scientists, experienced doctors, etc.) who might hold different opinions. This is significant because, as Averroes notes, if all believe something, there is no doubt and so no need for dialectical methods. In the case of the doubt about God's knowledge, we know that some who might be considered wise, viz. the mutakallimūn, have opinions about it which are patently wrong. 14 Accordingly, the doubt about God's knowledge of created things is not known in itself according to what is widely-held.
Moreover, according to Averroes in both the Short and Middle Commentaries on the Topica, the contradictory or opposite of something well-known is also well-known. 15 That is to say, if the view of the mutakallimūn is well-known, then so is its contradictory. This is a further indication that this doubt is "according to what is widely-held." Accordingly, it is clear that the discussion of the "Epistle on Divine Knowledge" in general is a dialectical inquiry, as Averroes understands it.
12 That is, the division he employs in Averroes, Decisive Treatise and Epistle Dedicatory. 13 Aristotle, Topica 100b21-23. Cf. Ethica Nicomachea 1145b5. 14 "Epistle on Divine Knowledge," para. 5. 15 Short Commentary para. 13 and Middle Commentary para. 21.
Averroes' dialectical approach can be felt further in the way he structures opposing arguments in the determination of the doubt, in a form that is readily translatable to syllogisms.In the Prior Analytics and elsewhere Aristotle generally introduces syllogistic premises with the Greek εἰ, meaning "if," and signals the conclusion with the particle ἄρα.These terms come into Arabic as ʾin and lazima ʾanna respectively.These terms appear with some frequency in Averroes' determination of the doubt, suggesting that he is putting the arguments in syllogistic form.This is further tied in with the art of dialectic, as Averroes says at the opening of his Middle Commentary on the Topics:
مقدمات من نعمل ٔن ا سائلين كنا ٕذا ا بها نقدر التي الصناعة بالجملة هي الصناعة هذه يروم كلى وضع كل حفظ وعلى حفظه, المجيب يتضمن وضع كل ٕبطال ا على قياسا مشهورة مجيبين. كنا ٕذا ا ٕبطاله ا السائل
This art is in general the art through which we are able, when we are questioners, to construct a syllogism out of well-known premises in order to refute any thesis which the respondent has committed himself to defend, or to defend any universal thesis which a questioner strives to refute, when we are respondents. 16 That is, according to Averroes - and here he is following al-Fārābī's reading of the opening line of Aristotle's Topics 17 - the dialectician should be able to argue both sides of a (universal) thesis using syllogisms built out of well-known premises. This is, in fact, what we find in paragraph 3 of the "Epistle on Divine Knowledge." Averroes presents two contradictory theses followed by arguments in the form of a syllogism. The theses are: (T) Created things in God's knowledge are the same before they exist as they are after they exist.
(¬ T) Created things in God's knowledge are not the same before they exist as they are after they exist.
This formulation sounds somewhat awkward because it takes as its subject the created things as objects of God's knowledge. Averroes then takes the negative thesis (¬ T) as a premise (beginning with ʾin) and draws the conclusion (lazima ʾanna) that eternal knowledge changes (mutaġayyiran) in response to creation. This argument assumes that a change in the object of God's knowledge is a change in God's eternal knowledge itself. This assumption is not controversial and so could be accepted as a universal well-known premise according to Averroes' conditions for dialectic. Thus we can restate this argument as a syllogism:
The objects of God's knowledge are subject to change (i.e., ¬ T)
The objects of God's knowledge are part of eternal knowledge
(Some) eternal knowledge is subject to change
The conclusion that (any) eternal knowledge is subject to change is, according to Averroes, "absurd" (mustaḥīl). 18 Accordingly, this syllogism is brought by Averroes to refute thesis ¬ T.
16 Middle Commentary, para. 1. 17
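One way to make the shape of this refutation explicit is to set it out in traditional syllogistic notation. The rendering below is only my own gloss on the restatement just given (the figure and mood labels are not Averroes' or the paper's), and it assumes, as the argument does, that the class of objects of God's knowledge is not empty.

```latex
% M = objects of God's knowledge, P = what is subject to change, S = eternal knowledge
\[
\begin{array}{ll}
\text{All } M \text{ are } P & \text{(the objects of God's knowledge are subject to change, i.e.\ } \neg T)\\
\text{All } M \text{ are } S & \text{(the objects of God's knowledge are part of eternal knowledge)}\\
\hline
\text{Some } S \text{ is } P & \text{(some eternal knowledge is subject to change)}
\end{array}
\]
```

Read this way, the middle term is the subject of both premises (third figure, mood AAI), an inference that is valid only granted the non-emptiness just assumed; the "absurd" conclusion then falls back on the premise ¬ T, exactly as the refutation requires.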
When examining the contradictory thesis, T, viz.created things in God's knowledge are the same before they exist as they are after they exist, Averroes constructs a literary dialogue with an unnamed adversary to interrogate the question of whether created things are in themselves the same after they are created or different.The adversary admits (salima) that they are not the same, and thereby is led to admit that the knowledge of created things changes when those things are created.This admission is equivalent to ¬ T and the adversary has thus been led into accepting both T and ¬ T, i. e., into a contradiction.
From this Averroes concludes, "One of two things is obligatory; either eternal knowledge differs in itself, or generated things are not known to it" (para.3).These two are not proper contradictories.Yet they do follow from another set of contradictories: (S) God knows created things.(¬ S) God does not know created things.
If S, then we are faced with T or ¬ T, which are either absurd or selfcontradictory according to Averroes.Yet ¬ S is also "absurd" (mustaḥīl), according to Averroes, though he does not say why -and indeed is famously blamed for holding precisely this position. 19In any case, Aver-18 Glossarium graeco-arabicum lists this word as a possible translation of ἄτοπος.Note that the parenthetical additions of "some" or "any" here are not in Averroes' text, but are not inconsistent with his argument. 19This controversy may be based somewhat on Averroes' statement in his Short Commentary on Aristotle's Parva naturalia that separate intellect know only universals, not particulars.See Averroes, Compendia librorum Aristotelis qui Parva naturalia vocantur, ed.H. Blumberg (Cambridge, Mass.: The Medieval Academy of America, roes is clearly pursuing a dialectical approach, via thesis and its contradictory and arguing each on the basis of syllogism.
This approach continues in the dialogues Averroes creates with the mutakallimūn and with al-Ġazālī in paragraphs 5 and 6 respectively.The mutakallimūn hold thesis T, but deny that God's knowledge changes when the things change.Averroes points out to the imagined interlocutors that this is not consistent with what knowledge of something that changes is.Note that he does this, too, by setting up contradictory theses, viz.
(V) When things come into existence, a change occurs, viz.coming from nothing into existence.
(¬ V) When things come into existence, a change does not occur.
Those who hold ¬ V, he says, "are being contentious" (kabirū), 20 while V must imply that God knows the change in that which comes into existence, thereby raising the questions of T and ¬ T, what Averroes calls "the previous doubt."Again, we see a thesis and its contradictory with arguments to rule out the possibilities, i. e., dialectical argumentation.
The solution of this doubt begins with another mini-dialogue.This time with al-Ġazālī.This dialogue does not identify an answerer or a respondent.Moreover, Averroes focuses on al-Ġazālī's meaning, maʿnāhu, rather than on his actual statement (qawl).According to Averroes, al-Ġazālī claimed (zaʿama) that knowledge and what is known are related (anna al-ʿilm wa-al-maʿalūm min al-muḍāfa)."Just as one of two related things may change and the other related thing not change in itself, so it would seem to occur in the case of God's knowledge, may he be glorified).That is, they change in themselves but his knowledge … does not change." 21Averroes refutes this view by appeal to the proper understanding of the category of relation.Averroes notes that the subject (mawḍūʿ) of the relation need not change along with a change in the object of the relation, but the relation (al-ʾiḍāfa) itself does actually 1972), p. 74-75.Averroes says in Metaphysics Λ that divine providence is applied to the species only, not to individuals.See Averroes, Tafsīr mā baʿd aṭ-ṭabīʿat, ed.Maurice Bouyges (Beirut, Imprimerie Catholique, 1938-42), vol.3, p. 1607 (C.38.r), cf.p. 1707-1708 (C.51.ii).See also Richard Taylor, "Averroes' Epistemology and its Critique by Aquinas," in R. E. Houser (ed.), Medieval Masters: Essays in Memory of Msgr.E. A. Synan (Houston, Tex., 1999), p. 147-177. 20Aristotle also frequently dismisses certain arguers as "contentious" (ἐριστικός)see, e. g., De sophisticis elenchis 172a8-9, though it is not entirely clear that this is equivalent to the Arabic here.Still, Glossarium graeco-arabicum lists al-mukābara as a translation of ἐριστικόν in Themustius' Commentary on Aristotle's De anima (https://glossga.bbaw.de/glossary.php@id=189716.html). 21Butterworth trans.modified.change.Averroes' example is a column on Zayd's right at one point that is on his left at another, without Zayd moving.Zayd has not changed, even though the relation "to the right of Zayd" has changed to "to the left of Zayd." Averroes does not here deny that knowledge is a relation. 22Indeed, when Averroes discusses relation (al-ʾiḍāfa) in some detail in the context of what he identifies as topos 27 in his Middle Commentary on the Topics (para.165), he frequently uses knowledge (al-ʿilm) as an example.In this he follows Aristotle who also employed ἐπιστήμη as an example in the parallel passage in his Topica, at 125a.Averroes identifies a kind of relation that is determined by prepositions, such as li, and notes that sometimes things related in this way can convert such that when A is related to B, B is also related to A. An example of this, says Averroes in a section preserved only in the 14th c.Hebrew translation of Qalonimos ben Qalonimos, is knowledge and what is known. 23Moreover, notes Averroes there in a section preserved in the Arabic, knowledge is an example of something that can be said by a syllogism 24 of that which is known and of the soul that knows.Knowledge exists in the soul and in the things which are known and which are outside of the soul.If it should happen that an inquiry is into the soul, then the knowledge will necessarily exist in the thing which is known.
well as in the soul of the knower.This kind of knowledge is fundamentally distinct from the knowledge one gains when looking into one's own soul. 25l-Ġazālī's mistake was not only that he did not know how to argue properly about relation, it was also because he made "a syllogism between what is not seen and what is witnessed." 26That is he made a syllogistic inference about God's knowledge based on his own, human knowledge.This syllogism meant that he combined the two kinds of knowledge that Averroes mentioned in his discussion of topos 27, knowledge by syllogism of things outside the soul and knowledge of what is in the soul.In fact, al-Ġazālī applied what he knew from his own soul's knowledge to a kind of Knowledge that is distinctly outside of his soul, viz.God's knowledge.Al-Ġazālī's big problem, then, was that he did not know how to make topical arguments of things in relation to one another.This may have been due to the fact that al-Ġazālī never had the opportunity to read Averroes' Middle Commentary on Aristotle's Topica.
In responding to al-Ġazālī and in solving the initial doubt, Averroes employs an argumentation technique he recommends throughout his commentaries on the Topica: he takes the opposite, not of the proposition T, but of one of its terms, in this case God's knowledge. 27Al-Ġazālī themselves in themselves and in the soul of the knower.Should an existing thing change, the knowledge would also change.When it comes to the soul's knowledge of soul, then there is no possibility of a par between the knowledge of the knower and the knowledge in the thing known.Indeed, since they are identical, the soul is the cause of the knowledge of the soul.That God knows himself would follow from the Arabic translation of Aristotle's Metaphysics Λ; Averroes, Averroes, Tafsīr mā baʿd aṭ-ṭabīʿat, vol.3, p. 1692 (T.51.r).See also Averroes' comments on p. 1700-1701 (C.51.r-s).Cf.Steven Harvey, "Notes on Maimonides' Formulations of Principle K," Iyyun: The Jerusalem Philosophical Quarterly, vol.68 (2020), p. 233-244.See also Averroes, Tahafot al-tahafot.L'incohérence de l'incohérence, ed.Maurice Bouyges (Beirut, Dār al-Mašriq, 3rd ed., 1992), p. 459, 22. Accordingly, he is the cause of his own knowledge.This may be at the heart of what Averroes has in mind in the "Epistle on Divine Knowledge" when he distinguishes between generated knowledge which is caused by the existing things and eternal knowledge, i. e., divine knowledge, which is the cause of those things and which God knows through knowing himself (para.7).This argument is highly conjectural.Note that Di Giovanni argues that God's knowledge of himself can not be productive of his own knowledge.According to him, the arguments in the "Epistle on Divine Knowledge" and Metaphysics Λ are not consistent."Philosophy Incarnate," p. 152-155. 26Butterworth trans.modified, p. 41.Butterworth translates the term qiyās here "analogy."Readers who consider that what is meant here is not a proper syllogism are invited to replace "syllogism" and "syllogistic inference" with "analogy" throughout this paragraph.Cf. n. 24 above.and others had assumed that God's knowledge is like any other knowledge, but Averroes argues that it is in a different state (al-ḥāl … ḫilāf…).The state of human knowledge is dependent on, or the cause of the existing things which it knows.Consequently it changes when they change.If God's knowledge is in a different state, then it is not dependent on the existing things which it knows.Averroes, however, goes beyond what one could infer from taking an opposite view and says that God's knowledge is the cause of the created things, or that the created things are dependent on God's knowledge.This fundamentally different kind of knowledge does not change even when the existing things change.This allows him to adopt proposition T, since created things in God's knowledge are unchanging, even if the created things in themselves are subject to change.Eternal knowledge is not affected by any changes in the created things, since it is prior to them in causation and independent of them.This effectively solves the doubt, in Averroes' view.
It should be clear by now that the "Epistle on Divine Knowledge" is a thoroughly dialectical work, and follows the criteria for dialectic that Averroes himself finds in his commentaries on Aristotle's Topica.I have discussed in some detail the dialectical character of all of the internal dialogues.The frame dialogue, which is not adversarial, would seem to suggest that the dialectic between Averroes and the unnamed addressee is for the sake of practice and learning, as outlined at the opening of the Middle Commentary on the Topica and at the very end.Thus, e. g. in his comments on Topica VIII.4,Averroes refers to those whose intention is training in this art and determining the thing sought which is spoken of with regard to the demonstrative science, not those whose intention is contention. 28
البرهانى العلم نحو فيه يتكلمون الذى المطلوب وتوطئة الصناعة بهذا الارتياض غرضهم الذين الغلبة. غرضهم الذين لا
Earlier, at the opening of Book VIII, Averroes notes, "the philosopher and the dialectical person share in the inquiry into discovering the topos." 29 If the topos here is something like Divine Knowledge as causal knowledge, then the entire "Epistle on Divine Knowledge" could be an exercise in coming to discover that. It is thus possible to see the work as a dialectical work aimed at coming to the basis of an argument about Divine Knowledge 27 with an eye to demonstrative sciences, i. e., as on the way to philosophy proper. Also, at the very end of the Middle Commentary Averroes interprets Aristotle's statement, "One should not engage in dialectic with everyone, nor should one exercise with one who just happens to be there." 30 Averroes takes this to mean that one should avoid using dialectical arguments with the "dialectical person" (al-insān al-ǧadalī) whose intention is training (al-ʾirtiyyāḍ). 31 This would seem to indicate that dialectical arguments ought to be taken up with those who are not dialectical people, but people training in dialectical arguments in order to gain proficiency in demonstrative science, i. e., with potential philosophers.
METAPHYSICS AND THE EPISTLE ON DIVINE KNOWLEDGE
In fact, Averroes hints at an even more specific intended readership for the dialectical arguments of the "Epistle on Divine Knowledge" at the end of paragraph 2. There Averroes says, "For one who does not know the knot will not be able to untie it" الحل( على يقدر لم الربط يعرف لم من ٕنه .)فاThis would appear to be a restatement of what Aristotle says at Metaphysics Β, 995a29-30: "It is not possible to untie a knot about which you are ignorant" (λύειν δ' οὐκ ἔστιν ἀγνοοῦντας τὸν δεσμόν).Usṭāṯ's translation of Metaphysics Β, which was the one Averroes used at least when composing the Long Commentary on the Metaphysics, 32 renders this line as follows: الرباط جهل من يحل ان يقدر .ولا 33 The similarity between this line and the one at the end of Averroes' "Epistle on Divine Knowledge," para.2, is quite clear.Indeed, the two are so similar that there is virtually no room for doubting that the "Epistle on Divine Knowledge" is referring to Metaphysics Β.
What does Averroes mean to convey by this reference?Well, to my mind, it is a rather clear signal to any reader familiar with Metaphysics Β that the inquiry presented in the "Epistle on Divine Knowledge" is intended to correspond to the kind of inquiry Aristotle describes in Metaphysics Β. Recall that Metaphysics Β is the book in which Aristotle presents a series of ἀπορίαι which must be addressed before beginning the search for knowledge (ἐπιστήμη).Now Averroes, in both the Middle and Long Commentaries on the Metaphysics, identifies the process of addressing these ἀπορίαι as dialectic.I bring here what he says in the Middle Commentary, since it is probably chronologically closer to the writing of the "Epistle on Divine Knowledge" (and the approach in the Long Commentary does not differ significantly for our purposes).Since the Middle Commentary is not extant in Arabic, 34 I bring it only in the 14th century Hebrew translation of Qalonimos ben Qalonimos of Arles.There we find the following: We must first examine the deep questions that are mentioned in this science which we seek … Indeed, this is necessary because the first thing those who want to grasp knowledge of things and their principles do is make a strong inquiry into the dialectical statements that are doubtful from among the deep questions in that genus. 35
החכמה בזאת יזכרו אשר העמוקות בשאלות ראשונה לחקור מוכרחים אנו להשיג הרוצים פעל שהתחלת לפי מחוייב זה היה ואמנם … הנה המבוקשת המספקים הנצוחיים המאמרים החקירה חוזק הוא והתחלותם הדברים ידיעת הסוג… באותו אשר העמוקות מהשאלות
The ἀπορίαι have apparently become deep questions.Indeed, it seems to me that the question-answer format of addressing these issues, as we find, for example, in Metaphysics Β, played a large part in Averroes' association of these questions with dialectic.The doubt associated with the questions is, no doubt, another factor in Averroes' decision to connect the ἀπορίαι with dialectic.
Averroes continues, For in as much as the doubter is unable to understand some of the deep The bind, or knot, is thus associated with two chief components of dialectic, doubt and the stance between two opposite propositions.The doubter who is caught in this bind is accordingly in the predicament of dialectic, as Averroes understands it.
Averroes goes on: One who is in doubt about something cannot resolve his doubt with something from within the genus of statements which necessarily led him into the bind on that matter, i. e., the dialectical statements, but rather with another genus of statements, i. e., demonstrative statements. 37חייבו אשר המאמרים מסוג הוא בדבר ספקו שיתיר אפשר אי בדבר המסופק המאמרים והם אחר מסוג אבל הנצוחיים, המאמרים והם הענין באותו הקשר המופתיים. Clearly, the resolution of the doubts raised through questioning and dialectic is through demonstration, rather than through dialectic.In other words, true solutions to metaphysical questions are through demonstrations, not through dialectic.
Nevertheless, Averroes gives us the following syllogism accounting for why dialectic is useful at the beginning stages of studying metaphysics.
If grasping the truth about these deep questions is resolving the bind that occurs to the understanding with inquiry about them, and if this resolving occurs after the bind, it necessarily follows that before inquiring into them, you should first inquire into the statements that are similar in understanding to the bind. These are the dialectical statements. This is one reason it is necessary to precede deep questions with a dialectical inquiry.
This syllogism is clearly intended to show that although demonstrations are preferable, we ought to begin with dialectical statements before proceeding to demonstrations. Yet, what kind of syllogism is this? Clearly it is of the first figure: if a is b and b is c, then a is c. As such it is valid. Yet examination of the first premise, viz. that grasping the truth about these deep questions is resolving the bind, makes clear that this is not a demonstrative premise. Indeed, the notion that resolving questions is grasping the truth does not completely conform to what Averroes had just said in the previous sentence, viz. that demonstration is the proper way to the truth. While demonstration could be in answer to questions, it need not be. Rather, it would seem that what Averroes has in mind here is dialectic, especially in light of the conclusion. That dialectical resolution of doubt is grasping the truth is at best a dialectical premise, accepted by dialecticians, but not by those of the demonstrative class. That is, this syllogism is a dialectical syllogism.
36 Averroes, Commento medio, ed. Mauro Zonta, vol. 2, t. 1, p. 9. 37 Averroes, Commento medio, ed. Mauro Zonta, vol. 2, t. 1, p. 9. 38 Averroes, Commento medio, ed. Mauro Zonta, vol. 2, t. 1, p. 9.
Why does Averroes employ a dialectical syllogism to argue for the importance of dialectic?Let me suggest a dialectical answer.Either the reader recognizes it as a dialectical syllogism or not.If he recognizes it as dialectical and is familiar with demonstrations, then he does not need to work too much on the questions and answers in Metaphysics Β, but can skim them over or skip them and then move on to demonstrations.If not, then he must learn them and thoroughly familiarize himself with the kinds of syllogisms before he can move on to do demonstration proper.This kind of dialectical syllogism, then, performs a didactic function; it works with and encourages students who have not thoroughly understood the content of the Posterior Analytics tradition, while also indicating to those who do understand the syllogism that this is a dialectical, not demonstrative argument. 39
CONCLUSION
In the first part of this paper, I argued that the arguments Averroes employs in the work that came to be known as the "Epistle on Divine Knowledge" are dialectical and can be understood according to the description of dialectical arguments in Aristotle's Topica, as interpreted by Averroes in his Middle Commentary on the Topica. In the second part of the paper, I argued that Averroes uses a literary allusion to associate the "Epistle on Divine Knowledge" with Metaphysics Β, and the arguments of the "Epistle on Divine Knowledge" with the kind of dialectical 39 arguments we find there. Now it is also clear that the subject of the Epistle, God's knowledge of created things, is metaphysical, and indeed discussed by Averroes in his Middle and Long Commentaries on Metaphysics Λ. Averroes's discussion in those places is quite well known, and it is clear that his solution to the problem of God's knowledge of particulars is roughly the same in all places: God and God's knowledge are one and the cause of those particulars, and so his knowledge is of a different kind. 40 Whatever Averroes' approach to Metaphysics Λ, it is clear that his approach to the question in the "Epistle on Divine Knowledge" is dialectical. It is dialectical, I suggest, in the way that Averroes sees Metaphysics Β as dialectical, viz. it is of an introductory kind, meant to be supplanted by demonstrations at a later point.
39 An anonymous reader suggests that Averroes employed a dialectical syllogism here because demonstration about God's knowledge is not possible and dialectic is the best that can be achieved. This may be the case, but Averroes is far from arguing in the "Epistle on Divine Knowledge" either that demonstration about this issue is not possible or that the argument he gives here is the best that can be achieved.
This use of dialectic is exactly parallel to the use of dialectic in education we find in Averroes' Commentary on Plato's Republic. As I have argued elsewhere, while Averroes generally removes dialectic and dialectical arguments from his version of the ideal city described in the Republic, dialectics is incorporated into the education of the guardians, i. e., of the potential philosophers. 41 This is due to its educational value, a point which Averroes also emphasizes at the beginning of the Middle Commentary on the Topica. I believe it is clear that the "Epistle on Divine Knowledge" too plays a didactic role. It is a short dialectical solution to a problem that is treated at greater length and with better preparation in the commentaries on Metaphysics Λ. As Averroes notes at the beginning of paragraph 6, the proper discussion would be long (ṭawīlan), and so what he presents here is the point (al-nuqṭa) at which this will be resolved, i. e., not the full demonstration of the resolution. In this case, the "Epistle on Divine Knowledge" is a work of didactic dialectic, meant for training potential philosophers.
Let me add as a kind of afterword that I do not think the Caliph Abū Yaʿqūb Yūsuf is the addressee of this letter, since he is not a potential philosopher. Averroes does not name the addressee of the "Epistle on Divine Knowledge," but only praises his good mind (ḏihn) and noble nature (ṭabʿ), which he says are greater (kaṯīran) than those of others who have pursued these sciences. Averroes then continues to say that the addressee's theoretical reflection (naẓr) has culminated in the doubt with which the "Epistle on Divine Knowledge" is concerned. Averroes refers to the addressee in the second person plural in the Epistle, which can indicate formality and respect of the kind expected in a literary treatise. In the Decisive Treatise, Averroes refers to the addressee of the Epistle as "one of our friends," and while the Arabic ṣāḥib can also mean "lord" or "master," it is more frequent in its use as "friend" or "fellow traveler." Accordingly, I do not see enough here to justify the statement that "the formula of address gives the reader to understand that the one addressed is a prince in high political office, and strongly suggests that he is Averroes' friend and patron the Almohade ruler Abū Yaʿqūb Yūsuf … for whose benefit Averroes had embarked on his commentaries on Aristotle more than a decade earlier." 42 Rather, why not assume Averroes' praise for the addressee to be a genuine compliment to his abilities? Why not assume the addressee to be a student of philosophy, who is sharp, intellectually gifted, and somewhat scientifically advanced? Perhaps, indeed, he has attained the level of the student of Metaphysics Β, as Averroes' literary allusion would suggest, and he has encountered questions but is not adept enough at metaphysical demonstration to resolve them. This work would help such a person, without fully explaining all demonstrations, and at the same time steer the reader into further metaphysical speculation.

42 Mahdi, "Averroes on Divine Law and Human Wisdom," p. 118-119. Sarah Stroumsa, Andalus and Sefarad: On Philosophy and its History in Islamic Spain (Princeton University Press, 2019), p. 134-144, calls into question the extent to which the commentaries were in fact commissioned by Abū Yaʿqūb Yūsuf, especially in light of the fact that Averroes was most likely already far into his commentary writing project before his legendary meeting with the Caliph. Di Giovanni argues that the argument of the "Epistle on Divine Knowledge" is intended to interpret the Almohad doctrine of the homonymy of knowledge between God and man in an Aristotelian manner that could encourage readers to pursue philosophy and metaphysics further. Still, his view is that this work is directed toward general thinkers in Andalusia living under Almohad rule and perhaps even some immersed in theology. He does not mention the Caliph as the possible addressee. See Di Giovanni, "Philosophy Incarnate," p. 156-162. | 9,366 | 2024-02-12T00:00:00.000 | [
"Philosophy"
] |
Online Detection of Driver Fatigue Using Steering Wheel Angles for Real Driving Conditions
This paper presents an online drowsiness detection system for monitoring driver fatigue level under real driving conditions, based on steering wheel angle (SWA) data collected from sensors mounted on the steering lever. The proposed system first extracts approximate entropy (ApEn) features from fixed sliding windows on the real-time steering wheel angle time series. It then linearizes the ApEn feature series through adaptive piecewise linear fitting with a given deviation. Next, the detection system calculates the warping distance between the linear feature series of the sample data. Finally, it uses the warping distance to determine the drowsiness state of the driver with a designed binary decision classifier. The experimental data were collected from 14.68 h of driving under real road conditions, covering two fatigue levels: "awake" and "drowsy". The results show that the proposed system is capable of working online with an average accuracy of 78.01%, 29.35% false detections of the "awake" state, and 15.15% false detections of the "drowsy" state. The results also confirm that the proposed method based on the SWA signal is valuable for applications in preventing traffic accidents caused by driver fatigue.
Introduction
Drowsiness seriously impairs people's ability to drive, as they find it difficult to maintain their attention on the task, and it is a serious risk on the road. It is reported that 35%-45% of road accidents are caused by drowsy driving (i.e., driving while sleepy or fatigued) [1,2], which is why more and more studies aim to design an automatic detector to deal with this dangerous problem.
Recent studies have focused on two kinds of methods, intrusive and non-intrusive, to detect driver drowsiness. Intrusive detection analyzes the physiological state of the driver through electroencephalographic (EEG) and electrooculographic (EOG) information features [3][4][5][6][7][8]. Generally, fatigue detection systems based on EEG and EOG signals provide high accuracy; however, they rely on physiological information measured by sensors located on or around the driver, and the driver's movement therefore affects the reliability of data collection. Non-intrusive methods provide a fatigue warning based on the facial features [9,10] or the behavioral characteristics [11][12][13][14][15] of the driver. Facial features vary under different lighting conditions in the video frame, which affects the performance of the fatigue detection system. Behavioral characteristics, including steering wheel movement (SWM), standard deviation of the lane position (SDLP), and steering wheel angle (SWA), are appealing because they are reliable, real-time, and non-invasive, as the sensors embedded in various places inside the vehicle can acquire the operating information accurately and in real time. These characteristics have already demonstrated great importance in driving assistance systems [16][17][18].
The SWM method analyzes the steering wheel movement data collected from sensors mounted on the steering lever [19][20][21]. It measures the fatigue state based on the frequency of minor steering corrections [22]. When the driver is in a drowsy state, the frequency of his steering corrections reduces markedly. To avoid the interference of lane-changing, researchers usually conduct this test when only small steering angles are required. The SWM method is very reliant on the geometrical features of the road. The method can only work in certain situations.
SDLP evaluates the driver's drowsiness level through an external camera, which tracks the position of the lane [23][24][25]. An experiment conducted by Ingre et al. [23] derived several statistical features based on SDLP, and found that when Karolinska Sleepiness Scale (KSS) ratings increase, SDLP (meters) also increases. However, it does not show consistent results on all subjects. For some subjects, the KSS ratings are very high, while SDLPs do not increase accordingly. Therefore, there are two shortcomings for this method: first, its robustness is not satisfactory; second, it is highly affected by external factors, such as lane marks, temperature, lighting, etc. In addition, drivers under the influence of alcohol and drugs will show the same SDLP features.
These reported systems have achieved good results [15]; however, the recorded data are analyzed in a simulated environment. Few reports have focused on fatigue detection under real driving conditions; this complexity will bring non-linearity, time-space variation, and instability to the driving event. This paper proposes an online fatigue detection method based on SWA data collected from sensors mounted on the steering lever under real driving conditions. The core of our method is to extract useful information from SWA data and work online. Firstly, it extracts an approximate entropy from the SWA time series within fixed time windows. Next, it gives linear expressions to the approximate entropy (ApEn) feature series within the given deviation. After that, it calculates the warping distance between the linear features on-line. Finally, the system determines drivers' fatigue states: "awake" or "drowsy" by measuring the calculated warping distance in real time.
The remainder of this paper is structured as follows: Section 2 reviews ApEn as a nonlinear dynamic parameter for quantifying the irregularity of time series. It also briefly introduces the proposed SWA-based online drowsiness detection system, including extracting features from the SWA data, building piecewise self-adaptive linear expressions of the non-linear ApEn series, calculating the warping distance between linear features to measure their similarity to fatigue states, and designing a fatigue level classification method. Section 3 shows the experimental results for the online detection of driver fatigue based on SWA data collected from sensors in cars running on real roads. Section 4 discusses the performance of the proposed method under real driving conditions. The conclusion is given in Section 5.
Methodology
The SWA-based fatigue detection method first extracts the approximate entropy from the SWA time series within fixed time windows. Then, the ApEn series are given piecewise self-adaptive linear expressions. Following that, the warping distance between the linear feature series is used to measure their similarity to fatigue states, so as to activate the online fatigue level detection. Finally, the driver's fatigue level is decided online by a designed binary classifier, as shown in Figure 1.
Extracting Approximate Entropy from Steering Wheel Angle
In previous studies [26][27][28][29], abnormal features in steering wheel operating behaviours appeared more frequently under drowsy conditions than under waking conditions. The fatigue-related information in the SWA data that we use for analysing and recognizing drivers' fatigue levels is usually hidden and nonlinearly distributed. These hidden nonlinear features, extracted under different drowsy conditions, are therefore important signs for driver fatigue level detection.
Irregularity analysis is considered to be an effective approach for evaluating the nonlinearity of dynamical signals. Since (1) changes in the physiological and behavioural processes of the driver can be characterized by ApEn [30,31] and (2) the irregularity and complexity of stochastic time series can be quantified through ApEn [32,33], the irregularity of the stochastic steering wheel angle time series can also be quantified by ApEn features. Indeed, ApEn has recently become popular in applications that quantify irregularity and complexity in stochastic time series [32,33].
We can obtain a robust estimate of ApEn from short, noisy time series data. Let the SWA time series be the input; the ApEn is calculated as

ApEn(m, r, N) = Φ^m(r) − Φ^{m+1}(r),  Φ^m(r) = (1 / (N − m + 1)) Σ_{i=1}^{N−m+1} ln(B_i / (N − m + 1)),     (1)

where m is the number of embedded dimensions, r is the scale or tolerance parameter, N is the total number of data points in an observation period (the length of the input time series), and B_i is the number of j such that d(X(i), X(j)) ≤ r for a given i, in which X(·) represents the m-dimensional vectors reconstructed from the input time series X_SWA = {X_SWA(1), X_SWA(2), . . . , X_SWA(N)}:

X(i) = (X_SWA(i), X_SWA(i + 1), . . . , X_SWA(i + m − 1)),  i, j = 1, . . . , N − m + 1,     (2)

where d(X(i), X(j)) is the distance between the vectors X(i) and X(j), defined as the maximum difference between corresponding elements. When calculating ApEn, increasing m adds to the computational workload; meanwhile, the instantaneous change of the steering wheel control variables is also increased, which would further affect the drowsiness features in the SWA data. Following Yentes [34] and Pincus [35], we set m = 2 and r = 0.2 × SD in this paper, where SD is the standard deviation of a fixed 2-second sliding window.
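To make the computation concrete, the following Python sketch implements the standard ApEn estimate described above with m = 2 and r = 0.2 × SD. The synthetic SWA segment and the window length of 200 samples (2 s at 100 Hz) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D series, standard Pincus definition."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # Build the (N - m + 1) embedded vectors of length m.
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-abs) distance between every pair of vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # C_i^m(r): fraction of vectors within tolerance r of vector i.
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# Example: ApEn over a fixed 2 s sliding window of SWA samples (100 Hz -> 200 points).
rng = np.random.default_rng(0)
swa_window = np.cumsum(rng.normal(scale=0.1, size=200))  # synthetic SWA segment
print(approximate_entropy(swa_window))
```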
Representing ApEn Series with Adaptive Piecewise Linear Approximation
The ApEn series captures the non-linear variation features of the SWA data, but it cannot describe the fatigue level directly. Linear segmentation is a classic method of expressing time series; applying it to the ApEn series segments it and extracts basic variation modes that are relatively independent. Moreover, linear expressions of time series are good in morphology and segmentability, so, after processing, they can be naturally segmented into different linear segments according to their variation forms. Every segment clearly demonstrates the variation features of the time series in the given period of time, and each segment is relatively independent morphologically. In short, one linear segment can be taken to represent a relatively independent variation mode. The linear approximation of the time series is

Ŷ_i = b_1 X_i + b_0,  e_i = Y_i − Ŷ_i,     (3)

where (X_i, Y_i) is the observed value of the series, (X_i, Ŷ_i) is its linear approximation, b_1 and b_0 are the coefficients estimated under the least-squares principle, e_i is the residual of the observed value with respect to the approximated line, and the deviation bound used for the adaptive piecewise linear approximation (APLA) is set to 0.2.
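As an illustration of the adaptive piecewise linear fitting, the greedy segmentation sketch below grows each segment while the least-squares residuals stay within the deviation bound. The exact segmentation strategy of the original APLA is not specified in the text, so this is only one plausible reading.

```python
import numpy as np

def adaptive_piecewise_linear(y, max_dev=0.2):
    """Greedy APLA sketch: extend the current segment while the least-squares
    fit keeps every residual within max_dev; otherwise start a new segment.
    Returns a list of (start, end, b1, b0) tuples."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    segments, start = [], 0
    while start < n:
        end = min(start + 1, n - 1)
        best = (start, end, 0.0, float(y[start]))
        while end < n:
            xs = np.arange(start, end + 1)
            if len(xs) >= 2:
                b1, b0 = np.polyfit(xs, y[start:end + 1], 1)
            else:
                b1, b0 = 0.0, float(y[start])
            resid = np.abs(y[start:end + 1] - (b1 * xs + b0))
            # Stop growing once the deviation bound is violated.
            if resid.max() > max_dev and end > start + 1:
                break
            best = (start, end, b1, b0)
            end += 1
        segments.append(best)
        start = best[1] + 1
    return segments
```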
Calculating Warping Distance of Dynamic Time Series
Dynamic time warping (DTW) is a widely applied measurement method for time series that has given satisfactory results in data clustering, classification, pattern discovery, and similarity searching. This method can conduct a warping measurement for time series of unequal lengths, and possesses robustness for abnormal data.
DTW obtains an optimal warping route by adjusting the correspondence between elements at different points in the time series, giving a better measure of the relationship between the series. Suppose there are two time series Q = q_1, q_2, q_3, . . . , q_{N_Q} and C = c_1, c_2, c_3, . . . , c_{N_C}, where N_Q and N_C represent the lengths of Q and C, respectively. DTW finds the optimal warping route between the two series by computing the minimum cumulative distance, as shown in Equation (4):

γ(i, j) = d(q_i, c_j) + min{γ(i − 1, j), γ(i, j − 1), γ(i − 1, j − 1)},     (4)

where i = 1, 2, . . . , N_Q, j = 1, 2, . . . , N_C, γ(0, 0) = 0, γ(i, 0) = γ(0, j) = ∞ for i, j > 0, and d(q_i, c_j) is the local distance between elements used in this paper; the warping distance between Q and C is γ(N_Q, N_C).
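A minimal sketch of the warping-distance computation is given below. The absolute difference used as the local cost d(q_i, c_j) is an assumption, since the paper does not spell out its local distance.

```python
import numpy as np

def dtw_distance(q, c):
    """Classic dynamic time warping between two 1-D series of possibly
    unequal length, using |q_i - c_j| as the local cost (assumption)."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    nq, nc = len(q), len(c)
    gamma = np.full((nq + 1, nc + 1), np.inf)
    gamma[0, 0] = 0.0
    for i in range(1, nq + 1):
        for j in range(1, nc + 1):
            cost = abs(q[i - 1] - c[j - 1])
            gamma[i, j] = cost + min(gamma[i - 1, j],      # insertion
                                     gamma[i, j - 1],      # deletion
                                     gamma[i - 1, j - 1])  # match
    return gamma[nq, nc]
```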
Detecting Fatigue Patterns Online
Once the warping distance has been obtained from Equation (4), we set an online discrimination criterion for fatigue level detection. We mark the fatigue level of the driver with "0" and "1" based on a decision-making model; these stand for the two fatigue states "awake" and "drowsy". The SWA-based online detection rule is given by Equation (5), in which F_DETEC ∈ {0, 1} is the output fatigue level of the binary decision model and S stands for the standard (reference) linear time series acquired through online unsupervised learning; the learning rule, Equation (6), selects the reference using var(x), the variance of the series x. S_i in Equation (5) stands for the linear fitted value of the SWA time series in the i-th time window acquired through Equation (3). The threshold λ in Equation (5) is determined by Equation (7), where L is the length of the ApEn series of the sample SWA data; with a sampling period of one minute, L = 30.
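Because Equations (5)-(7) are not reproduced in the text, the sketch below only illustrates the general decision logic: a reference "awake" series is selected by an unsupervised rule and a sample is labeled drowsy when its warping distance to the reference exceeds a threshold. The variance-based reference selection and the threshold handling are assumptions; the distance_fn argument can be, for example, the dtw_distance sketch shown earlier.

```python
import numpy as np

def learn_reference(feature_windows):
    """Illustrative unsupervised choice of the reference series: pick the
    linearized feature window with the smallest variance, assuming that a
    stable, low-variance steering pattern corresponds to the awake state
    (an assumption; Eq. (6) is not reproduced in the text)."""
    variances = [np.var(w) for w in feature_windows]
    return feature_windows[int(np.argmin(variances))]

def classify_fatigue(sample_features, reference_features, lam, distance_fn):
    """Binary decision sketch: label the sample 1 ("drowsy") when its warping
    distance to the awake reference exceeds the threshold lam, else 0."""
    return 1 if distance_fn(sample_features, reference_features) > lam else 0
```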
Experiment Setup
Due to the monotonous driving environment on motorways, drivers easily become tired. According to related research, 66% of accidents are caused by tired driving, 60% of accidents happened at speeds over 80 km/h, and 80% of accidents happened at speeds over 60 km/h [36]. We therefore conducted the experiment along a driving route from Beijing to Qinhuangdao, China, as shown in Figure 2. All drivers participating in this experiment were required to hold a valid driver's license and to have at least one year of driving experience. The experiment started after lunch, when people are prone to be sleepy. The recording system, named VBOX, was donated by China FAW (First Automobile Works) Group Corporation. In addition, the sensors collected data including SWA, Brakeforce, Leftsteer, Rightsteer, Can_braking, Can_thrott, YawRate, X_Accel, Y_Accel, and synchronous facial video of the driver during driving. The experiment consisted of two stages: first, drivers were given 15 min of preparation to familiarize themselves with the driving conditions, and then they took a 90 min drive. The driving speed was set at 100 km/h for all drivers, while they were allowed to adjust the speed and position depending on the real road situation. Note that we chose the 90 min observation time based on the general observation that most people become tired in monotonous driving conditions after 60 min. The 90 min of driving is therefore able to provide data for both fatigue states: "awake" and "drowsy".
We kept quiet during the experiment and set up cameras in the driving cabin to record the facial expressions of the drivers at a rate of 15 Hz. Data including SWA, brakeforce, leftsteer, rightsteer, can_braking, can_thrott, YawRate, X_Accel, Y_Accel, etc. were recorded at a rate of 100 Hz. SWA data represent the operating behavior characteristics of a driver most significantly [36]. Compared to the other recorded signals, SWA made the greatest contribution to the recognition of the driver's fatigue state, and so we restricted our focus to the SWA data. The average age of the participating drivers was 39.6 years, and their average driving experience was 9.6 years. The total experiment time exceeded 20 h. Because four of the observed drivers did not show a drowsy state during the experiment, we only used the data of the six valid drivers in this paper. The total experiment time of these six drivers was over 14 h.
Fatigue Level Criteria
It is necessary to obtain prior knowledge of fatigue levels from sample data for both operating features extraction and detection model design and construction. In order to label the fatigue level of the corresponding data samples, we require an accurate and reliable measuring method to evaluate the real fatigue level of the driver. We therefore invited three experts to join our experiment.
There are three steps to constructing a qualified sample data of driver's fatigue level: segmentation, evaluation, and visualization of the sample experiment data.
Segmentation of Synchronous Facial Videos and Operating Feature Data
We clipped facial videos into 1-min segments by video clipping software, and then the operating feature data were spliced by starting time and ending time of the video clips, accordingly.
According to [36], we used the video-and-expert method, which evaluates the video clips based on the facial fatigue level evaluation criteria. The three experts evaluated all facial video samples against this criteria chart within the given time windows. Consistent results are marked directly by the three experts without argument; otherwise, they negotiate until an agreement is reached. If no agreement can be made, the sample is discarded.
After that, we visualize the samples; if a curved line or lane shifting appears, we discard the sample. Finally, the qualified sample data with fatigue level labels are stored in the sample database. Table 1 shows the criteria for fatigue level evaluation.
Experiment Results
The ApEn was computed for all samples; we take Subject 6 as an example. The ApEn distribution of the SWA time series under real driving conditions is shown in Figure 3. The horizontal axis represents the index of the short-time sliding window of the sample, and the vertical axis represents the computed ApEn for each window in each sub-graph. The red marks 0 and 1 are the labels of the two fatigue levels, "awake" and "drowsy", as evaluated by the experts. Figure 3a shows the ApEn distribution of Subject 6's fatigue states in the first 20 samples, Figure 3b shows the following 20 samples, and Figure 3c shows the last 14 samples. As these figures show, the ApEn values of the SWA data are mainly distributed in the interval [0.8, 1] when the driver is in the "awake" state ("0"), and are distributed more widely in the interval [0.4, 1] when the driver is in the "drowsy" state. There is thus evidence that the ApEn value distribution of the driver's SWA data shows a noticeable difference between the two fatigue levels. This motivates us to look more closely at the ApEn, explore the driver's fatigue features from the SWA data, and detect the fatigue level.

Figure 4 shows the results of the adaptive piecewise linear approximation of the ApEn series from Equation (3). Blue curves represent the ApEn series of the sample, and red curves represent their linear approximation results. The red lines clearly express the changing trend of the ApEn series in the corresponding segment, and thus demonstrate the inherent feature. It can be concluded that the linear approximation simplifies the ApEn data distribution while reflecting its morphological features. This finding provides more visible grounds for determining the fatigue levels. Moreover, as seen from this figure, when the ApEn series distribution shows great changes, the approximated lines are naturally segmented into different forms. Each segment, relatively independent in morphology, directly reflects the variation features of the time series in the given period of time. Figure 4 also shows that each adaptive linear segment can represent a relatively independent variation mode and describe the inherent rules of the series.

Figure 5 shows the warping distances between series calculated by Equation (4) for Subject 6, where "o" represents a sample in the awake state and "*" represents a sample in the drowsy state. The warping distances of the samples under different fatigue levels fall into distinct distribution intervals. According to Equation (6), we set the threshold used to determine fatigue levels (shown with the red line), which clearly separates the drowsy and awake samples.

The accuracy (AC), false positive rate (FP), and false negative rate (FN) of the data set are usually used to measure the performance of the detection system. They are calculated as

AC = (TP + TN) / (TP + TN + FP + FN),  FP rate = FP / (FP + TN),  FN rate = FN / (FN + TP),

where TP and TN are the numbers of correctly classified drowsy and awake samples, and FP and FN are the numbers of misclassified awake and drowsy samples, respectively. Their relationship is shown in Table 2. AC indicates the overall detection accuracy for both positive and negative patterns. We evaluate the proposed method in a confusion matrix based on Table 2. The columns refer to the experts' decisions, and the rows represent the classification performed by the proposed method. The number of testing samples presented at each level is displayed in the last row.
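For reference, the following sketch computes the three performance measures; the integer counts are inferred from the reported percentages (treating "drowsy" as the positive class) rather than read directly from Table 3.

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, false-positive rate and false-negative rate as used in the
    evaluation: AC over all samples, FP relative to the true-awake samples,
    FN relative to the true-drowsy samples."""
    ac = (tp + tn) / (tp + tn + fp + fn)
    fp_rate = fp / (fp + tn)   # fraction of awake samples flagged as drowsy
    fn_rate = fn / (fn + tp)   # fraction of drowsy samples missed
    return ac, fp_rate, fn_rate

# Counts inferred from the reported percentages (92 awake, 99 drowsy samples).
tn, fp = 65, 27      # 70.65 % and 29.35 % of the 92 awake samples
tp, fn = 84, 15      # 84.85 % and 15.15 % of the 99 drowsy samples
print(detection_metrics(tp, tn, fp, fn))  # ~ (0.78, 0.29, 0.15)
```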
Usually, experiments focus only on the accuracy rate, while paying no attention to missed alarms. However, we should find a good compromise between a high accuracy rate for the fatigue level and a low number of missed alarms. Indeed, a large number of false alarms may discredit the system, but a high rate of missed alarms is potentially risky for the driver.
Similarly, we applied the proposed online detection method to all samples of the six subjects. The fatigue levels are divided into two levels, awake and drowsy, represented by the output labels "0" and "1". The testing results are shown in Table 3. There are 191 samples in the database, of which 92 are awake and 99 are drowsy. The system achieved 78.01% accuracy over all samples, with 70.65% true negatives and 29.35% false positives. It also shows 84.85% true positives, with 15.15% false negatives. Table 3. Confusion matrix of detecting drowsiness levels "0" and "1".
Discussion
The proposed method uses SWA data to detect drivers' fatigue levels. The mental state of a driver is directly reflected in his/her driving behavior, the most frequent and sensitive component of which is steering wheel operation. The online detection method allows the detection of two fatigue levels, "awake" and "drowsy", under real driving conditions. The detection system achieves a good performance, with 78.01% of fatigue levels correctly classified in about 15 h of driving on real roads. As a result, the information extracted from SWA data promotes the performance of fatigue detection with the robustness and reliability verified in real-road experiments. The experiments confirm that SWA data are closer to the real driving conditions and can better reflect the mental state and operating behaviors of drivers. Meanwhile, the detection system shows a tolerable rate of false alarms or false positives (29.35%) and missed alarms or false negatives (15.15%). The system is also robust because the evaluation criteria (see Table 1), decided by the three experts and cross-checked with various assessments, are based on facial video information rather than SWA information, including facial expressions, head positions, and mental state.
The online method proposed in this paper extracts real-time ApEn features from the SWA time series, then applies a self-adaptive linear segmentation to them, and finally calculates the warping distance between the linear feature series in the present time frame and the reference time frame to determine the fatigue state. The ApEn values extracted from SWA reflect the variation features of the driver's operating modes. When the driver is tired, his/her capacity to act is reduced, and the time frame of the SWA series transformation becomes larger; in other words, fewer new patterns emerge in the activity time series. As shown in Figure 3, when the driver is very tired, the ApEn of the SWA series is small, and the overall distribution of the ApEn shows large fluctuations. To further explore the distinctive features between the drowsy and waking states, we use linear fitting to reflect the local changing trend of the ApEn series and self-adaptive segmentation to represent the morphological distribution features of the time series. As shown in Figure 4, the red lines reliably reflect the local variation features of the time series and the inherent rules of the data, and accordingly demonstrate the state modes of the time series more directly. Besides, compared with fixed-length linear segmentation, the self-adaptive segmentation proposed in this paper is more suitable for the online testing of fatigue states. Fixed-length segmentation is commonly applied in fatigue testing; however, because of the complexity of driving conditions, the data lengths within a fixed-length time frame may vary, which is why the Euclidean distance can hardly measure the similarity of time series satisfactorily under real driving conditions. The dynamic time warping distance acquires an optimal warping route by adjusting the correspondence between elements at different points of time in the series, so it can better measure the relationship between the time series. Evidently, this method satisfactorily solves the problem caused by unequal lengths of time series. This paper adopts the dynamic warping distance between series to measure their similarity. If the distance is small, the tested series is deemed highly similar to the reference series; i.e., the two samples are in the same fatigue state. As seen from Figure 5, the warping distance between the "awake" samples and the reference is small, because the reference sample always corresponds to the awake condition under real driving conditions, obtained through the unsupervised learning in Equation (6). Conversely, the warping distance between the series of a drowsy sample and that of the reference is large, so the driver shows a fatigue level different from that of the reference sample. Considering that driving habits may vary, Equation (7) proposes a judgment threshold that is suitable for all samples in this paper. For a larger sample, this threshold may need further verification or improvement.
However, as there are not many existing studies reporting SWA-based fatigue detection in real road experiments, we are not able to demonstrate superiority over other methods directly. We only compare with Qu's work [36] in order to evaluate our results. Qu performed fatigue level detection with SWA data in a laboratory environment, obtaining an accuracy rate of 86.1%, higher than the 78.01% of this paper; the average rate of false positives was 12.09%, lower than the 29.35% in the current study; while the rate of false negatives in Qu's work was 18.47%, higher than the 15.15% of our work. In fact, SWA data retrieved from real driving conditions are extremely difficult to analyze compared to those from a laboratory environment, considering that random vibration may cause the SWA series to drift dramatically. The irregular drifting present in the original data can be mistaken for spurious statistical features. Therefore, although the proposed detection method shows slightly poorer performance than the simulation-based one in the laboratory, it has greater value in engineering applications.
Conclusions and Future Work
This paper presents an on-line drowsiness detection system based on SWA information retrieved from sensors located in fixed positions, tested with 14.68 h real road driving. The proposed system firstly extracts ApEn from fixed sliding windows on a SWA time series. Then, it linearizes the ApEn features series through adaptive piecewise linear fitting within a given deviation. Following that, the system calculates the warping distance between the linear features series of the sample data. Finally, this system determines the drowsiness state based on the warping distance. The experimental data contains two fatigue levels: "awake" and "drowsy", according to the facial expressions, head position, and mental state of a driver, which have been examined by related experts.
The experiment obtained an average accuracy rate of 78.01%, with 29.35% false alarms (false positives) and 15.15% missed alarms (false negatives) for two-class fatigue detection. As a result, this paper confirms that the proposed online method is valuable for applications in avoiding traffic accidents caused by driving while fatigued. Previous work [36] has shown that when SWA is combined with vehicle state information, such as yaw angles and lateral positions, it can produce a high detection rate in a simulation laboratory. Inspired by this, we will combine these types of information under real driving conditions to improve the detection rate of driver fatigue as future work. This could provide new information for the detection of driver fatigue. | 6,171.2 | 2017-03-01T00:00:00.000 | [
"Computer Science"
] |
Stripe noise removal in conductive atomic force microscopy
Conductive atomic force microscopy (c-AFM) can provide simultaneous maps of the topography and electrical current flow through materials with high spatial resolution, and it is playing an increasingly important role in the characterization of materials being investigated for novel memory devices. However, noise in the form of stripe features often appears in c-AFM images, challenging the quantitative analysis of conduction or topographical information. To remove stripe noise without losing interesting information, sixteen destriping methods are investigated in this paper, including three additional models that we propose based on the stripe characteristics and thirteen state-of-the-art destriping methods. We have also designed a gradient stripe noise model and obtained a ground truth dataset consisting of 800 images, generated by rotating and cropping a clean image, and created a noisy image dataset by adding random intensities of simulated noise to the ground truth dataset. In addition to comparing the results of stripe noise removal visually, we performed a quantitative image quality comparison using the simulated datasets and 100 images with very different strengths of simulated noise. All results show that the Low-Rank Recovery method has the best performance and robustness for removing gradient stripe noise without losing useful information. Furthermore, a detailed performance comparison of Polynomial fitting and Low-Rank Recovery at different levels of real noise is presented.
In recent years, conductive atomic force microscopy (c-AFM) has been widely used for imaging local conductivity in materials [1][2][3] and is playing an increasingly important role in the optimization of materials for their use in novel microelectronic devices, including ferroelectric tunnel junctions 4, resistive switching memories 5 and memristors 6. This technique can be extremely useful, ranging from detecting defects of a couple of atoms in size to measuring the different resistance states of a memory. In addition, c-AFM has been key to investigating conduction paths in multiferroic materials. These are self-organized networks of topological defects that form periodic patterns and can carry electrical currents, acting as a dense mesh of nanoscale conducting paths 7, and are believed to hold promise for future electronic devices 8. To explore the properties of conduction paths in the samples, see Fig. 1a as an example, one needs to identify and extract these paths in the conduction maps, which are mainly collected by c-AFM 7,9,10. The metallic tip of the c-AFM's cantilever (with an end diameter of about 20 nm), acting as an electrode, is brought in contact with the sample surface and it applies a voltage at that location by means of an electrical circuit, which also returns the electrical current measured across the sample (vertically, if the second electrode is located below the sample). After scanning all the points of the sample surface within the scan area, a conduction map is produced, from the local values and variations of which the conductivity/resistivity of the materials can be inferred.
However, many artifacts, mainly stripe noise, occur in c-AFM measurements, especially in lateral measurements. Compared with vertical measurements, where charge flows from the top to the bottom electrode, lateral measurements 10 use a second (fixed) top electrode to achieve a charge flow parallel to the surface, between the fixed electrode and the scanning tip. Therefore, the generation of stripes could be related to charge accumulation and drift, since the direction of this stripe noise also coincides with the direction of the current from the electrode side. There are three conduction maps measured laterally on the same sample in the experiments of Rieck et al. 10, where the fixed electrode side is at the right side in Fig. 1. The conduction paths in the conduction map (a) are clearly visible as the measuring area is only a few micrometres away from the electrode edge. The measuring area of (b) and (c) is almost the same, but at the edge of the electrode side there is a lower quality of conduction paths, as very high currents are generated as soon as the conductive tip touches the electrodes at this voltage. In (b), the stripes cross the entire image, and change values as they pass through the higher-conductivity paths (conduction paths). Due to the lateral electrode geometry used, the stripe noise in the image shows an intensity gradient that matches the current direction from the fixed electrode (which occupies the entire right side of the image) to the tip. Image (c) appears to be cleaner, but the current values on the electrode side are higher and display some glitches, so the conduction paths at the edge are not as clear as in (a). It is therefore possible that the extremely strong potential differences lead to stripe noise. There may be another cause for the occurrence of stripes that is not related to conductivity, since c-AFM measures not only the electrical current flow but also the topography at the tip contact point at the same time. A main cause of the stripe noise in c-AFM is due to a changing tip-sample interface, which could be caused by contaminants such as dirt particles adhering to the tip, or irregular edges or protrusions on the sample surface 11, or by "parachuting" artifacts caused by high scanning speed 12,13. The latter is not very likely in our case, as scanning speeds are low.
Finding the most robust and effective method to remove the stripes is important in c-AFM image processing. Stripe noise significantly impairs the observation and hinders subsequent analysis of conduction components. Moreover, the difficulty in obtaining clean images caused by the high sensitivity of the collection and the challenging experimental conditions means that we have to make better use of the existing noisy c-AFM images by applying denoising methods. Finally, analyzing stripe noise removal methods could help in investigating the physical source of the noise, which would enable us to find more effective removal methods. We may also be able to find correlations between some scan parameters and the noise model 13,14. Adjusting the scan parameters can help to reduce noise generation during scanning.
Among the most advanced or commonly used methods for removing stripe noise from AFM images are Destripe2 15, VSNR 16 and algorithms from the tool Gwyddion 14,17, which have proven to be effective in removing stripe noise from scanning electron microscope modalities. Gwyddion is an important and popular modular program for processing and visualizing AFM images. This makes Gwyddion the first choice for surface physicists when it comes to image processing tasks that involve denoising the images they measure. Work on denoising AFM images is usually compared with the results of Gwyddion's algorithms 15,16.
Another method worth discussing but not used in the field of AFM image processing is SNRWDNN (Stripe Noise Removal Wavelet Deep Neural Network) 18, which was developed for stripe noise removal only. In this method a directional regularizer function is designed to separate the details of the scene from the stripe noise and prevent irregular stripes in the estimation of the clean image. This means that this Deep Learning model could work very well in our particular case of gradient stripe noise, even though it was not developed specifically for AFM images.
Finding a good solution to remove stripe noise is a time-consuming and challenging task for the scanning probe microscopy user community. Even though the number of open-source software tools like Gwyddion is limited, it would be a time-consuming task to try out all of the denoising functions of the c-AFM image processing software tools and the state-of-the-art c-AFM image denoising algorithms, which is, indeed, not typically done by surface physicists.
There are many robust methods that are not yet used in the field of AFM image processing, but have already proven to be effective and are well developed in other applications for stripe noise removal. These include frequency-based algorithms, statistics-based methods, polynomial fitting algorithms, and difference correction algorithms. As the noise intensity varies, the optimal destriping methods change accordingly. To handle this issue we may consider optimization-based methods. For example, in the field of remote sensing, Group Sparsity-based Regularization (GSR) and Unidirectional Total Variation (UTV) as well as low-rank matrix recovery-based methods have been developed and fully compared 19,20. These classical and effective optimization-based methods are not yet used for processing noise in AFM images.
Therefore, the field of c-AFM processing is in great need of a review of destriping methods and extensive experimentation in order to provide the most reliable methods and tools, and to give recommendations for the use of image processing methods that are very effective but have not been used in c-AFM image processing so far.
In this study, we compare a total of 16 different artefact removal algorithms for c-AFM conduction maps. These include 13 state-of-the-art destriping methods and three additional optimization-based methods that we tailored towards the characteristics of stripe noise. After extensive experiments on natural and simulated noisy images, we determined the best processing method. The rest of the paper is structured as follows. In Sect. 'Methods', we present three different assumptions and corresponding models based on the stripe noise characteristics. Extensive experiments are presented and discussed in Sect. 'Experiments', including qualitative and quantitative visual comparisons of image quality. In the qualitative comparison, we first evaluate the 16 different methods by comparing the destriping results on a natural noisy image. Next, we compare all methods by computing destriping results of simulated images generated by our designed noise model to evaluate the consistency. For this purpose, simulated noise was added to a ground truth image to enable a quantitative comparison of image quality. To comprehensively test the robustness of the 16 methods, two types of experiments were conducted: (1) a simulated image dataset with the noise closest to the real noise, and (2) a series of images with very different noise intensities. In Sect. 'Discussion', we discuss the experimental results, including computing time, machine requirements and required parameters, as well as the design of the methods. Finally, we summarize this study and suggest future work in Sect. 'Conclusion'.
Methods
In this section, we make three different assumptions on the nature of the stripe noise and the clean image, and introduce three recovery models involving the observed image, the clean image and the noise.
Before presenting our assumptions let us make some observations. We can see very clear conduction paths in Fig. 1a. The conduction paths are the bright structures in the image, some of which are meandering and twisted, and some are connected in rows. However, in Fig. 1b we cannot discern the conduction paths clearly because of the stripe noise. The model assumptions for removing these are as follows:

1. The noise is low rank. The stripes appear as a series of closely spaced, parallel vertical lines extending from right to left, with different intensity and thickness but similar properties. The rank of this noise pattern is relatively low, as there are only a few distinct patterns. This means that the noise can be decomposed into a small number of independent patterns. It appears as relatively uniform and closely spaced stripes, while the main features of the conduction paths remain recognisable.

2. The noise is group sparse. Certain areas or regions of the observed image are affected by the stripes, while other regions are unaffected or have minimal noise. Also, the noise manifests itself as groups that occur in specific patterns with varying density, spacing, and thickness, rather than uniformly affecting the entire image, so that we can visually distinguish it from the surrounding clean regions.

3. The clean image has minimal unidirectional total variation. In the observed image, the stripe noise causes abrupt transitions and discontinuities in the horizontal direction, increasing the UTV of the image. We can therefore use total-variation regularization, a popular method for image denoising. Total-variation regularization minimizes the UTV of the image by promoting sparsity in the gradient or edge information. By applying this regularization, the algorithm can effectively remove the stripes while preserving the edges and important features of the conduction paths. Here we consider both the horizontal and the vertical direction for the minimal UTV.
Based on these three assumptions, we present three classical models that have never been used for denoising AFM images.The models are described below.
Low-rank recovery
In image destriping via low-rank recovery (LRR) 21, the observed image is modeled as the sum of a clean image and stripe noise which is of low rank. Let N denote the observed image, M the clean image, and L the low-rank stripe noise. That is, assume that

N = M + L.     (1)

Image destriping via LRR is performed as follows: first, obtain an estimate L* of the low-rank stripe noise from the observation N, then obtain an estimate M* of the clean image via

M* = N − L*.     (2)

The stripe-noise estimate L* is computed via the optimization problem

L* = argmin_L (1/2) ||N − L||_F^2 + λ ||L||_*,     (3)

where the notation ||L||_* denotes the nuclear norm of L, i.e., the sum of the singular values of L, and the regularization parameter λ controls the trade-off between the two objectives of fitting and low-rank regularization, respectively.

The optimization problem Eq. (3) can be transformed into a well-known form via singular value decomposition. That is, suppose the singular value decompositions of L* and N, respectively, are

L* = U S* V^T     (4)

and

N = U S V^T.     (5)

Then the optimization problem Eq. (3) is equivalent to

S* = argmin_X (1/2) ||S − X||_F^2 + λ ||X||_1.     (6)

It is worth noting that the norm of the regularization term in Eq. (6) has become the l1 norm, as compared with the nuclear norm in Eq. (3). Hence, Eq. (6) is the well-known shrinkage or soft-thresholding formulation, of which the closed-form solution is as follows: the entry (i, j) of S* is given by

S*(i, j) = max(S(i, j) − λ, 0),     (7)

the singular values being nonnegative. In conclusion, the method of destriping via LRR is performed as follows. Given an observed image N and a regularization parameter λ:

1. compute the singular value decomposition of the observation N, as in Eq. (5), obtaining its singular value matrix S;
2. perform the soft-thresholding operation on the matrix S with the parameter λ via Eq. (7);
3. recover the low-rank stripe noise estimate L* via Eq. (4);
4. compute the clean-image estimate M* via Eq. (2).
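A compact Python sketch of the LRR destriping procedure (steps 1-4 above) is given below; the toy image, the stripe pattern and the value of the regularization parameter are illustrative assumptions only.

```python
import numpy as np

def destripe_lrr(noisy, lam):
    """Low-rank recovery destriping sketch: soft-threshold the singular
    values of the observation to estimate the low-rank stripe component,
    then subtract it. lam is the regularization parameter of Eq. (3)."""
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)          # singular-value soft thresholding
    stripe_estimate = (U * s_shrunk) @ Vt        # L* = U S* V^T
    return noisy - stripe_estimate               # M* = N - L*

# Toy usage: rank-1 vertical stripes added to a smooth synthetic image.
rng = np.random.default_rng(1)
clean = np.outer(np.linspace(0.0, 1.0, 128), np.ones(128))
stripes = np.outer(np.ones(128), rng.normal(scale=0.3, size=128))
recovered = destripe_lrr(clean + stripes, lam=5.0)
```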
Group sparse recovery
In destriping via group sparse recovery (GSR) 22, the observed image is modeled as the sum of the clean image and stripe noise that is group sparse or column sparse. Suppose N denotes the observed image, M the clean image, and G the group sparse stripe noise; then we assume that

N = M + G.     (8)

Image destriping via GSR is performed as follows: first recover the group sparse stripe noise estimate G* from the observation N, then obtain an estimate M* of the clean image via

M* = N − G*.     (9)

The estimate G* of the stripe noise is recovered via the optimization problem

G* = argmin_G (1/2) ||N − G||_F^2 + µ ||G||_{2,1},     (10)

where the notation ||W||_{2,1} denotes the l2,1 norm of W, i.e.,

||W||_{2,1} = Σ_j ||W_j||_2,     (11)

in which W_j denotes the j-th column vector of the matrix W, and the regularization parameter µ controls the trade-off between the two objectives of fitting and group sparse regularization.

The optimization problem Eq. (10) has a closed-form solution 23, which is computed as follows: the j-th column vector of the matrix G* is given by

G*_j = max(1 − µ / ||N_j||_2, 0) N_j.     (12)

In conclusion, the method of destriping via GSR is performed as follows. Given an observation image N and a regularization parameter µ:

1. recover the group sparse stripe image estimate G* with the parameter µ via Eq. (12);
2. compute the clean image estimate M* via Eq. (9).
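The closed-form solution of Eq. (12) amounts to a column-wise group soft-thresholding of the observation, as in the following sketch; the choice of µ is left to the user.

```python
import numpy as np

def destripe_gsr(noisy, mu):
    """Group-sparse recovery destriping sketch: the closed-form solution of
    Eq. (10) shrinks each column of the observation as a group, so columns
    dominated by stripes are captured in G* and subtracted."""
    col_norms = np.linalg.norm(noisy, axis=0)
    # Column-wise (l2,1) soft thresholding: G*_j = max(1 - mu/||N_j||, 0) N_j
    scale = np.maximum(1.0 - mu / np.maximum(col_norms, 1e-12), 0.0)
    stripe_estimate = noisy * scale[None, :]
    return noisy - stripe_estimate
```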
The optimization problem Eq. ( 10) has a closed form solution 23 , which is computed as follows: the j th column vector of the matrix G * is given by: In conclusion, the method of destriping via GSR is performed as follows.Given an observation image N and a regularization parameter µ : 1. recover the group sparse stripe image estimate G * with the parameter µ via Eq.( 12); 2. compute the clean image estimate M * via Eq.( 9).
Unidirectional total variation minimization
When destriping via UTV minimization 24, the observed image is modeled as the sum of the clean image, which is supposed to have minimum UTV, and stripe noise. Suppose N denotes the observed image, M the clean image, and L the noise; then we assume that

N = M + L.     (13)

The clean image estimate M* is obtained via the optimization problem

M* = argmin_M (1/2) ||N − M||_F^2 + λ ||M||_UTV,     (14)

where λ is the regularization parameter and the notation ||M||_UTV denotes the unidirectional total variation of M. The discrete forms of the horizontal and vertical UTV (denoted as UTV1 and UTV2) are defined as

UTV1(M) = Σ_{i,j} |M(i, j + 1) − M(i, j)|     (15)

and

UTV2(M) = Σ_{i,j} |M(i + 1, j) − M(i, j)|.     (16)

The modification is based on the observation that stripe noise has only one direction.
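The sketch below illustrates one simple way to approach the UTV minimization numerically, by plain gradient descent on a smoothed version of the horizontal-UTV objective; the smoothing constant, step size and iteration count are assumptions, and the original works use more elaborate solvers.

```python
import numpy as np

def destripe_utv(noisy, lam=1.0, step=0.1, iters=500, eps=1e-3):
    """Rough UTV destriping sketch: gradient descent on
        0.5 * ||M - N||_F^2 + lam * sum sqrt((D_h M)^2 + eps),
    where D_h takes horizontal differences (the direction in which the
    vertical stripes create abrupt transitions)."""
    M = noisy.astype(float).copy()
    for _ in range(iters):
        dh = np.diff(M, axis=1)                 # horizontal differences D_h M
        w = dh / np.sqrt(dh ** 2 + eps)         # smoothed derivative of |t|
        grad_tv = np.zeros_like(M)              # gradient of the UTV penalty
        grad_tv[:, :-1] -= w
        grad_tv[:, 1:] += w
        grad = (M - noisy) + lam * grad_tv      # full objective gradient
        M -= step * grad
    return M
```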
Experiments
To find the most robust and efficient method for removing stripe noise from c-AFM images, we developed a noise model and performed intensive comparisons on a noisy c-AFM image and simulated noisy images. All the conduction maps in this paper are from the experiments on the same sample reported by Rieck et al. 10. The comparison of the 16 selected methods includes (1) the three models from the last section, (2) all denoising methods using line-by-line scanning in Gwyddion, (3) a deep learning method developed and trained for stripe noise removal only, and (4) two state-of-the-art denoising methods developed for AFM images. This section is divided into three subsections: "Method comparison", "Visual comparison", and "Quantitative image quality comparison".
In the Method Comparison subsection, we briefly explain each method and the reasons why we selected it, and propose the noise model. The next subsection concerns the visual comparison of the destriped image results. We analyze the results of natural and simulated noise removal in the Sect. "Quantitative image quality comparison", using SSIM (Structural Similarity Index) 25 and PSNR (Peak Signal to Noise Ratio) 26. The first experiment uses an image dataset with fixed noise weights. In this experiment, we create a dataset of 800 ground truths by flipping and cropping a clean image and add random simulated stripe noise to obtain a corresponding dataset of 800 simulated noisy images. The second "Quantitative image quality comparison" experiment uses a set of images that have very different noise weights. We obtain boxplots of the 800 PSNR and SSIM results from the first experiment, and SSIM and PSNR curves from the second experiment, both leading to the same conclusion regarding the best denoising method.
Method comparison
In this subsection, we briefly describe each method and the reasons why we included it in the comparison.
Methods from the SPM image processing tool
The Scan Line Artefacts functions in Gwyddion 14 are used to flag and correct for various artefacts in AFM data related to line-by-line acquisition.It is important to include these functions for eliminating stripe noise caused by line-by-line scanning in our comparison.The 9 different Scan Line Artefacts algorithms in Gwyddion include statistical correction algorithms (finding a representative statistic for each scanline such as Median, Mode, and Trimmed Mean, and then subtracting it from the corresponding scanline); difference correction algorithms (Median Difference, Matching, Trimmed Mean Difference and Facet Level Tilt); a Polynomial Fitting Algorithm (which is mentioned in Section 'Introduction' as the best method to remove leveling artifacts); and a Defect Marking Algorithm (based on user-defined criteria).
Methods for microscopic image processing only
We add VSNR (Variational Stationary Noise Remover) 16 and DeStripe 15 to our comparison. VSNR uses a simple noise model and solves a convex programming problem through numerical optimization. It has shown improved image quality in various denoising applications, including microscopic imaging modalities. DeStripe is an advanced denoising protocol developed for AFM biomolecular imaging. It effectively removes stripe noise by applying a divide-and-conquer approach in the frequency domain, preserving edge sharpness and enhancing molecular feature visualization, leading to better interpretation of AFM images.
Deep learning model developed only for stripe noise removal
Unlike other existing destriping methods, the deep learning model SNRWDNN 18 was developed specifically for denoising stripes, using the HDWT (Horizontal Discrete Wavelet Transform) to represent the directional properties of the stripe components. It uses complementary information between different sub-bands to predict the strength and distribution of the noise. A directional regularizer function was designed to separate the scene details from stripe noise and to prevent irregular stripes in the estimation. This means that this deep learning model could perform very well in our gradient case. Therefore, we include it for comparison in our experiments.
We use the source code of the current stable version Gwyddion (version 2.63, Code release time: 2023-06-13) and use Pygwy to bind Gwyddion's objects, methods and functions to write our destriping comparison modules.
Further details on the methods used in our comparison can be found in the cited works. Comparing the noisy image in Fig. 1b and the conduction map with clear conduction paths in Fig. 1a, it can be seen that the stripe noise is strongest at the right, then fades quickly within a short distance and seems to disappear almost completely near the opposite edge. According to these noise characteristics, the gradient noise model of Eq. (18) combines a group-sparse noise pattern A, convolved with a Gaussian kernel G (with * denoting a 2D convolution operation), and a left-to-right intensity gradient; b and c are parameters whose optimal values are found by fitting one random column of the real noise. Simulated noise with different weights is generated and added to the ground truth Fig. 1a, giving the simulated image defined by Eq. (19). In (e), the red curve and the blue curve fit very well; they are two random columns of pixel values from image (b) and the natural noisy image Fig. 1b, respectively. This means that the noise model Eq. (18) and the noise level of (b) match well with the real noise. In the "Visual comparison" section, we use Fig. 2b as the image for the denoising comparison because the curves and markers show that it is very close to the real noisy image.
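The generator below sketches how such gradient stripe noise can be simulated: a column-sparse pattern is blurred by a Gaussian kernel and modulated by a gradient that decays away from the electrode side. The exponential form of the gradient and all parameter values are assumptions, not the fitted values behind Eq. (18).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulated_gradient_stripes(shape, weight=3.0, density=0.15, sigma=1.5,
                               b=5.0, c=0.05, rng=None):
    """Illustrative gradient stripe noise in the spirit of Eq. (18):
    group-sparse pattern A, blurred by a Gaussian kernel G, modulated by a
    gradient that is strongest at the right (electrode) edge."""
    rng = np.random.default_rng(rng)
    rows, cols = shape
    # Column-sparse stripe seed: a few active columns with random amplitudes.
    active = rng.random(cols) < density
    A = np.zeros(shape)
    A[:, active] = rng.normal(size=(rows, int(active.sum())))
    # Blur with a Gaussian kernel (the 2-D convolution A * G).
    blurred = gaussian_filter(A, sigma=sigma)
    # Gradient: strongest at the right edge, decaying towards the left.
    x = np.linspace(0.0, 1.0, cols)
    gradient = np.exp(b * (x - 1.0)) + c
    return weight * blurred * gradient[None, :]

# Simulated noisy image = ground truth + simulated noise (Eq. (19) in spirit).
ground_truth = np.zeros((256, 256))
noisy = ground_truth + simulated_gradient_stripes(ground_truth.shape, rng=0)
```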
From (a)-(c) and their red frames, we can see that as the noise weight increases, the conduction paths become less and less clear.
We used PSNR and SSIM, two commonly used metrics to quantify the correspondence between images.PSNR focuses on the signal differences while SSIM focuses on the similarity between the original and distorted images.In this work, we focus mostly on the SSIM value in the analysis of the denoising effect, while PSNR is only used as an auxiliary analysis.SSIM provides a more meaningful assessment of image quality and better fulfills our goal of detecting conduction paths.
The SSIM index for quality assessment is based on the calculation of three factors: luminance (l), contrast (c) and structure (s). The overall index is a multiplicative combination of these three factors:

SSIM(x, y) = l(x, y) · c(x, y) · s(x, y),

where l, c and s are calculated from the local means and standard deviations of the images x and y, respectively. Using the common assumptions, the equation simplifies to

SSIM(x, y) = [(2 µ_x µ_y + C1)(2 σ_xy + C2)] / [(µ_x^2 + µ_y^2 + C1)(σ_x^2 + σ_y^2 + C2)],

where µ_x, µ_y, σ_x, σ_y are the local mean values and standard deviations of the images x and y, σ_xy is their local cross-covariance, and C1, C2 are small stabilizing constants. SSIM values are between 0 and 1, where 1 means a perfect match between the two images that are compared (we express SSIM values as percentages below).
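In practice, SSIM and PSNR can be computed with standard library routines, as in the sketch below; passing data_range explicitly is needed because conduction maps are floating-point arrays, and the synthetic arrays here only stand in for real data.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare(ground_truth, denoised):
    """SSIM (as a percentage, as in the text) and PSNR between two images."""
    data_range = ground_truth.max() - ground_truth.min()
    ssim = structural_similarity(ground_truth, denoised, data_range=data_range)
    psnr = peak_signal_noise_ratio(ground_truth, denoised, data_range=data_range)
    return 100.0 * ssim, psnr

# Example with synthetic arrays standing in for a conduction map.
gt = np.random.default_rng(2).random((128, 128))
noisy = gt + 0.1 * np.random.default_rng(3).normal(size=gt.shape)
print(compare(gt, noisy))
```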
Visual comparison
In this subsection, we perform a visual comparison of the 16 methods for removing noise on the natural noisy image in Fig. 1b and a simulated image.
Experiments with a natural noisy image
The results for the natural noisy image of Fig. 1b are shown in Fig. 3. Figure 3c-r contain the denoising results of the 16 methods. Based on the least remaining visible noise, the four best methods are selected and enlarged in (o)-(r), which allows us to look at the conduction paths in detail and make further observations. Only LRR, Poly, DS2 and GSR are able to show the texture of the conduction paths more clearly. LRR and Poly are the only two that remove most of the stripes while recovering most of the conduction paths. For the red frames in (q) and (r), LRR is better, as more details of the path are recovered at the edge. In Fig. 3o, we can see that GSR removes some conduction paths but some stripe noise remains. While LRR perfectly removes all the stripe noise and restores the whole conduction paths, GSR, DS2 and Poly remove most of the noise and preserve most of the conduction paths. The images of most other methods (c)-(n) are still as dark as the original image, and not many stripes are removed. Although the Modus result (e) slightly brightens the conduction path, some streaks cross the entire image.
Experiments with a simulated noisy image
We used our noise model to generate noisy images for the second experiment. The denoising results for the simulated images are shown in Fig. 4. Except for VSNR, the image denoising results and SSIM values in Fig. 4 are consistent with the denoising results of the natural image in Fig. 3: LRR, Poly, DS2 and GSR are the best four methods, with LRR the best. The VSNR result (l) looks different because we used a different input filter in the experiment with denoising the natural noisy image; this is explained under Parameter settings in the section 'Discussion'. We see that LRR is the only method whose SSIM score is above 90%. The SSIM value of Poly is close to 90%, and DS2 and GSR more than double the SSIM value of image (b). The images denoised by FLT, WNN, VSNR, UTV1 and UTV2 have the lowest SSIM values. Most other methods show the same performance as in Fig. 3: their results are dark, and, as before, the Modus result is brighter but with some stripes crossing the image. The larger frames in the bottom left corners of the images contain double-sized and contrast-enhanced versions of the contents of the small red frames at the bottom right corners (the same applies in Fig. 4). The image results in Fig. 4 are comparable with those in Fig. 3 (except for VSNR (l)).
Quantitative image quality comparison
There are two types of experiments in this subsection: (1) experiments with a large dataset of simulated noisy images that have a similar SSIM value as the real noisy image; and (2) experiments with simulated noisy images with 100 different noise levels, which allow us to evaluate the robustness of the methods at different noise levels.
Experiments with the simulated noisy dataset
Each image in the simulated noisy dataset has a similar SSIM value to the natural noisy image, obtained by manipulating the noise weights. There are a total of 800 pairs of ground truth images and corresponding simulated noisy images, with an average SSIM value of 36.57%. Figure 5 shows the SSIM and PSNR boxplots corresponding to the denoising results and their ground truths. The average SSIM values of GSR, DS2, Poly and LRR in Fig. 5a are the highest, at 66.42%, 69.23%, 85.5% and 90.43%, respectively. UTV1, FLT, RS and VSNR have the lowest, at 34.82%, 29.18%, 26.37% and 21.49%. The SSIM boxes of all methods in (a) are relatively short, which means that each method gives comparable results across the images in the dataset. Comparing the average values shows that the SSIM results are consistent with the visual comparison. As we can see from Fig. 5b, LRR also outperforms the other methods when using the PSNR metric. In (a), the Poly method ranks second to LRR, and is much better than the other methods.
Experiments with simulated images of different noise strengths
To cover more general denoising situations, 100 images with 100 distinct noise levels are simulated and denoised, varying the noise weight from 0 to 100. The SSIM range of the resulting dataset is from 5.58% to 72.01%. The results are shown in Fig. 6, which shows the SSIM and PSNR curves between the denoised images and the ground truth image for the 16 denoising methods.
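The sketch below illustrates how such a noise-weight sweep can be set up following Eq. (19). Because Eq. (18), the stripe-noise model itself, is not reproduced in this excerpt, a simple column-wise offset pattern is used as a stand-in for the simulated noise, and the ground-truth image, image size and noise amplitude are arbitrary placeholders rather than the values used in this work.

```python
import numpy as np
from skimage.metrics import structural_similarity

def make_stripe_noise(shape, rng):
    """Placeholder vertical stripe noise: one random offset per column
    (a stand-in for the paper's noise model, Eq. (18))."""
    rows, cols = shape
    return np.tile(rng.standard_normal(cols), (rows, 1))

rng = np.random.default_rng(0)
ground_truth = rng.random((520, 520))         # stand-in for the c-AFM ground truth

ssim_curve = []
for weight in range(0, 101):                  # noise weight swept from 0 to 100
    noise = 0.01 * make_stripe_noise(ground_truth.shape, rng)
    noisy = noise * weight + ground_truth     # Eq. (19): noise * weight + ground truth
    data_range = float(noisy.max() - noisy.min())
    ssim_curve.append(structural_similarity(ground_truth, noisy, data_range=data_range))

# each 'noisy' image would next be passed through a destriping method, and the
# SSIM/PSNR of its output against 'ground_truth' plotted as a function of the weight
```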
From the SSIM curves, we can see that most of the results are consistent with the conclusions from the visual comparison and the experiment with the simulated noisy dataset.The best methods and the worst methods are almost the same.
In the PSNR results, LRR performs below Poly at most noise levels. This is probably because LRR tends to brighten the image, to which SSIM is less sensitive. Although Figure 3 shows that the Poly result is darker than the LRR result, the results are opposite for most of the other noise levels. The noise weight of Fig. 3 is 3, and we can see in Fig. 6b that the LRR PSNR value is still higher than that of Poly there because the LRR result is brighter than Poly's. Most of the remaining image results with higher noise weight are brighter with Poly than with LRR, but the structure of the conduction paths in these Poly results is not as obvious as in the LRR results. For this reason, the SSIM and PSNR results of LRR and Poly are inconsistent.
Discussion
We first summarize the results of all the experiments in this paper, in particular the comparisons between the denoising results of LRR and Poly. Then we compare all methods in depth based on the method designs, computational efficiency (computation time and machine requirements), and input parameters. For the quantitative results, we mainly look at SSIM in this section.
Results analysis
Firstly, the results of the different comparisons, whether visual or quantitative, are generally consistent. LRR has the best overall performance on the real and similar simulated noisy images in terms of SSIM. Poly is a fairly close second and, in terms of PSNR, it outperforms LRR in Fig. 6b. Overall, the performances of these two methods are quite similar. The results in Fig. 2 show that our noise model matches the real noise well.
Computational efficiency
In terms of computational efficiency, as we can see in Table 1, the computing times of almost all methods are short, less than 0.1 seconds for the 520 × 520 natural image, except for the VSNR method, which took 2.92 seconds. Note that the WNN method requires a GPU for fast performance.
Method designs
In the "Method" Section, LRR and GSR were proposed as denoising models for processing c-AFM images based on the data redundancy features of stripe noise.Of the two, LRR is apparently more effective in removing the stripe noise.
In previous work, polynomials were among the best methods for removing gradient stripe noise, but our new results show that LRR generally has an edge.
DS2 was specially developed for the removal of stripe noise in AFM images. Most results show that DS2 ranks third among the 16 methods. DS2 requires three topographic measurement parameters as input to provide the frequency information used to enhance the features. The visual comparison showed that DS2 performs well but is inferior to LRR and Poly, as its denoised images still show some strong stripes.
Parameter settings
The input weights for WNN have been trained for stripe noise, so no parameter needs to be entered manually. Most other methods require one or more parameters of the sample, which can be determined by trial and error. UTV and LRR require only a single input parameter, which was fixed in all experiments; its optimal value was found by trial and error in the first comparison. As we can see in Table 1, all methods of Gwyddion and GSR require the scan direction. Poly, TM and TMD only require one additional parameter.
VSNR does not provide a specific method for determining the optimal input parameters. However, it is mentioned that the filters used in the algorithm are selected based on the application and the type of noise present in the image [16]. In this work, we directly used a column of stripe noise as a filter and compared the performance of the VSNR algorithm with the other methods as shown above. It is possible that the authors used a trial-and-error approach to find suitable parameters for their specific images and noise types, but finding the optimal parameters for the VSNR algorithm can be time consuming. Compared to the other methods, VSNR is more time consuming and requires more parameters.
For most comparison methods, we used the optimal or correct parameter values that we determined experimentally.
Summary
Based on the discussions above of the comparative results, computational efficiency, and input parameters, we conclude that LRR is the most suitable method for removing gradient stripe noise while retaining the original features in c-AFM images, closely followed by Poly. LRR has the most competitive results, is relatively fast without needing a GPU, and requires only one parameter, which is fixed and works well in all comparisons.
Conclusion
To preserve the structure of the conduction paths in c-AFM images, we have compared 16 methods for removing the stripe noise that occurs in lateral measurements of ferroelastic oxide materials.
First, we verified the assumption that the stripe noise in lateral measurements is low rank and found that LRR is the best method to remove it, with Poly a close second. We not only compared the popular tools and the state-of-the-art methods commonly used by surface physicists, but also included a deep learning model designed specifically for stripe noise removal. Some of these methods were considered the best for c-AFM image destriping at the time when LRR was not yet used in this field. All the results in this paper show that LRR is far better than all the others at a noise level similar to the real noise and is also competitive at other noise levels.
Moreover, our designed noise model can simulate noisy images to enable quantitative image quality comparison. It is well matched to the real stripe noise, as confirmed by the visual and quantitative image quality comparisons.
Finally, the detailed methods overview and experimental results provide surface physicists with a clear comparison of 16 different destriping methods. This is important for c-AFM users processing images to remove stripe noise, and it may also be useful for other scanning probe modalities or any modality affected by stripe noise.
In future work we will apply LRR in data processing pipelines and explore its usefulness on other data.
Figure 1. Three conduction maps of the same sample collected by lateral measurement, where the fixed electrode is at the right side. The conduction paths in (b) and the right edge of (c) are less clear than in (a) because of the stripe noise and the high current values at the edge of the electrode side, respectively.
Figure 2a-c are three simulated images generated by Eqs. (18) and (19). In (e), the red curve and the blue curve fit very well; they are two random columns of pixel values from image (b) and the natural noisy image Fig. 1b, respectively. This means that the noise model of Eq. (18) and the noise level of (b) match the real noise well. In the "Visual comparison" section, we use Fig. 2b as the image for the denoising comparison because the curves and markers show that it is very close to the real noisy image. From (a)-(c) and their red frames, we can see that as the noise weighting increases, the conduction paths become less and less clear.
Figure 2. (a-c) Images obtained by adding noise of different weights to the same ground truth (the image in Fig. 1a); the corresponding SSIM values are indicated. The larger frames in the lower left corners of the images contain three-times-enlarged and contrast-enhanced versions of the contents of the small red frames at the same position in the center. The black pixel value curves in (d-f) represent a random column in the simulated images (a-c) (blue curves); in each plot the pixel value curve (in red) of the corresponding column in the natural noisy image of Fig. 1b is added.
Figure 3. Image results of removing natural stripe noise: (a) is the natural noisy image (identical to Fig. 1b); (b) is the noise result of LRR; (c-r) are the denoising results of the 16 methods; (o-r) are enlarged images denoised by the four best methods, where the conduction paths can be seen in detail. The abbreviations of the methods are listed in Table 1. The larger frames in the bottom left corners of the images contain double-sized and contrast-enhanced versions of the contents of the small red frames at the bottom right corners (the same applies in Fig. 4).
Figure 4. Results of removing simulated stripe noise: (a) is the ground truth; (b) is the simulated image after adding simulated noise to the ground truth (a); (c-r) are the denoising results of the 16 methods; (o-r) are enlarged images denoised by the four best methods, where the conduction paths can be seen in detail.
Figure 5. Box plots for SSIM and PSNR results for the 16 denoising methods on the simulated dataset (800 image pairs).
Figure 6. SSIM and PSNR curves between the denoised images and the ground truth for the 16 denoising methods at 100 noise levels.
Table 1 lists the methods used for comparison together with the abbreviations used in this work. Below, Median Difference, Facet Level Tilt, Polynomial, Trimmed Mean, Trimmed Mean Difference, Remove Scar, SNRWDNN and Destripe2 are abbreviated as MD, FLT, Poly, TM, TMD, RS, WNN (Wavelets Neural Network) and DS2, respectively. UTV1 and UTV2 denote vertical UTV and horizontal UTV, respectively.
(19) Simulated Noisy Image = Simulated Noise × Noise Weight + Ground Truth
Table 1. Methods used for comparison. | 8,556.8 | 2024-02-16T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Pregnancy recognition signaling mechanisms in ruminants and pigs
Maternal recognition of pregnancy refers to the requirement for the conceptus (embryo and its associated extra-embryonic membranes) to produce a hormone that acts on the uterus and/or corpus luteum (CL) to ensure maintenance of a functional CL for production of progesterone; the hormone required for pregnancy in most mammals. The pregnancy recognition signal in primates is chorionic gonadotrophin which acts directly on the CL via luteinizing hormone receptors to ensure maintenance of functional CL during pregnancy. In ruminants, interferon tau (IFNT) is the pregnancy recognition signal. IFNT is secreted during the peri-implantation period of pregnancy and acts on uterine epithelia to silence expression of estrogen receptor alpha and oxytocin receptor which abrogates the oxytocin-dependent release of luteolytic pulses of prostaglandin F2-alpha (PGF) by uterine epithelia; therefore, the CL continues to produce progesterone required for pregnancy. Pig conceptuses secrete interferon delta and interferon gamma during the peri-implantation period of pregnancy, but there is no evidence that they are involved in pregnancy recognition signaling. Rather, pig conceptuses secrete abundant amounts of estrogens between Days 11 to 15 of pregnancy required for maternal recognition of pregnancy. Estrogen, likely in concert with prolactin, prevents secretion of PGF into the uterine venous drainage (endocrine secretion), but maintains secretion of PGF into the uterine lumen (exocrine secretion) where it is metabolized to a form that is not luteolytic. Since PGF is sequestered within the uterine lumen and unavailable to induce luteolysis, functional CL are maintained for production of progesterone. In addition to effects of chorionic gonadotrophin, IFNT and estrogens to signal pregnancy recognition, these hormones act on uterine epithelia to enhance expression of genes critical for growth and development of the conceptus.
Type I and type II interferons
Interferons are cytokines with antiviral, antiproliferative and immunomodulatory biological effects critical to immune responses that protect the body against viral infections and malignant cells [1]. Type I interferons with a high degree of structural homology include interferons alpha (IFNA1-IFNA10, IFNA13, IFNA14, IFNA16, IFNA17 and IFNA21), interferon beta (IFNB), interferon delta (IFND), interferon epsilon (IFNE), interferon kappa (IFNK), interferon tau (IFNT) and interferon omega (IFNW1-IFNW3). IFNT is unique in being the pregnancy recognition signal in ruminants. The proteins of the IFNT family are structurally and functionally related to each other and to other type I interferons, and IFNT likely arose from duplication of an IFNW gene some 36 million years ago when IFNT came to be expressed in the trophectoderm under control of an Ets-2/AP-1 enhancer element [2]. IFNT is expressed only by mononuclear trophectoderm cells of ruminant conceptuses (embryo and its extra-embryonic membranes). IFND, another novel type I interferon, is expressed by conceptuses of pigs and horses during the peri-implantation period of pregnancy [3]. Interferon gamma (IFNG) is a type II interferon secreted by pig conceptuses during the peri-implantation period of pregnancy [3]. The functions of IFND and IFNG from pig conceptuses are not known [4][5][6].
Type I IFNs bind a common receptor composed of IFNAR1 and IFNAR2 to induce cell signaling via the Janus activated kinases (JAKs) and tyrosine kinase 2 (TYK2) pathway [1,7,8]. Type I IFNs also induce formation of signal transducer and activator of transcription homodimers (STAT1-STAT1), known as gamma-activation factor (GAF), that translocate to the nucleus and bind GAS (gamma-activation site) elements in the promoter region of interferon stimulated genes (ISG). One GAS-regulated gene is interferon regulatory factor 1 (IRF1), which binds and activates interferon stimulated response elements (ISREs) of many ISG to amplify effects of type I IFNs [9,10]. However, type I IFNs act predominantly via interferon stimulatory gene factor 3 gamma (ISGF3G) rather than GAF. The ISGF3G complex includes the STAT1:STAT2 heterodimer and IRF9. The predominant cell signaling pathways involve STAT2 and ISGF3G, which prolong effects of IFNT by increasing expression of STAT2 and IRF9 and thereby favor formation of ISGF3G rather than GAF [11,12]. However, type I IFNs also activate non-classical cell signaling pathways that include mitogen activated protein kinases (MAPKs), especially p38 and ERK1/2, as well as the phosphatidyl inositol kinase 3 kinase (PI3K)/V-AKT murine thymoma viral oncogene homolog 1 (AKT1) pathway and mechanistic target of rapamycin (MTOR) [1,13].
IFNG is critical for innate and adaptive immunity against viral and intracellular bacterial infections and tumor control, as well as activating macrophages. The importance of IFNG in immunology derives from its ability to inhibit viral replication directly and exert immunostimulatory and immunomodulatory effects. IFNG is produced by natural killer and natural killer T cells in innate immune responses and by CD4 Th1 and CD8 cytotoxic T lymphocyte effector T cells after development of antigen-specific immunity [14]. IFNG induces cellular responses via its interaction with a heterodimeric receptor consisting of IFNG receptor 1 (IFNGR1) and IFNGR2 which activates the JAK-STAT pathway. IFNG also binds to heparan sulfate at the cell surface which inhibits its biological activity [14].
Characteristics of interferon Tau
IFNT was discovered by culturing sheep conceptuses in the presence of radiolabeled amino acids and detecting radiolabeled de novo synthesized proteins that included an abundant low molecular weight protein first named protein X and then ovine trophoblast protein 1 [15][16][17][18] in my laboratory and trophoblastin by Martal et al. [19]. When the gene for oTP1/trophoblastin was cloned and sequenced it was found to be a type 1 interferon designated IFNT [20,21]. The antiviral, antiproliferative and immunosuppressive activities, and insight into its structural motif have been reported [22][23][24][25]. My laboratory used a synthetic gene for IFNT to produce recombinant IFNT with immunosuppressive, antiviral, antiproliferative and antiluteolytic properties identical to those for native IFNT [26,27]. IFNT has a molecular weight of 19 to 24 kDa depending on glycosylation and an isoelectric point between 5.3 and 5.8. It has 172 amino acids with disulfide bridges between cysteine residues at 1 and 99, as well as 29 and 139 [28]. Ovine IFNT is not glycosylated, whereas bovine IFNT is N-glycosylated and caprine IFNT is a mixture of nonglycosylated and N-glycosylated forms with the glycosylation site being at ASN 78. The amino terminal amino acid is proline. IFNT is very stable to pH as low as 2 to 3 [28].
Antiluteolytic effects of interferon Tau
The model for studies of the antiluteolytic effect of IFNT in my laboratory was based on McCracken's model of the "progesterone block" for regulation of the estrous cycle in ewes [29]. The hypothesis states that P4 blocks expression of estrogen receptor alpha (ESR1) and oxytocin receptor (OXTR) for about 10 days, after which time P4 downregulates expression of progesterone receptors (PGR) in uterine epithelia, which allows rapid increases in expression of ESR1 and OXTR genes (Figure 1). Then, pulsatile release of oxytocin (OXT) from the posterior pituitary gland and CL induces pulsatile secretion of prostaglandin F2α (PGF) from uterine epithelia on Days 15 and 16, which induces functional and structural regression of the CL, followed by estrus and another opportunity for the ewe to mate and become pregnant. Our understanding of pregnancy recognition in ruminants is based on studies (see [30][31][32]) indicating that: 1) IFNT silences transcription of the ESR1 gene and, therefore, estradiol-induced expression of OXTR in uterine luminal and superficial glandular epithelia (LE/sGE) to abrogate development of the endometrial luteolytic mechanism involving OXT-induced luteolytic pulses of PGF; 2) basal production of PGF and PGE2 is higher in pregnant than cyclic ewes due to continued expression of prostaglandin synthase 2 (PTGS2) in uterine LE/sGE; 3) IFNT silencing of ESR1 expression prevents estradiol from inducing PGR in endometrial epithelia; and 4) loss of PGR by uterine epithelia is required for expression of P4-induced and IFNT-stimulated genes that support development of the conceptus. Caprine IFNT secreted between Days 16 and 21 of gestation also abrogates the luteolytic mechanism to prevent pulsatile release of luteolytic PGF and extend lifespan of the CL in goats [33]. Bovine IFNT, secreted between Days 12 and 38 of pregnancy, also prevents secretion of luteolytic pulses of PGF by uterine epithelia and blocks effects of exogenous E2 and oxytocin to stimulate uterine release of PGF. Expression of ESR1 and OXTR mRNAs is either silenced or the receptors are not responsive to estradiol and OXT in endometria of both pregnant cows and cyclic cows treated with intrauterine injections of either ovine or bovine recombinant or native IFNT (see [34,35]).
Figure 1. Interferon tau (IFNT) is the pregnancy recognition hormone in sheep and other ruminants that acts to silence expression of estrogen receptor alpha (ESR1) and, in turn, oxytocin receptor (OXTR) to prevent development of the luteolytic mechanism that requires oxytocin (OXT) from the corpus luteum (CL) and posterior pituitary to induce luteolytic pulses of prostaglandin F2α (PGF). Thus, IFNT blocks the ability of the uterus to develop the luteolytic mechanism, but does not inhibit prostaglandin synthase 2 (PTGS2) or the basal production of PGF during pregnancy.
IFNT silences expression of ESR1 to ensure that estradiol does not increase expression of ESR1 in uterine epithelia during pregnancy. Thus, uterine LE/sGE do not express ESR1, PGR, IRF9 or STAT1 because IFNT induces expression of IRF2, a potent suppressor of transcription, in uterine LE/sGE that is in direct contact with conceptus trophectoderm [31]. Therefore, uterine LE/sGE in direct contact with the conceptus express unique non-classical interferon stimulated genes such as those for transport of nutrients into the uterine lumen to support growth and development of the conceptus. The uterine LE/sGE are affected by P4; however, the action of P4 is mediated by PGR-positive uterine stromal cells that secrete one or more progestamedins, particularly FGF10 in ewes, and effects of IFNT on uterine LE/sGE are mediated via a JAK/STAT-independent cell signaling pathway [3,30,31]. Therefore, IFNT abrogates the uterine luteolytic mechanism to prevent pulsatile release of luteolytic PGF while also increasing expression of many genes critical for uterine receptivity to implantation and conceptus development ( Figure 2). These genes include wingless-type MMTV integration site family member 7A (WNT7A) induced by IFNT, as well as LGALS15 (galectin 15), CTSL (cathepsin L), CST3 (cystatin C), SLC2A1 (solute carrier family 2 (facilitated glucose transporter), member 1), SLC7A2 (cationic amino acid transporter), HIF2A (hypoxia-inducible factor 2A) and gastrin releasing peptide (GRP) that are induced by P4 and further stimulated by IFNT and/or prostaglandins [30].
Prostaglandins and IFNT affect uterine gene expression and conceptus development
Dorniak et al. [36] reported that prostaglandins (PG) secreted by epithelial and stromal cells of the uterus affect expression of genes critical to elongation and implantation of the ovine conceptus. Although IFNT inhibits expression of ESR1 and OXTR in uterine LE/sGE of pregnant ewes, IFNT does not inhibit expression of prostaglandin synthase 2 (PTGS2), the rate-limiting enzyme in synthesis of PGs. IFNT stimulates PGE2 production by cells of the bovine uterus, and other type I IFNs stimulate phospholipase A2 (PLA2) and synthesis of PGE2 and PGF in various cell types. Intra-uterine infusions of meloxicam, a specific inhibitor of PTGS2, prevent elongation of ovine conceptuses. The elongating conceptuses of ewes and cows synthesize and secrete more PGs than the uterus; therefore, the abundance of PGs is greater in the uterine lumen of pregnant as compared to cyclic ewes and cows. Sheep conceptuses secrete mainly PGF, 6-keto-PGF1α (i.e., a stable metabolite of PGI2), and PGE2 during the peri-implantation period of pregnancy, and PG receptors are present in all cell types of the uterus and conceptus during pregnancy. Conceptus-derived PGs have autocrine, paracrine and possibly intracrine effects on cells of the uterus and conceptus. For example, the expression of PTGS2 by Day 7 bovine blastocysts predicts successful development of that blastocyst to term and delivery of a live calf. The infusion of PGE2, PGF, PGI2 or IFNT into the uterine lumen of cyclic ewes increases expression of GRP, insulin-like growth factor binding protein 1 (IGFBP1) and LGALS15, but only IFNT increases expression of cystatin 6 (CST6). Differential effects of PGs were also observed for CTSL and its inhibitor CST3. For glucose transporters, IFNT and all PGs increased SLC2A1, but only PGs increased SLC2A5 expression, whereas expression of SLC2A2 and SLC5A1 mRNAs was increased by IFNT, PGE2, and PGF. Infusions of all PGs and IFNT increased the amino acid transporter SLC1A5, but only IFNT increased SLC7A2. In the uterine lumen, only IFNT increased glucose concentrations, and only PGE2 and PGF increased the abundance of total amino acids. Thus, PGs and IFNT coordinately regulate endometrial functions important for growth and development of the conceptus during the peri-implantation period of pregnancy.
Figure 2. Silencing expression of progesterone receptor (PGR) in uterine epithelia is a prerequisite for implantation in mammals. Therefore, progesterone (P4) acts via PGR-positive uterine stromal cells to increase expression of progestamedins, e.g. fibroblast growth factor-7 (FGF7) and FGF10, as well as hepatocyte growth factor (HGF) in sheep uteri. The progestamedins, as well as interferon tau (IFNT), exert paracrine effects on uterine epithelia and conceptus trophectoderm, which express receptors for FGF7 and FGF10 (FGFR2IIIb) and HGF (MET), to stimulate cell signaling pathways including phosphatidyl inositol kinase 3 kinase (PI3K) and mitogen activated protein kinase (MAPK) and thereby stimulate gene expression and secretory responses by trophectoderm and uterine luminal (LE) and superficial glandular (sGE) epithelia that do not express signal transducers and activators of transcription (STAT1/STAT2). Thus, IFNT activates undefined alternate cell signaling pathways that may include PI3K and MAPK to influence gene expression by uterine LE and sGE.
Cortisol regulates endometrial function
The expression of 11-beta-hydroxysteroid dehydrogenase, type I (HSD11B1) is induced by P4 and stimulated by IFNT in ovine uterine LE/sGE and it is one of two isoforms that regulate intracellular levels of bioactive glucocorticoids. The ovine uterine endometrium and conceptus generate active cortisol from inactive cortisone and cortisol regulates expression of genes via the glucocorticoid receptor (GR). The few GR target genes identified in the uterus or placenta include those involved in lipid metabolism and triglyceride homeostasis. In addition to progesterone induction and IFNT stimulation of HSD11B1 expression in the ovine endometrium, PGs regulate activity of HSD11B1 in the bovine endometrium, and PGF stimulates HSD11B1 activity in human fetal membranes [36][37][38]. Elongating sheep conceptuses generate cortisol from cortisone via HSD11B1. GR are present in all cells of ovine uterus during the estrous cycle and pregnancy and in conceptus trophectoderm; therefore, cortisol may have paracrine and autocrine effects on the endometrium and conceptus trophectoderm. Intrauterine infusions of cortisol into cyclic ewes from Days 10 to 14 increased expression of several elongation-and implantation-related genes in ovine uterine epithelia. In humans, cortisol at the conceptus-maternal interface is proposed to stimulate secretion of chorionic gonadotropin by trophoblast, promote trophoblast growth and invasion, and stimulate placental transport of glucose, lactate, and AA. Interestingly, administration of glucocorticoids increased pregnancy rates in women undergoing assisted reproductive technologies and pregnancy outcomes in women with a history of recurrent miscarriage [39,40].
Interferon Tau drives a servomechanism for uterine functions
The establishment and maintenance of pregnancy requires integration of endocrine and paracrine signals from the ovary, conceptus, and uterus [41]. In ewes, implantation and placentation occur as a protracted process from Days 15-16 to Days 70 to 80 of pregnancy [42,43]. During this period, the uterus and placenta grow and remodel for support of rapid conceptus development and growth during the last one-half of pregnancy [44]. In addition to development of placentomes in the caruncular areas of the endometrium and changes in uterine vascularity, the uterine glands in the intercaruncular endometrium increase in length (4-fold) and width (10-fold) and degree of secondary and tertiary branching during pregnancy [42]. Hyperplasia of uterine GE occurs between Days 15 and 50 to 60 of gestation and then uterine glands undergo hypertrophy to increase surface area for maximal production of histotroph after Day 60 [45].
The ovine uterus is exposed sequentially to estrogen, progesterone, IFNT, placental lactogen (CSH1), and placental growth hormone (GH1) during pregnancy, as these hormones initiate and maintain endometrial gland morphogenesis and differentiated secretory functions of uterine GE [46]. Ovine CSH1 is produced by binucleate cells of conceptus trophectoderm from Days 15 or 16 of pregnancy, which is coordinate with onset of expression of genes for uterine milk proteins (UTMP) and secreted phosphoprotein 1 (SPP1, also known as osteopontin) by uterine GE [45,47]. UTMP are members of the serpin family of serine protease inhibitors [48] and SPP1 is an extra-cellular matrix protein [49]. UTMP and SPP1 are excellent markers for differentiation and overall secretory capacity of uterine GE during pregnancy in ewes [46]. CSH1 is detectable in maternal serum by Day 50 and peak concentrations occur between Days 120 and 130 of gestation [50]. A homodimer of the prolactin receptor (PRLR) and a heterodimer of PRLR and growth hormone receptor (GHR) transduce CSH1 cell signaling [51]. In the ovine uterus, CSH1 binding sites for PRLR are specific to GE [52]. Temporal changes in circulating levels of CSH1 are correlated with endometrial gland hyperplasia and hypertrophy and increased production of UTMP and SPP1 during pregnancy [45,49]. Placental GH1 is produced between Days 35 and 70 of gestation [53], when onset of hypertrophy of uterine GE occurs along with maximal increases in the abundance of UTMP and SPP1 proteins from uterine GE. Thus, two members of the lactogenic and somatogenic hormone family stimulate endometrial gland morphogenesis and differentiated function during pregnancy to facilitate conceptus growth and development in ewes.
The sequential exposure of the ovine uterus to estrogen, progesterone, IFNT, CSH1 and placental GH1 during pregnancy constitutes a "servomechanism" that activates and maintains remodeling, secretory function and growth of the uterus [46]. Chronic treatment of ovariectomized ewes with progesterone induces expression of UTMP and CSH1 by uterine GE and insures that PGR are not in uterine epithelia beyond Day 13 postestrus [41]. Down-regulation of PGR in uterine GE is required for progesterone to induce expression of UTMP and SPP1, but a combination of progesterone and estrogen increases expression of ESR1 and PGR in uterine GE which inhibits expression of both SPP1 and UTMP. Thus, progesterone must down-regulate expression of PGR in uterine GE in order for CSH1 and GH1 to stimulate expression of UTMP and SPP1 [46].
The intrauterine infusion of CSH1 or GH1 increases expression of UTMP and SPP1 by uterine GE of progesterone-treated ewes. However, the ewes must first receive intrauterine infusions of IFNT between Days 11 and 21, and then either CSH1 or GH1 from Days 16 to 29 after onset of estrus [46]. The increase in expression of UTMP by uterine GE is due in part to effects of CSH1 and GH1 to increase branching and surface area of uterine glands. Intrauterine infusion of CSH1 and GH1 into ewes treated with progesterone and IFNT increased hypertrophy of uterine glands, but this response did not occur if ewes were not treated with IFNT prior to receiving intra-uterine infusions of CSH1 or GH1. The ability of prolactin, CSH1 and GH1 to elicit similar effects on uterine glands is consistent with the fact that these hormones are members of a unique hormone family that shares genetic, structural, binding, receptor signal transduction and function on glandular tissues including the uterus and mammary gland [51]. These studies revealed that developmentally programmed events mediated by specific paracrine-acting hormones at the conceptus-uterine interface stimulate remodeling and differentiated function of uterine GE for production of histotroph essential for fetal-placental growth during gestation. Importantly, actions of IFNT, through an unknown mechanism, are required for actions of CSH1 and GH1 on uterine gland development and function.
Pregnancy recognition signaling in pigs
The blastocysts of pigs undergo a morphological transition from large spheres of 10 to 15 mm diameter to tubular (15 mm by 50 mm) and then filamentous (1 mm by 100-200 mm) forms between Days 10 and 12 of pregnancy and achieve a length of 800 to 1000 mm between Days 12 and 15 of pregnancy (see [31]). Rapid elongation of conceptus trophectoderm allows maximum surface area of contact between trophectoderm and uterine LE/sGE. During this period of rapid elongation, the trophectoderm secretes estrogens (catecholestrogens, estrone and estradiol) [54], and IFNG and IFND [4,5]. Estrogen is the pregnancy recognition signal from conceptus trophectoderm in pigs and it must be secreted between Days 11 and 15 of pregnancy. Estrogen does not inhibit secretion of PGF by uterine endometrium; rather, it activates a mechanism whereby secretion of PGF is into the uterine lumen (exocrine secretion) rather than into the uterine vasculature (endocrine secretion), as occurs in nonpregnant gilts and sows (Figure 3). Thus, in pregnant pigs, PGF is sequestered within the uterus and metabolized to prevent it from exerting luteolytic effects on the CL. The conceptus estrogens also modulate expression of genes responsible for endometrial remodeling for implantation between Days 13 and 25 of gestation [55]. Both SPP1 and FGF7 are induced by estrogen in uterine LE to affect trophectoderm and LE adhesion, signal transduction and cell migration during the peri-implantation period [56][57][58]. The trophectoderm also secretes interleukin 1 beta (IL1B) during this period and estrogen appears to modulate uterine responses to IL1B [59].
Pig conceptus trophectoderm secretes both IFNG and IFND during the peri-implantation period of pregnancy [4,5]. IFNG mRNA is abundant in trophectoderm between Days 13 and 20 of pregnancy, whereas IFND mRNA is detectable in Day 14 conceptuses only by RT-PCR analysis [54]. IFNG and IFND proteins co-localize to peri-nuclear membranes typically occupied by the endoplasmic reticulum and golgi apparatus, as well as cytoplasmic vesicles within clusters of trophectoderm cells along the uterine LE. This expression is characterized by de novo appearance of zona occludens one (ZO1), a marker of epithelial tight junctions on their basal aspect which suggests changes in endometrial polarity [5]. There is no evidence that either IFNG or IFND have antiluteolytic effects to prevent regression of CL or alter concentrations of progesterone in plasma. However, they do stimulate secretion of PGE2 by uterine cells which may enhance structural integrity of CL and their secretion of P4 [60].
A number of genes are expressed by uterine epithelial and stromal cells in pigs in response to intra-muscular injections of estradiol and/or intra-uterine injections of pig conceptus secretory proteins that include IFNG and IFND [61][62][63]. Implantation in pigs is non-invasive and pigs have a true epitheliochorial placenta. Genes induced in uterine LE by estrogen include SPP1, FGF7, aldo-keto reductase family 1 member B1 (AKR1B1), cluster of differentiation 24 (CD24), neuromedin beta (NMB), STAT1 and IRF2. Expression of IRF2 is induced in uterine LE/sGE by estrogen, the pregnancy recognition signal in pigs, whereas IFNT induces IRF2 in uterine LE/sGE in ewes. In both pigs and ewes the expression of IRF2 in uterine LE and sGE prevents IFNT in ewes and IFNG and IFND in pigs from inducing expression of ISG in uterine LE/sGE. The genes expressed by uterine LE of pigs are for stimulation of proliferation, migration and attachment of trophectoderm to uterine LE. Also, IFND and/or IFNG may affect blastocyst attachment to uterine LE in pigs by inducing labilization and remodeling of uterine LE to affect polarity and stimulate production of PGE2.
Since IRF2 is expressed in uterine LE of pigs, these cells do not express classical ISG, rather expression of classical ISG is limited to uterine GE and stromal cells [66]. The classical ISG induced by IFNG and/or IFND in uterine GE and stromal cells, as well as endothelial cells include STAT1, STAT2, IRF1, MX1, swine leukocyte antigens (SLA) 1-3 and 6-8, and beta 2 microglobulin. The pregnancy-specific roles of these uterine ISGs may be to: 1) affect decidual/stromal remodeling to protect the fetal semi-allograft from immune rejection; 2) limit conceptus invasion into the endometrium; and/or 3) stimulate development of uterine vasculature. Because IFNG can initiate development of the endometrial vasculature, it is hypothesized to facilitate establishment of hematotrophic support of developing conceptuses.
Secretion of both IFND and IFNG by conceptus trophectoderm is unique to pig conceptuses, but little is known of their interactions. Type I IFND and Type II IFNG may each induce expression of non-overlapping sets of genes; however, they may act synergistically to induce physiological responses. Cooperative induction and maintenance of expression of ISGs such as STAT1 for reinforcement of their effects on distinct cell-surface ligands while maintaining their individual specificities for inducing ISGs may occur. Although IFNG may enhance uterine receptivity to implantation in pigs, highly localized and abundant expression of IFNG, TNFA, IL1B and IL1R in the endometrium is reported to interfere with conceptus development between Days 15 and 23 of pregnancy [64].
Progestamedins, estramedins, corticoids and prostaglandins
Uterine receptivity to implantation is dependent on progesterone, which is permissive to actions of IFNs, chorionic gonadotrophin and lactogenic hormones such as prolactin and placental lactogen [2,[30][31][32]. The paradox is that cessation of expression of PGR and ESR1 by uterine epithelia is a prerequisite for uterine receptivity to implantation, expression of genes for secretory proteins by uterine epithelia, and selective transport of molecules into the uterine lumen that support conceptus development. Down-regulation of PGR is associated with loss of expression of proteins on uterine LE, such as MUC1, which would interfere with implantation. Further, silencing expression of PGR in uterine epithelia allows progesterone to act on PGR-positive uterine stromal cells to induce expression of progestamedins, i.e., FGF7 and FGF10, and hepatocyte growth factor (HGF), that exert more specific paracrine regulation of differentiated functions of uterine epithelia and conceptus trophectoderm, which express receptors for FGF7 (FGFR2IIIb) in pigs (see [30]). Many ISGs are P4-induced and IFN-stimulated; however, a fundamental unanswered question is whether actions of progestamedins and IFNs on uterine epithelia or other uterine cell types involve non-classical cell signaling pathways, independent of PGR and STAT1, such as MAPK and PI3K/AKT, to affect gene expression and uterine receptivity to implantation [1,30]. Interestingly, type I IFNs bind the same receptor but activate unique signaling pathways that are cell-specific to differentially affect gene expression in uterine LE/sGE versus GE and stromal cells [55,64,65].
Figure 3. The theory of pregnancy recognition in the pig is that secretion of prostaglandin F2α (PGF) is endocrine, that is, toward the uterine vascular drainage, to induce luteolysis in cyclic pigs. However, PGF is secreted in an exocrine direction, that is, toward the uterine lumen, in pregnant pigs, where it is metabolized and unavailable to exert luteolytic effects.
Estramedins in pigs
Pig conceptuses secrete estrogens between Days 10 and 15 for pregnancy recognition, but also to increase expression of growth factors including insulin-like growth factor 1 (IGFI) and FGF7 which, in turn, act on conceptus trophectoderm to stimulate proliferation and/or gene expression [32]. IGFI is expressed by uterine glands of cyclic and pregnant pigs and IGF1 receptors (IGF1R) are expressed by cells of the endometrium and conceptuses suggesting paracrine and autocrine actions of IGFI. FGF7, an established paracrine mediator of hormone-regulated epithelial growth and differentiation, is expressed uniquely by uterine LE in pigs between Days 12 and 20 of the estrous cycle and pregnancy. FGF7 binds to and activates FGFR 2IIIb expressed by uterine epithelia and conceptus trophectoderm. Estradiol increases FGF7 expression following effects of progesterone to down-regulate expression of PGR in uterine LE. FGF7 then increases cell proliferation, phosphorylated FGFR2IIIb, the MAPK cascade and expression of urokinase-type plasminogen activator, a marker for trophectoderm cell differentiation [56,59]. From about Day 20 of pregnancy, FGF7 expression shifts from uterine LE to uterine GE in pigs and likely continues to affect uterine epithelia and conceptus development [57,58]. In addition to the increase in secretion of estrogens between Days 11 and 15 of pregnancy for maternal recognition of pregnancy, increases in estrogens from the placenta between Days 20 and 30 increase expression of endometrial receptors for prolactin that may allow prolactin to stimulate secretions from uterine GE, placentation and uterine blood flow for increased transport of nutrients [66].
Corticoids
There are positive actions of glucocorticoids in early pregnancy. For example, in primates, glucocorticoids stimulate secretion of chorionic gonadotrophin, suppress uterine natural killer cells, and promote trophoblast growth and invasion, as well as exert negative effects that might compromise pregnancy, including inhibiting cytokine-prostaglandin signaling, restriction of trophoblast invasion, induction of apoptosis, and inhibition of conceptus development [67]. With respect to implantation of blastocysts, a dialogue initiated by cell surface signalling molecules on conceptus trophectoderm and uterine LE includes integrins and fibronectin that glucocorticoids suppress to enhance implantation. The effects of glucocorticoids on fibronectin expression are tissue-specific, with dexamethasone suppressing fibronectin in term human cytotrophoblasts and amnion, but acting in synergy with transforming growth factor beta to increase expression of fibronectin in matched samples of chorion and placental mesenchymal cells. Also occurring during the peri-implantation period of pregnancy are events mediated by pro-inflammatory cytokines such as IL1B, TNFA and prostaglandins that are modulated by anti-inflammatory effects of glucocorticoids, which likely modulate the cytokine-prostaglandin signaling required for implantation. Both IL1B and TNFA increase expression and activity of 11BHSD1 while suppressing expression of 11BHSD2 in term human chorionic trophoblasts. This has the net effect of increasing the conversion of cortisone to cortisol and creating a negative feedback loop at the uterine-conceptus interface between glucocorticoids and inflammatory cytokines.
In most tissues, one aspect of the anti-inflammatory effect of glucocorticoids is to inhibit the synthesis of prostaglandins and thromboxanes by decreasing the expression and/or activity of phospholipase A2 (PLA2) and, therefore, liberation of arachidonic acid as substrate for PTGS1 and PTGS2 [68]. However, in the placenta, glucocorticoids increase PLA2, PTGS2 and prostaglandin synthases [69] and decrease expression of 15-hydroxyprostaglandin dehydrogenase (HPGD), which converts prostaglandins to their inactive forms [70]. Within the placenta, prostaglandins increase expression and activity of 11BHSD1 [37] to increase cortisol production and decrease activity of 11BHSD2, which converts cortisol to inactive cortisone [71]. Glucocorticoids can stimulate growth of trophoblast and expression of pro-matrix metalloproteinase 2 (proMMP-2) [72], but other reports indicate that they inhibit expression of MMP9 and migration (invasiveness) of cytotrophoblast cells [73]. Further, glucocorticoids affect degradation of extracellular matrix during trophoblast invasion: urokinase-type plasminogen activator (uPA) leads to plasmin-associated degradation of extracellular matrix, and tissue-type plasminogen activator (tPA) to plasmin-dependent breakdown of fibrin for establishment of an efficient vascular exchange in the placenta [74]. The activities of both uPA and tPA are inhibited by plasminogen activator inhibitor (PAI1) secreted by trophoblast and decidual cells [75], and both cortisol and dexamethasone increase expression of PAI1 [76], which may result in poor placental exchange of nutrients and gases and lead to pre-eclampsia and intrauterine growth retardation [77].
In sheep, establishment of pregnancy requires elongation of the conceptus and production of IFNT for pregnancy recognition signaling, as discussed previously. Expression of HSD11B1 may be stimulated by P4, prostaglandins and/or cortisol; HSD11B1 mRNA is more abundant in uterine LE/sGE between Days 12 and 16 of pregnancy than of the estrous cycle, and expression of both HSD11B1 and PTGS2 by uterine LE/sGE is coordinate with conceptus elongation in ewes [78]. Physiological levels of cortisol are also potent stimulators of expression of both arginase and ornithine decarboxylase in cells, which increases synthesis of polyamines essential for proliferation and differentiation of cells of the conceptus [79]. Although HSD11B1 is abundant in the uterine epithelia, it is barely detectable in the conceptus, whereas HSD11B2 is barely detectable in uterine epithelia but abundant in the conceptus. Expression of HSD11B1 is induced by P4 and further stimulated by IFNT in uterine LE/sGE. The corticoid receptor, NR3C1, is present in all ovine uterine cell types. Therefore, HSD11B1 in uterine LE/sGE, regulated by P4, IFNT and prostaglandins, generates cortisol that acts via NR3C1 to regulate ovine endometrial functions, such as production of prostaglandins, during pregnancy. Prostaglandins represent another activator of gene expression via their respective receptors, such as the PGE receptors (PTGER1-PTGER3), to activate MAPK cell signaling pathways. In bovine uteri, IFNT stimulates expression of PTGS2 and PGE synthase to increase the relative abundance of PGE, and also increases expression of prostaglandin E receptor 2 (EP2 subtype) in endometrial epithelia [80]; PGE may stimulate gene expression by activation of p38 MAPK [81]. Therefore, in uterine epithelia, there is the potential for IFNT, progestamedins and prostaglandins to act additively or synergistically to stimulate expression of genes by uterine epithelia that support growth and development of the conceptus.
Summary
The focus of this review is pregnancy recognition signaling molecules: IFNT in ruminants and estrogens in pigs. IFNT abrogates development of the luteolytic mechanism by silencing expression of ESR1 and OXTR to prevent pulsatile release of luteolytic PGF by uterine epithelia. Estrogens from pig conceptuses, on the other hand, induce mechanisms for exocrine secretion of PGF into the uterine lumen, where it is metabolized and, therefore, unavailable to cause luteolysis. Both IFNT and estrogens, in concert with effects of progesterone, act particularly on uterine LE and sGE to increase expression of genes that include growth factors and nutrient transporters critical to growth and development of the conceptus. The PGs and corticoids within the uterine lumen also play important roles in regulation of gene expression favorable to a uterine environment supportive of conceptus development. The complex interactions between hormones from the ovaries, conceptus trophectoderm/placenta and maternal pituitary are discussed with respect to effects on growth and development of uterine glands that secrete nutrients critical to conceptus development. Collectively, the outcome of actions of the many hormones, growth factors, cytokines, lymphokines, extra-cellular matrix components and nutrients is highly conducive to a successful outcome of pregnancy, including establishment of mechanisms whereby the conceptus semi-allograft is protected from the maternal immune system.
Competing interests
The author has nothing to declare regarding conflicts of interest or competing financial interests. | 7,721 | 2013-06-26T00:00:00.000 | [
"Biology",
"Medicine"
] |
Robustness of the Poverty Measures: Evidence from Farm Households in Akwa Ibom State, Nigeria
The use of a plethora of poverty indexes is sometimes fraught with difficulties. The purpose of this research was to quantitatively assess poverty and to examine the robustness of the poverty metrics. A multistage sampling technique was implemented to select representative farm households. A total of 150 rural households were surveyed using questionnaires. Stochastic dominance analysis and the weighted poverty measures of Foster, Greer and Thorbecke were used in this work to examine the weighted poverty measures' robustness and sensitivity to changes in the poverty line. According to the findings, as household heads become older and their families become larger, the incidence, severity, and depth of poverty increase. An asymptotic sampling distribution was used in the stochastic dominance analysis to infer whether poverty was greater across a variety of hypothetical poverty lines. First-order stochastic dominance was found, with the cumulative distribution function (CDF) of households headed by people over 60 years old lying entirely above the other distribution functions (CDFs). The CDF of single households was lower than the CDF of married households, according to the findings. At any poverty line, the CDF of families with more than 10 household members stochastically dominated those with fewer family members. Many households would be lifted out of poverty if poverty-reduction initiatives were targeted at those over 60 and those with big families.
Introduction
Researchers and development stakeholders should be alarmed about the dire state of poverty in Nigeria. Despite its abundance of natural resources, the nation remains one of the world's poorest (Etim and Edet, 2014; Etim and Patrick, 2010). There have been many studies linking agriculture to poverty (Canagarajah et al., 1995; FOS, 1999; Khan, 2001; Okunmadewa, 2001; Etim, 2007; Etim et al., 2017; Etim et al., 2019; Etim et al., 2021), and the bulk of Nigeria's poor reside in rural regions and depend on farming for their existence (Etim and Udoh, 2013; Etim et al., 2017; Etim and Ndaeyo, 2020). Poverty may be measured in a variety of ways, each with advantages and disadvantages, including by looking at family income, quality of living, spending, and access to essential services. Poverty has become a worldwide problem in recent years, and the need for internationally comparable poverty indicators in the framework of the 2030 Agenda for Sustainable Development is essential (United Nations Economic Commission for Europe, 2017).
Scientists are working around the clock to develop accurate, consistent, and comparable indicators of poverty that can be used throughout the globe (Kamanou, 2005). Using multiple poverty indices to examine welfare changes frequently raises a number of issues. Madden (2000) and Garcia-Gomez et al. (2019) argued that the difficulties associated with the plurality of poverty indicators may be overcome using stochastic dominance approaches. While other approaches to multidimensional distributions analyze only the marginal distributions, the stochastic dominance approach is appropriate since it takes into account the nexus between the many aspects of poverty. The stochastic dominance approach is used in this research to empirically quantify changes in farm household welfare. According to the research findings, the weighted poverty measures are more resilient to shifting poverty thresholds than unweighted ones.
Conceptual Framework
In order to rank welfare distributions, the mechanics of ranking must be understood in conjunction with the idea of stochastic dominance. Rather than working with a discrete measure such as the fraction of people whose consumption, say, is less than x, we treat consumers as a continuum and assume x to be continuously distributed in the population with cumulative distribution function (CDF) F(x). It is then possible to compare two welfare distributions with CDFs F1(x) and F2(x) in order to determine whether one is superior to the other in some sense yet to be defined.
The first definition is first-order stochastic dominance. A distribution with CDF F1(x) stochastically dominates another distribution with CDF F2(x) at first order if and only if, for all monotone non-decreasing functions f(x),

∫ f(x) dF1(x) ≥ ∫ f(x) dF2(x), ---- (1)
where the integrals are taken over the whole range of x.
We may better understand this concept if we consider f(x) to be a valuation function, with monotonicity denoting that more is better in general (or at least no worse). Equation (1) says that the average value of f in distribution 1 is at least as large as the average value of f in distribution 2, and this is true regardless of how we value x, as long as more is better (Deaton, 1997). As a result, distribution 1 is better than distribution 2 and stochastically dominates it, in the sense that it contains more of x. An alternative characterization of first order stochastic dominance yields a significant result, discussed below. Condition (1) is equivalent to the condition that, for every x,
F2(x) ≥ F1(x), ---- (2)

that is, distribution 2 places more mass in the lower portion of the range than distribution 1. The second definition is second order stochastic dominance. This is a weaker notion than first order dominance: first order dominance always implies second order dominance, but not vice versa. The distribution F1(x) stochastically dominates the distribution F2(x) at second order if inequality (1) holds for all monotone non-decreasing and concave functions f(x). Because monotone non-decreasing concave functions are a subset of the class of monotone non-decreasing functions (Deaton, 1997), first order dominance implies second order stochastic dominance. In first order stochastic dominance the function f(x) has a positive first derivative, whereas in second order stochastic dominance it has a positive first derivative and a negative second derivative.
It is possible to represent second order stochastic dominance in a variety of ways, just as for first order stochastic dominance. F1(x) dominates F2(x) at second order if, for all values of x, the area under F1 up to x is no greater than the area under F2 up to x, that is, ∫₀ˣ F1(t) dt ≤ ∫₀ˣ F2(t) dt (Deaton, 1997).
According to the preceding discussion, second order stochastic dominance is not tested by comparing the CDFs themselves, but rather by comparing the areas under the CDFs. It should be noted that when considering poverty, a restricted form of stochastic dominance is used, in which only poverty lines in a range z0 ≤ x ≤ z1 are considered. According to Deaton (1997), higher orders of stochastic dominance may be defined by continuing the sequence established above. Thus, for third order stochastic dominance, the function f(x) has a positive first derivative, a negative second derivative, and a positive third derivative.
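As an illustration of how these conditions can be checked on survey data, the sketch below compares the empirical CDFs of two hypothetical expenditure samples on a common grid of welfare values and tests the first- and second-order conditions numerically. The sample data, grid range and group labels are invented for the example and are not taken from the study.

```python
import numpy as np

def empirical_cdf(sample, grid):
    """Fraction of observations at or below each value in 'grid'."""
    sample = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(sample, grid, side="right") / sample.size

def dominance(sample1, sample2, grid):
    """Does distribution 1 dominate distribution 2 (i.e. show less poverty)
    at first and second order on the given grid of welfare values?"""
    f1 = empirical_cdf(sample1, grid)
    f2 = empirical_cdf(sample2, grid)
    first_order = bool(np.all(f1 <= f2))      # F1(x) <= F2(x) for all grid points
    dx = grid[1] - grid[0]                    # uniform grid assumed
    d1 = np.cumsum(f1) * dx                   # area under F1 (poverty deficit curve)
    d2 = np.cumsum(f2) * dx                   # area under F2
    second_order = bool(np.all(d1 <= d2))
    return first_order, second_order

# hypothetical per-capita expenditure samples for two sub-groups
rng = np.random.default_rng(0)
younger_heads = rng.lognormal(mean=10.0, sigma=0.5, size=500)
older_heads = rng.lognormal(mean=9.7, sigma=0.5, size=500)

z_grid = np.linspace(5_000, 60_000, 200)      # range of hypothetical poverty lines
print(dominance(younger_heads, older_heads, z_grid))
```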
Application of Stochastic Dominance to Poverty
In addition to being helpful in examining the robustness of poverty comparisons to slight changes in the placement of the poverty line, dominance tests also enable us to broaden the scope of our investigation to a larger variety of poverty lines (Assadzadeh and Paul, 2003). Employing stochastic dominance analysis helps avoid arbitrarily selecting a single poverty line. Starting with the incidence of poverty and considering what happens when the poverty line shifts, we have P0, the proportion of the population living below the poverty line, which can be written as a function of both the poverty line z and the distribution F:

P0(z; F) = F(z) ------------- 4
Writing the headcount in this way stresses that poverty incidence is a function not only of the poverty line but also of the distribution F. If we have two distributions F1 and F2 relating to different sub-groups, and we want to know which distribution shows more poverty and to what extent the comparison depends on the choice of poverty line, then (4) tells us that if, for all poverty lines,

F1(z) > F2(z)

then the headcount for distribution 1 will always be greater than the headcount for distribution 2.
In order to assess the robustness of poverty incidence, we need only draw the cumulative distribution functions (CDFs) of the two distributions. As long as one CDF lies below the other within the range of relevant poverty lines, the choice of poverty line within that range will make no difference to the ranking.

This means that the poverty ranking of two distributions according to the headcount ratio is robust to all possible choices of poverty line if and only if one of the two distributions stochastically dominates the other at first order. In other words, sub-groups with the lower CDF will have lower poverty incidence, depth, and severity than sub-groups with the higher CDF, regardless of the poverty line selected within the range.

When the CDFs of the sub-groups intersect, no ranking can be determined at first order, necessitating the use of the second order stochastic dominance test. This test is carried out using the poverty deficit curve (PDC), defined as the area under the CDF up to a given poverty line.
According to Deaton (1997), the usefulness of the poverty deficit in assessing second order stochastic dominance may be shown by integrating the CDF up to the poverty line, as in condition (3), to yield

D(z; F) = ∫_0^z F(x) dx = z P1(z; F) = P0(z; F)(z − μP) ------------- 5

where μP is the mean welfare among the poor and P1(z; F) is the poverty gap measure. Equation (5) establishes that we can use the poverty deficit curve (PDC) to examine the robustness of the poverty gap measure to different choices of the poverty line, in the same way that the CDF is used to examine the robustness of the poverty incidence measure.
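For completeness, the identity in equation (5) follows from integration by parts, as sketched below in the notation used above (a standard derivation following Deaton, 1997).

```latex
\begin{align*}
z\,P_1(z;F) &= \int_0^z (z-x)\,dF(x) && \text{definition of the poverty gap}\\
            &= \big[(z-x)F(x)\big]_0^z + \int_0^z F(x)\,dx && \text{integration by parts}\\
            &= \int_0^z F(x)\,dx = D(z;F) && \text{since } F(0)=0,\\
z\,P_1(z;F) &= z\,P_0(z;F)-\int_0^z x\,dF(x) = P_0(z;F)\big(z-\mu^{P}\big) && \text{with } \mu^{P}=\tfrac{1}{P_0}\int_0^z x\,dF(x).
\end{align*}
```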
Overall, if the CDF of one sub-group lies everywhere above or below that of another over a range of poverty lines, then the same will necessarily be true of the poverty deficit curves (Deaton, 1997; Omonona, 2001). Informally, for two sub-group distributions F1 and F2:

If the CDFs of the sub-groups do not cross, then the ranking of poverty incidence, depth, and severity will be unaffected by the choice of the poverty line.

Alternatively, if the CDFs cross but the PDCs do not, the ranking of poverty incidence is sensitive to the choice of poverty line, whereas the rankings of poverty depth and severity are not.
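A minimal computational sketch of this robustness check is given below (illustrative only; the function names and the trapezoidal approximation of the PDC are assumptions): the CDFs and PDCs of two sub-groups are compared over a band of candidate poverty lines.

```python
import numpy as np

def cdf_at(sample, z):
    """Headcount ratio P0(z): share of the sample at or below poverty line z."""
    return float(np.mean(np.asarray(sample) <= z))

def pdc_at(sample, z, n_grid=200):
    """Poverty deficit curve D(z): area under the empirical CDF up to z."""
    grid = np.linspace(0.0, z, n_grid)
    vals = np.array([cdf_at(sample, x) for x in grid])
    dx = grid[1] - grid[0]
    return float(np.sum((vals[:-1] + vals[1:]) / 2.0) * dx)  # trapezoid rule

def robustness(sample1, sample2, z_low, z_high, n_lines=50):
    """Compare two sub-groups over a band of poverty lines [z_low, z_high]."""
    lines = np.linspace(z_low, z_high, n_lines)
    cdf_ok = all(cdf_at(sample1, z) <= cdf_at(sample2, z) for z in lines)
    pdc_ok = all(pdc_at(sample1, z) <= pdc_at(sample2, z) for z in lines)
    if cdf_ok:
        return "incidence, depth and severity rankings robust (first order dominance)"
    if pdc_ok:
        return "depth and severity rankings robust only (second order dominance)"
    return "no unambiguous ranking over this band of poverty lines"
```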
Study Area
The research was carried out in Akwa Ibom State, Nigeria. The state lies between latitudes 4°33′ and 5°53′ North and longitudes 7°25′ and 8°25′ East and has a total land area of 7,246 square kilometers. Its capital is Uyo. The state is bordered to the north by Abia State, to the east by Cross River State, to the west by Rivers State, and to the south by the Atlantic Ocean (National Population Commission, 2006). Its estimated population is around 3.9 million. For administrative convenience, the state is organized into 31 local government areas, and there are six Agricultural Development Project (ADP) zones: Oron, Abak, Ikot Ekpene, Etinan, Eket, and Uyo. It is situated in the rain forest zone and has two distinct seasons, namely the rainy season and the dry season. The annual precipitation is between 2,000 and 3,000 millimeters. The majority of the people who live in rural areas of the study region are farmers, and the crops most typically grown include cassava, oil palm, yam, cocoyam, fluted pumpkin, okra, waterleaf, and bitter-leaf, among others. Additionally, some micro-livestock is often produced on a small scale in the backyards of most homesteads.
Sampling and Data Collection Technique
The representative farm households for this research were selected using a multistage sampling procedure. In the first stage, three Agricultural Development Project (ADP) zones in Akwa Ibom State were selected at random from the six zones. In the second stage, 5 villages were randomly selected from each ADP zone, giving a total of 15 villages. In the final stage, 10 farm households were randomly selected from each village, giving a total of 150 farm households. The information in this research was derived from primary sources. The core cross-sectional data came from a farm-level survey covering the 150 rural farm households in the study region. The information was gathered from the heads of the farm households using a well-structured questionnaire and covered household income and expenditure, socioeconomic characteristics of households and their heads, farm-specific factors, and so on.
Analytical Technique
The weighted poverty index developed by Foster, Greer, and Thorbecke (FGT) (1984) was utilized for the quantitative measurement of poverty. The rationale for this selection is that the index is decomposable across subgroups.
The FGT measure for the ith subgroup (P∝i) is given as:

P∝i = (1/ni) Σj=1..qi [(z − Yj)/z]^∝ ------------- 8

where P∝i is the weighted poverty index for subgroup i; ni is the total number of households in the subgroup; qi is the number of those households that are poor; Yj is the per adult equivalent expenditure of household j in subgroup i; z is the poverty line; and ∝ is the poverty aversion parameter, which indicates the degree of concern for the depth of poverty (IFAD, 1993).
The descriptive statistics employed in this research include graphical analysis and frequency distributions, among other tools. The stochastic dominance analysis was carried out using graphical representations, in which the cumulative distribution function is plotted over a range of poverty lines to determine whether or not the P∝ measures are sensitive to the choice of poverty line. A frequency distribution was used to illustrate the frequency of occurrence of particular values in the sample.
As established by the World Bank (1996), the poverty line utilized in this research is defined as two-thirds of the mean per adult equivalent household expenditure. According to Simler et al. (2004), since children's food needs are lower than those of adults (and the converse may be true for other commodities and services, such as education), consumption is often expressed in adult equivalent units (AEU). Accordingly, adult equivalents for this study were constructed following Nathan and Lawrence (2005), where AE denotes the number of adult equivalents, N1 the number of adults aged 15 and above, and N2 the number of children aged less than 15. When ∝ is equal to zero, equation 8 indicates no concern for the depth of poverty and reduces to the headcount ratio, the incidence of poverty (the proportion of farming households that is poor):

P0i = qi/ni ------------- 9

When ∝ is equal to 1, equation 8 reflects uniform concern and becomes

P1i = (1/ni) Σj=1..qi (z − Yj)/z ------------- 10

Equation 10 measures the depth of poverty (the proportionate expenditure shortfall from the poverty line). It is often referred to as the poverty gap, the average difference between the expenditure of the poor and the poverty line (Hall and Patrinos, 2005).
It is possible to distinguish between the poor and the poorest when ∝ is equal to 2 or more (Foster et al., 1984; Assadzadeh and Paul, 2003). With ∝ = 2, equation 8 becomes

P2i = (1/ni) Σj=1..qi [(z − Yj)/z]² ------------- 11

Equation 11 yields a distribution-sensitive FGT index, referred to as the severity of poverty. It provides information on the degree of inequality of expenditure among the poor.
The FGT measure for the whole group or population was calculated as

P∝ = (1/n) Σi=1..m ni P∝i ------------- 12

where P∝ is the weighted poverty index for the whole group, m is the number of subgroups, and n and ni are the total numbers of households in the whole group and in the ith sub-group, respectively.
The contribution (Ci) of each sub-group's weighted poverty measure to the whole group's weighted poverty measure was calculated as:
Ci = niP∝i/nP∝ --------------14
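A minimal sketch of equations 8-12 and 14 in code is given below; the expenditure figures, group labels, and poverty line are hypothetical placeholders rather than survey data.

```python
import numpy as np

def fgt(expenditure, z, alpha):
    """FGT poverty index P_alpha for one sub-group (equation 8)."""
    y = np.asarray(expenditure, dtype=float)
    poor = y < z
    if alpha == 0:
        return float(np.mean(poor))               # headcount ratio (equation 9)
    gap = np.where(poor, (z - y) / z, 0.0)        # zero shortfall for non-poor households
    return float(np.mean(gap ** alpha))           # alpha=1: gap (eq. 10), alpha=2: severity (eq. 11)

def decompose(subgroups, z, alpha):
    """Whole-group index (equation 12) and contributions Ci = ni*P_i / (n*P) (equation 14)."""
    n = sum(len(y) for y in subgroups.values())
    p_all = sum(len(y) * fgt(y, z, alpha) for y in subgroups.values()) / n
    contrib = {k: len(y) * fgt(y, z, alpha) / (n * p_all) for k, y in subgroups.items()}
    return p_all, contrib

# hypothetical per adult equivalent expenditures by household-size sub-group
groups = {"1-5": np.array([120, 95, 210, 80, 60]),
          "6-10": np.array([70, 55, 40, 90, 65]),
          "11-15": np.array([35, 50, 45, 30, 35])}
z = 75.0  # hypothetical poverty line (two-thirds of mean expenditure in the actual study)
for a in (0, 1, 2):
    print(a, decompose(groups, z, a))
```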
Since the FGT measures were estimated on the basis of sample observation, we tested whether the observed differences in their values are statistically significant or not.
The test of the significance of P∝i (the subgroup poverty measure) relative to P∝ (the whole group poverty measure) follows Kakwani (1993), using the standard error SE(P∝i); the resulting statistic is treated as approximately standard normal for large samples (ni ≥ 30) and, following Spiegel (1975), as a t statistic with ni − 1 degrees of freedom for small samples (ni < 30).
To determine whether a stochastic dominance relationship exists between two distributions, each distribution is first described by its cumulative distribution function (CDF), and the CDFs are then compared over the relevant range of poverty lines (Davidson, 2006).
Poverty profile of farm households
The poverty of agricultural families in Akwa Ibom State was broken down into sub-groups based on age, marital status, and household size in order to better understand how poverty differs across sub-groups.
Three age groups were used to characterize poverty among farm households: those whose heads were aged 21 to 40 years, 41 to 60 years, and 61 to 80 years. The incidence of poverty among farm households rises with the age of the household head. This confirms the results of Dercon and Krishnan (1998), who found that poverty incidence, depth, and severity are lower in households headed by people under the age of 45. The findings are likewise consistent with those of the Federal Office of Statistics (FOS) (1999), which found that older farm household heads had a higher prevalence of poverty than younger farm household heads. According to Table 2, 31 percent of households whose heads are aged between 21 and 40 years are poor, 52 percent of households whose heads are aged between 41 and 60 years are poor, and 69 percent of households whose heads are aged between 61 and 80 years are poor. All age sub-groups differ significantly (p < 0.10) from the overall poverty incidence of the group, with the exception of households whose heads are aged 41 to 60 years; the poverty incidence in the sub-groups aged 21 to 40 and 61 to 80 is statistically significant at p < 0.10. The findings are consistent with earlier empirical results obtained by Omonona (2001).
As indicated in Table 3, all possible pairings of age groups exhibit significantly different poverty incidence rates (p < 0.01). In other words, the age of the household head affects the incidence of poverty.
According to Table 3, farm families with heads aged 21-40 years, 41-60 years, and 61-80 years make up 15, 73, and 12 percent of the whole group's poverty incidence, respectively.
The age of the household head is positively related to the level of poverty in the household. This is because, as age increases, the capacity to carry out demanding tasks diminishes. Because children and younger household members are receiving education and training, the number of household members available to work on the farm diminishes as the age of the household head increases. As a result, the farm size and cultivable land area are reduced, and farm revenue falls (Etim, 2007). These findings do not contradict Dercon and Krishnan (1998) or FOS (1999); they confirm that the level of poverty increases with the age of the household head.
One-fifth of married farm households live in poverty, whereas unmarried household heads accounted for 15% of the poor. The t-values indicate that the poverty incidences of the two sub-groups are statistically significant, and a test statistic of -9.75 indicates a statistically significant difference between them (p < 0.01). Consequently, the marital status of the household head affects the incidence of poverty. The contribution of married households to the overall poverty of the group is 96 percent, while the contribution of unmarried households is just 4 percent. As seen in Table 4, the depth and severity of poverty follow a similar pattern to poverty incidence.
Farm households headed by married people tend to be poorer than farm households headed by unmarried people. This may be because married farm households have larger household sizes, which increase dependency and, as a result, lower welfare status. Farm households were divided into three subgroups: those with 1-5 members, those with 6-10 members, and those with 11-15 members. Households with more than 10 members are more likely to be poor than households with 1-5 members.
The results in Table 5 reveal that the incidence of poverty is statistically significant (p < 0.10) in all of the sub-groups studied, indicating that the prevalence of poverty in the three subgroups differs from that of the whole group. Furthermore, according to Table 6, there are statistically significant differences in the incidence of poverty across all possible pairs of household-size sub-groups (p < 0.10), meaning that household size affects the prevalence of poverty. The contribution of the 1-5 members subgroup to the overall poverty incidence of the group is 4 percent, while the contributions of the 6-10 and 11-15 members subgroups are 41 and 55 percent, respectively.
The results reveal that as household size grows, both the level of poverty and the sub-group's contribution to the overall group's poverty increase, which is consistent with previous research. A possible explanation is that larger households have more dependents who contribute little to the household's overall income. The findings are consistent with World Bank (1991), Lanjouw and Ravallion (1994), Schubert (1994), World Bank (1996), Dercon and Krishnan (1998), and Etim and Udofia (2013).
Robustness of the weighted poverty measures
The following section describes the sensitivity of the weighted poverty measures (P0, P1, and P2) to changes in the poverty line, whose placement involves a degree of subjectivity and arbitrariness. The results of the stochastic dominance analysis are used to assess the robustness of the findings on poverty differences between sub-groups. Specifically, given a range of poverty lines, say Z− to Z+, first order dominance holds when the CDF of one sub-group lies everywhere below that of another over this range; poverty incidence in that sub-group is then unambiguously lower. Thus, the headcount ratio is robust to all feasible choices of poverty line within the stated range only when one CDF stochastically dominates the other; otherwise the poverty measure is considered sensitive to the choice of poverty line. Furthermore, once a distribution exhibits first order stochastic dominance, second order stochastic dominance follows as a natural consequence (Omonona, 2001).
In the course of the stochastic dominance analysis, four additional poverty lines were used, each a multiple of the original poverty line: 0.4, 0.6, 0.8, and 1.2 times the original line, together with the original line itself (a multiple of 1.0). The findings are summarized as follows. As seen in Figure 1, the CDF of households with heads aged over 60 years lies well above the other CDFs, and the CDF of households with heads aged 21-40 years lies below the CDF of households with heads aged 41-60 years. The results reveal first order stochastic dominance, and as a consequence the incidence, depth, and severity of poverty are greatest for the sub-group older than 60 years and lowest for the sub-group aged 21-40 years over the whole range of poverty lines.
As seen in Figure 2, the CDF of households headed by unmarried people lies well below that of households headed by married people. Accordingly, households headed by married individuals were poorer than households headed by unmarried people over the whole range of poverty lines, indicating first order stochastic dominance. Figure 3 illustrates that the CDF of households with more than 10 members lies above the CDF of households with 6-10 members, which in turn lies above the CDF of households with fewer than 6 members.
As a result, for any given poverty line, large households have the greatest incidence, depth, and severity of poverty. This conclusion is consistent with Etim and Udoh (2015), who found that the distributions of households with fewer members stochastically dominated those of households with more members.
Conclusion
The quantitative assessment of poverty and the evaluation of the robustness of the poverty measures to changes in the poverty line were carried out using the FGT weighted poverty measure and stochastic dominance analysis, respectively. The findings showed that the incidence of poverty was greater in households whose heads were over the age of 60 and lower in households whose heads were aged 21 to 40. The incidence of poverty was greater in married households than in unmarried ones. In addition, the incidence of poverty was greater in households with more members. Policies and initiatives for poverty reduction should therefore be targeted at older household heads as well as households with many members. | 6,158.6 | 2022-01-26T00:00:00.000 | [
"Economics"
] |
Reduced-scale experimental and numerical investigation on the energy and smoke control performance of natural ventilation systems in a high-rise atrium
Natural ventilation (NV) is an effective energy-saving strategy to remove the excessive heat in high-rise atria. The traditional NV system in high-rise atria has inlet openings at the bottom and outlet openings at the top. However, this traditional system may bring fire safety concerns due to the rapid spread of smoke during an atrium fire. To remove the fire safety concern, a new NV system was proposed in this study. This new system applies a segmentation slab to divide the high-rise atrium into upper and lower parts, which can limit the smoke movement. A ventilation shaft is installed to maintain the NV rate and extract smoke. To investigate the energy and smoke control performance of the new and traditional NV systems, a 1:20 small-scale experimental model and CFD numerical model were built. The results indicate that the new NV system with the shaft and segmentation can remove more heat than the traditional NV system. Furthermore, the new NV system can simplify the mechanical smoke exhaust system and improve the smoke control performance, e.g., it requires a lower volumetric flow rate and maintains a thinner smoke layer.
Introduction
Reducing the cooling energy consumption in high-rise buildings is critical. On the one hand, the huge space cooling demand is becoming a challenge for electricity systems: worldwide space cooling energy consumption tripled from 1990 to 2016 [1]. On the other hand, the number of high-rise buildings is increasing due to population growth and limited land in cities; it was reported that the number of cities with over 100 high-rise buildings reached 142 in 2019 [2]. Among many technologies (e.g., advanced glazing and shading), natural ventilation (NV) is an effective energy-saving strategy in high-rise buildings, i.e., ventilative cooling. Previous studies showed that high-rise buildings can save up to 25%~86% of their electricity consumption if NV is applied [3,4]. The driving forces of NV include wind and buoyancy. The buoyancy force is generated by the pressure difference when the indoor temperature is greater than the outdoor temperature. Compared with wind-driven NV, buoyancy-driven NV is particularly well suited to high-rise buildings because of their large vertical space.
Atria provide impressive aesthetic space and increase socialization and interaction, making them an appropriate architectural component for employing buoyancy-driven NV [5,6]. The traditional NV system in high-rise atria has inlet openings at the bottom and outlet openings at the top. The air enters the atrium at the bottom and then exhausts to the outside at the top of the atrium. However, when a fire occurs in a high-rise atrium, the smoke movement is difficult to control. The high-rise atrium becomes a smoke path: toxic smoke moves vertically and reaches the upper part of the atrium, damaging the building and threatening the occupants [7,8]. Therefore, the atrium height is usually limited by segmentation slabs, which in turn reduces the potential of NV. It was reported that several high-rise atria were divided into smaller sub-atria due to fire safety. For example, the atrium in the Commerzbank tower (56 stories) in Germany was built as 12-story-high sub-atria [9]. The atrium in the Concordia EV building was divided into five sub-atria (three floors each) [10].
To mitigate the conflict between the fire safety concern and energy performance in the atrium NV system, a new NV system is proposed. This new NV system consists of two components: segmentation slab and ventilation shaft. The segmentation slab is to divide the atrium into different parts, which limits the height of the atrium. The ventilation shaft is to maintain the NV rate and extract smoke. This paper aims to evaluate the ventilation and smoke control performance of this new NV system. A 1:20 small-scale experimental model and a CFD model were developed. The experimental model is used to evaluate the ventilation performance, which demonstrates the temperature distribution in the atrium, ventilation rate, and heat removed by ventilation. The CFD numerical model is used to examine the smoke control performance, which shows the smoke layer heights.
Experimental settings
To compare the ventilation performance of the new and traditional NV systems, a 1:20 small-scale acrylic model, 1.5 m (L) × 0.6 m (W) × 1.5 m (H), was built, as shown in Fig. 1. To switch between the new and traditional NV systems, a removable segmentation slab is placed at a height of 0.8 m. A ventilation shaft is installed at the side of the model.
To simulate the internal cooling load and generate buoyancy forces, a 250 W heater, 0.6 m (L) × 0.2 m (W), is positioned at the centre of the atrium bottom. To measure the temperature distribution inside the atrium, thermocouples with an accuracy of 0.3 °C were installed at positions A and B and in the shaft, as shown in Fig. 1(b) and (c). At the inlet and outlet openings, two anemometers with an accuracy of 0.1 m/s were installed to measure the air velocity.
For the traditional NV system as
Numerical settings
To compare the smoke control performance of the new and traditional NV systems, two small-scale CFD numerical simulation models were built in the Fire Dynamics Simulator (FDS), as Fig. 2 shows.
According to the literature, in an atrium with limited fuel, the heat release rate (HRR) can be assumed to be 2 MW [11]. Based on Froude scaling (1:20), a 0.1 m × 0.1 m fire source with a 1.1 kW HRR was simulated at the centre of the atrium bottom. Considering the limitation of the makeup air velocity (1.02 m/s), makeup air openings were added on the two sides of the atrium (0.17 m²). In this simulation, based on the recommended ratio of the characteristic fire diameter to grid size (i.e., the grid size should not be larger than 0.015 m) [12], a 0.013 m grid size was used.
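The 1.1 kW model fire can be cross-checked against the 2 MW full-scale HRR with Froude scaling; the short sketch below is illustrative only and not part of the original study (it also assumes the 1.02 m/s makeup-air limit refers to the full-scale value). Heat release and volumetric flow rates scale with the 5/2 power of the length ratio, velocities with the 1/2 power.

```python
# Froude-scaling cross-check for the 1:20 model (illustrative only)
scale = 1.0 / 20.0

def scale_hrr(q_full_kw):
    """Heat release rate scales with the 5/2 power of the length ratio."""
    return q_full_kw * scale ** 2.5

def scale_velocity(v_full_m_s):
    """Velocities scale with the square root of the length ratio."""
    return v_full_m_s * scale ** 0.5

print(scale_hrr(2000.0))      # 2 MW full scale -> about 1.1 kW in the model
print(scale_velocity(1.02))   # 1.02 m/s makeup-air limit -> about 0.23 m/s in the model
```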
The first model shows the performance of the traditional mechanical exhaust system. According to the ASHRAE handbook [13], this system has six exhaust inlets, each with a 13.4 L/s flow rate (81 L/s in total). This exhaust flow rate aims to maintain a minimum smoke layer of 20% of the floor-to-ceiling height when a 1.1 kW fire occurs. The second model represents the new mechanical exhaust system. According to the ASHRAE handbook [13], due to the reduction of the atrium height, the required volumetric flow rate is only 32 L/s for the same 1.1 kW fire and smoke layer depth.
For these two cases, the smoke layer heights were measured at positions A and B (see Fig. 1). Fig. 3 shows that the temperatures at the bottom are very close for the two NV systems, which means that the new NV system can keep the temperature of the occupied space as low as the traditional NV system. For example, the temperature differences between the two NV systems are 1.1 °C and 0.5 °C at the bottom of positions A and B, respectively.
To further investigate the ventilation performance of the two NV systems, Fig. 4 presents the measured mass flow rate and the calculated heat removed by ventilation. It was found that, compared with the traditional NV system, the mass flow rate and the heat removed by ventilation in the new NV system increase by 22% and 21%, respectively. These results show that the new NV system can better utilize the buoyancy force and ventilation to remove heat.
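The heat removed by ventilation in Fig. 4 is presumably computed from the measured mass flow rate and the air temperature rise between inlet and outlet; a minimal sketch of this calculation is given below, with placeholder values rather than measured data.

```python
def heat_removed_by_ventilation(mass_flow_kg_s, t_out_c, t_in_c, cp_j_kg_k=1005.0):
    """Sensible heat carried out by the ventilation air flow, Q = m_dot * cp * dT (W)."""
    return mass_flow_kg_s * cp_j_kg_k * (t_out_c - t_in_c)

# placeholder values for illustration only
q_traditional = heat_removed_by_ventilation(0.010, 32.0, 25.0)
q_new = heat_removed_by_ventilation(0.012, 32.0, 25.0)   # ~20% larger mass flow rate
print(q_traditional, q_new)
```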
However, it can be found that the temperature near the segmentation slab in the new NV system is clearly higher than the temperature at the same height in the traditional NV system. This difference may be caused by heat transfer through the wall and the airflow pattern, which requires further investigation in the future. The smoke simulation results show that the atrium with the new NV system has better smoke control performance than the atrium with the traditional NV system. Firstly, compared with the atrium with the traditional NV system, the atrium with the new NV system has a thinner smoke layer. For example, Fig. 5(a) shows that the smoke layer mainly spans from 0.9 m to 1.5 m, i.e., the smoke layer depth is 0.6 m, whereas Fig. 5(b) shows that the smoke layer is concentrated between 0.3 m and 0.8 m, i.e., the smoke layer depth is only 0.3 m. Secondly, as mentioned in section 2.2, the mechanical exhaust flow rate in the atrium with the new NV system (32 L/s) is only 40% of that in the atrium with the traditional NV system (81 L/s). However, in this study, only one heat release rate and exhaust flow rate were examined; more fire conditions and mechanical systems should be investigated in the future.
Conclusion
This study proposes a new NV system. The ventilation and smoke control performances of this new ventilation system were evaluated by small-scale experiments and numerical simulations. For NV performance, the atrium with the new NV system has a similar temperature distribution at the atrium bottom (i.e., the occupied space) but a larger ventilation rate than the traditional NV system. The heat removed by ventilation is larger in the new NV system than in the traditional NV system (i.e., better ventilative cooling). For smoke control performance, the new ventilation system requires a lower volumetric flow rate for smoke exhaust than the traditional design, and the smoke layer in the new design (around 0.3 m) is thinner than in the traditional design (around 0.6 m). These two performances indicate that this new system can promote the application of NV in high-rise atria while mitigating the smoke control problem. | 2,326.8 | 2022-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Optimal Patterned Wettability for Microchannel Flow Boiling Using the Lattice Boltzmann Method
Microchannel flow boiling is a cooling method studied in microscale heat-cooling, which has become an important field of research with the development of high-density integrated circuits. The change in microchannel surface characteristics affects thermal fluid behavior, and existing studies have optimized heat transfer by changing surface wettability characteristics. However, a surface with heterogeneous wettability also has the potential to improve heat transfer. In this case, heat transfer would be optimized by applying the optimal heterogeneous wettability surface to channel flow boiling. In this study, a change in cooling efficiency was observed by setting a hydrophobic and hydrophilic wettability pattern on the channel surface under microchannel flow boiling conditions, using a lattice Boltzmann method simulation. In the rectangular microchannel structure, the hydrophobic-hydrophilic patterned wettability was oriented perpendicular to the flow direction. The bubble nucleation and the heat transfer coefficient were observed in each case by varying the length of the pattern and the ratio of the hydrophobic-hydrophilic area. It was found that the minimum pattern length at which individual bubbles can occur, and the wettability pattern in which the bubble nucleation-departure cycle is maintained, are advantageous for increasing the efficiency of heat transfer in channel flow boiling.
Introduction
Microscale heat removal has played a critical role in determining the performance of high-density integrated circuits [1]. Furthermore, because steady air cooling alone is limited, the cooling efficiency can be improved through forced cooling of an artificial fluid flow in a cooling channel [2][3][4].

There has been extensive research to improve microchannel heat transfer efficiency. Most studies have focused on the modification of the surface structure, including micro pin fins [5,6], cavities [7], and porous surfaces [8]. Deng et al. [5] used a laser micro-milling method to propose structured microchannels with micro cone pin fins. Xu et al. [6] studied the flow patterns during flow boiling instability in a microchannel with three types of pin fin arrays. Lin et al. [7] carried out an experimental study on convective boiling heat transfer and the critical heat flux (CHF) of methanol-water mixtures in a diverging microchannel with artificial cavities. Deng et al. [8] investigated flow boiling heat transfer performance in a porous microchannel. As mentioned earlier, although micro structures in microchannels enhance heat transfer, it is challenging to fabricate a physical structure in a microchannel, and it takes considerable processing time to construct multiple channels.

Another line of research on surface characteristics that enhance boiling heat transfer is to use a surface coated with various materials. This modification can be easily achieved using a coating technique that does not involve any deformation of the surface [9] and can easily control the wettability. It is important to note that the characteristics of the two-phase flow are affected by the wettability of a surface. Zhou et al. [10] experimentally investigated the saturated flow boiling difference between hydrophilic and super-hydrophilic surfaces in a rectangular microchannel. They concluded that local dry out occurred on the hydrophilic surface at high heat fluxes for low mass fluxes and did not occur on the super-hydrophilic surface. Park et al. [11] also conducted an experimental study on the effect of wettability on flow boiling. They showed that the CHF is enhanced by a wettable surface. Li et al. [12] numerically investigated the boiling heat transfer for different levels of wettability and found that the CHF and wall superheat decrease at poor wettability. Gong et al. [13] conducted a numerical study on the effects of wettability during saturated pool boiling heat transfer. They demonstrated that a hydrophilic surface shows a higher onset of nucleate boiling temperature, lower heat transfer at low superheat, and higher CHF than a hydrophobic surface. Multiple studies on surfaces of homogeneous wettability have concluded that a hydrophilic surface has higher CHF and higher superheat than a hydrophobic surface [14,15]. The development of surface treatment technology enables the implementation of heterogeneous wettability surfaces [16,17], and patterned wettability has been studied as a method of overcoming the disadvantages of homogeneous wettability. Nam et al. [18] showed that the bubble nucleation speed on a surface with patterned wettability is higher than that on a surface with homogeneous wettability. Jo et al. [19] observed the changes in bubble formation and heat flux by controlling the size and arrangement of hydrophobic patterns on a hydrophilic substrate. Lee et al. [20] employed a lattice Boltzmann method simulation to implement controlled hydrophobic patterns on a hydrophilic substrate and studied various modifications, such as to the shape and arrangement, to optimize patterned wettability.

In previous studies, patterned wettability has been found to influence bubble nucleation and the heat transfer coefficient during phase change. However, these studies focused on pool boiling, and it is necessary to also study the fluid flow in a channel. It is also important to establish a criterion for the optimized form in determining the pattern. In this paper, the effects of flow factors and patterns on patterned-wettability channel flow are investigated using a lattice Boltzmann method simulation, and the optimization of the patterned wettability in the channel flow is presented.
Materials and Methods
In this study, the modified pseudo-potential lattice Boltzmann model proposed by Gong et al. [13,21] is used. This model includes a new interaction force using the exact difference method to minimize spurious velocity and increase the liquid-air density ratio. Other forcing schemes, such as that of Li et al. [22] or Xu et al. [23], can also be used to realize a large density ratio in a pseudo-potential model.
The derivation of the lattice Boltzmann equation through the discretization of the Boltzmann equation (the governing equation) can be found in Appendix A [24]. The evolution equations for the flow and heat transfer are written in terms of the density distribution function fi and the temperature distribution function gi. The evolution of these distribution functions depends upon the position x, the time t, and the relaxation times τ and τT, where τ is the density relaxation time and τT is the thermal relaxation time. The temperature fields of the media are solved concurrently because τT takes different values for the working fluid and the solid substrate. The corresponding equilibrium distribution functions are fi^eq and gi^eq. For the three-dimensional lattice Boltzmann method using the D3Q19 model, with distribution functions in 19 directions, the lattice weights are ω0 = 1/3, ω1-6 = 1/18, and ω7-18 = 1/36, and the discrete velocities ei consist of the rest vector (0, 0, 0), the six axis vectors (±1, 0, 0), (0, ±1, 0), (0, 0, ±1), and the twelve diagonal vectors (±1, ±1, 0), (±1, 0, ±1), (0, ±1, ±1). The body-force term is based on the exact difference method [25], in which the velocity change ∆u = F·δt/ρ is calculated from the total force F. The force F includes the interparticle-interaction force Fint, where β is based on Reference [26] and G(x, x′) takes a nonzero value only when x and x′ are adjacent to one another, with g1 = g0 and g2 = g0/2 for the D3Q19 scheme. The effective mass, ψ(x) = 2[p(x) − ρ(x)]/(6g), is based on the Peng-Robinson equation of state, with a = 0.45724R²Tc²/pc, b = 0.0778RTc/pc, and acentric factor ω = 0.344 in φ0(T). The wettability effect [27] is implemented in the fluid-solid force Fs: the contact angles are controlled by the fluid-solid interaction strength gs, and s(x) is an indicator function of the solid nodes. The gravitational force Fg depends on the local density relative to the average density ρave and the gravitational acceleration g. The macroscopic density ρ, velocity u, and temperature T are obtained as moments of the distribution functions, and the real fluid velocity ureal is obtained via a force correction. The thermal source term ϕ accounts for the phase change. Figure 1 shows a schematic flow chart.
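For reference, the generic form of the evolution equations, equilibrium distributions, exact-difference forcing, and macroscopic relations used by this family of pseudo-potential models is sketched below; the exact force and source terms of Gong et al. [13,21] may differ in detail.

```latex
\begin{align*}
f_i(\mathbf{x}+\mathbf{e}_i\delta t,\,t+\delta t) &= f_i(\mathbf{x},t)-\frac{f_i(\mathbf{x},t)-f_i^{eq}(\mathbf{x},t)}{\tau}+\Delta f_i(\mathbf{x},t),\\
g_i(\mathbf{x}+\mathbf{e}_i\delta t,\,t+\delta t) &= g_i(\mathbf{x},t)-\frac{g_i(\mathbf{x},t)-g_i^{eq}(\mathbf{x},t)}{\tau_T}+\delta t\,\omega_i\,\phi,\\
f_i^{eq} &= \omega_i\rho\left[1+\frac{3\,\mathbf{e}_i\cdot\mathbf{u}}{c^2}+\frac{9(\mathbf{e}_i\cdot\mathbf{u})^2}{2c^4}-\frac{3u^2}{2c^2}\right],\qquad
g_i^{eq} = \omega_i T\left[1+\frac{3\,\mathbf{e}_i\cdot\mathbf{u}}{c^2}+\frac{9(\mathbf{e}_i\cdot\mathbf{u})^2}{2c^4}-\frac{3u^2}{2c^2}\right],\\
\Delta f_i &= f_i^{eq}(\rho,\mathbf{u}+\Delta\mathbf{u})-f_i^{eq}(\rho,\mathbf{u}),\qquad
\Delta\mathbf{u}=\frac{\mathbf{F}\,\delta t}{\rho},\\
\rho &= \sum_i f_i,\qquad \rho\mathbf{u}=\sum_i \mathbf{e}_i f_i,\qquad T=\sum_i g_i,\qquad
\mathbf{u}_{real}=\mathbf{u}+\frac{\delta t\,\mathbf{F}}{2\rho}.
\end{align*}
```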
Simulation Domain Design
In this numerical approach, the characteristic length L0 is the capillary length, L0 = [γ/(g(ρl − ρg))]^(1/2), where γ is the surface tension of the gas-liquid interface, and ρl and ρg are the liquid and gas phase densities of water, respectively, at 100 °C and 1 atm. The normalized superheat T* and the heat flux Q″ are defined with respect to the saturation temperature Ts and the critical temperature Tc, where k is the thermal conductivity of liquid water at 100 °C and 1 atm [20].
Figure 2 shows the structure of the computational domain with a rectangular channel. Figure 2a shows the x-y plane on the bottom channel surface (z = 0). The inlet, where fluid is applied to the channel, is located at x = 0. The outlet, where the channel flow escapes, is located at x = Ll. There is a constant temperature boundary condition at y = 0, and the walls spanning y = Lt in the x-y plane and y = Lw in the y-z plane create a symmetric boundary condition. The symmetric boundary was applied to improve simulation efficiency, under the assumption that both sides of the channel share the same behavior. The hydrophobic-hydrophilic pattern is applied in a stripe formation on the bottom channel surface (z = 0). The other channel surfaces are hydrophilic.

Figure 2b shows the y-z plane at the inlet (x = 0). There is a constant temperature boundary at y = 0, and the walls to y = Lt along the z-axis and to y = Lw along the x-axis form a symmetric boundary condition. z = 0 is the bottom channel surface where the patterned wettability is implemented, and a constant heat flux, Q0, is supplied to the entire surface. There is a wall from z = Lh − Lt to z = Lh, and there is a constant temperature boundary condition at z = Lh. Each geometric dimension can be expressed in capillary lengths: Lw = 0.076L0, Ll = 0.4L0, Lt = 0.005L0, and Lh = 0.17L0.

In Figure 2c, the wettability pattern is shown on the bottom channel surface (z = 0). The striped pattern is repeated throughout the channel. The pattern pitch is a set of hydrophobic and hydrophilic stripes of lengths Lpho and Lphi, respectively. The contact angle of the hydrophobic area is 123°, and the contact angle of the hydrophilic area is 54° [19].

The computational grid is a uniform 500 × 75 × 160 grid. The constant heat flux is 80 kW/m². The Reynolds number of the flow is 0.11. Table 1 shows the pattern pitch and the ratio of Lpho to Lphi (pattern ratio) for each simulation case.
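As an illustration of how the striped pattern can be parameterized, the following sketch builds a hydrophobic/hydrophilic indicator for the bottom wall from a pattern pitch and pattern ratio; the node counts and the mapping from pitch to lattice nodes are assumptions for illustration, not values from Table 1.

```python
import numpy as np

def bottom_wall_pattern(nx, ny, pitch_nodes, ratio):
    """Boolean mask of the bottom wall: True = hydrophobic stripe, False = hydrophilic.

    Stripes run across the channel (y), so the pattern repeats along the flow (x).
    `ratio` is the hydrophobic-to-hydrophilic length ratio L_pho / L_phi.
    """
    l_pho = int(round(pitch_nodes * ratio / (1.0 + ratio)))   # hydrophobic share of one pitch
    x_index = np.arange(nx) % pitch_nodes
    hydrophobic_column = x_index < l_pho
    return np.tile(hydrophobic_column[:, None], (1, ny))

# e.g., a 500 x 75 bottom wall with an assumed 50-node pitch and a 9.00 pattern ratio
mask = bottom_wall_pattern(500, 75, pitch_nodes=50, ratio=9.00)
print(mask.shape, mask.mean())   # fraction of hydrophobic nodes ~ ratio / (1 + ratio)
```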
Validation
The results for the bubble departure diameter and the nucleate boiling heat transfer were compared with analytical and experimental results to validate the numerical model. First, the analytical equation for the bubble departure diameter from the study of Hazi et al. [28] was compared with the simulation results. The simulation was performed in a single hydrophobic dot domain. The z-direction length of the domain was Lh = 8.51L0, the width of the hydrophobic dot was 0.28L0, and the x- and y-direction lengths of the domain were both Lw = 11.26L0. The constant heat flux boundary had a width of Lf = 4.54L0 [20]. Table 2 lists the physical properties used in the simulations: the density ratio of vapor and liquid, surface tension, viscosity, and thermal diffusivity.

The relaxation times are obtained from the kinematic viscosity ν and the thermal diffusivity α (in lattice units, τ = 3ν + 0.5 and τT = 3α + 0.5). Figure 3 shows the bubble departure diameter as a function of the gravitational force g. This result is consistent with the results of a previous study [29]. The bubble departure diameter Dd, which scales approximately as g^(−0.5), was consistent with the theoretical equation given in Equation (23).

In Figure 4, the boiling curve from the experimental study of Jo et al. [19] is compared with the results of this study. The boiling curve is divided into the convection regime and the nucleate boiling regime. In the convection regime, i.e., at lower heat fluxes without bubbles, heat transfer occurs by natural convection only [30,31]. In the nucleate boiling regime, forced convection occurs through bubble nucleation. Figure 4 shows that the numerical data are consistent with the experimental results [19].
Pattern Ratio
Because surface wettability is an important factor for bubble nucleation, controlling the wettability pattern strongly affects the phase change phenomena. In this section, the effect of the pattern ratio, which is the size ratio of the hydrophobic area to the hydrophilic area, is analyzed. Figure 5 shows the types of bubble nucleation on patterned surfaces with different pattern ratios. These cases were compared at an equivalent pattern pitch of 0.03 and a time step of 30,000. Bright green, bright blue, and dark blue correspond to the hydrophobic surface, hydrophilic surface, and channel wall, respectively. At this stage, the bubbles were nucleated on the hydrophobic surface. The nucleated bubbles became smaller as the pattern ratio was lowered. Thus, it was confirmed that the size of the hydrophobic area influences the size of the nucleated bubble. Furthermore, as the size of the hydrophobic area decreased, bubble nucleation was delayed.

Figure 6 shows the boiling curve of superheat and heat flux. In this simulation, the phase change was cyclic and stabilized within a narrow band of superheat and heat flux. Therefore, the types of phase change can be compared by looking at the maximum superheat and heat flux. The region in which the superheat decreases is where the maximum bubble size and the highest heat transfer coefficient were observed. These results show that the lowest maximum superheat was observed in Case 0.03-9.00, so a relatively high heat transfer coefficient can be expected.

Figure 7 shows the normalized heat transfer coefficient as a function of time and confirms the periodicity of bubble nucleation. The heat transfer coefficient increased when a bubble was generated and decreased when the bubble departed. This periodicity is observed identically in all cases. However, as the pattern ratio decreased, the heat transfer coefficient also decreased. Examining the averaged heat transfer coefficient, the highest value was observed in Case 0.03-9.00 and the lowest value in Case 0.03-0.11. The averaged heat transfer coefficient decreased gradually down to a pattern ratio of 1.00 (Case 0.03-1.00) and decreased sharply below that ratio. This was due to the delay in bubble nucleation.
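The heat transfer coefficient plotted in Figure 7 is presumably the surface-averaged wall heat flux divided by the wall superheat, normalized and tracked in time; a minimal post-processing sketch is given below, with hypothetical array names and placeholder values.

```python
import numpy as np

def heat_transfer_coefficient(q_wall, t_wall, t_sat):
    """Instantaneous surface-averaged HTC: h = q'' / (T_wall - T_sat)."""
    superheat = np.mean(t_wall) - t_sat
    return np.mean(q_wall) / superheat

def normalized_htc(h_series, h_ref):
    """Normalize a time series of HTC values by a reference value h_ref."""
    return np.asarray(h_series, dtype=float) / h_ref

# hypothetical time series over one bubble nucleation-departure cycle (placeholder values)
h_series = [2100.0, 2600.0, 3200.0, 2800.0, 2200.0]   # W/(m^2 K)
print(np.mean(normalized_htc(h_series, h_ref=2500.0)))
```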
Pattern Pitch
Figure 8 shows the variation of the boiling curve depending on the pattern pitch. The pattern ratio was fixed at 9.00, which showed the highest heat transfer in Section 4.2.1. As the pattern pitch increased, the hydrophilic area increased and the maximum superheat decreased. However, the change in the boiling curve with the pattern pitch was relatively small because the proportion of hydrophobic area was high due to the high pattern ratio.

Figure 9 indicates the average normalized heat transfer coefficient (NHTC) for each pattern pitch. From pitch 0.01 to 0.03, the average NHTC sharply increased, and beyond pitch 0.03 the average NHTC gradually decreased. The highest average NHTC was observed at pitch 0.03 (Case 0.03-9.00).

The bubble nucleation and the surface temperature for each pattern pitch are shown in Figure 10 for pitch 0.01 (Case 0.01-9.00), pitch 0.03 (Case 0.03-9.00), and pitch 0.05 (Case 0.05-9.00) at a time step of 30,000. Bubbles nucleated individually on the hydrophobic area at pitch 0.03 and pitch 0.05, whereas at pitch 0.01, bubbles coalesced and then slid as soon as they nucleated. As bubbles absorbed more heat from their surroundings, they grew bigger. Therefore, at pitch 0.05, the bubble size was larger and the surface temperature below the bubble was lower. However, at pitch 0.03, the averaged surface temperature was lowest due to the larger number of nucleated bubbles.
Discussion
Figure 5 shows that the hydrophobic area had a significant effect on bubble nucleation. At the same pattern pitch, the bubble diameter increased as the pattern ratio increased. In Case 0.03-0.43 and Case 0.03-0.11, bubble nucleation was not observed at the same time step. These characteristics are also shown in the boiling curve of Figure 6. In Case 0.03-2.33 and Case 0.03-1.00, in which bubble nucleation occurred normally, a boiling curve similar to Case 0.03-9.00 was observed. In Figure 7, the NHTC changes with the pattern ratio. As with the previous results, the average NHTC increased as the pattern ratio increased and decreased sharply as the pattern ratio decreased. In Case 0.03-9.00, the average NHTC was higher than in the other cases. This is due to the bubble nucleation-departure cycle in Figure 7. In the cases of patterned wettability, there is a difference in the NHTC, but the cycle is maintained. In Case 0.03-0.11, the cycle was not maintained because heat transfer was adversely affected by the occurrence of uneven bubbles after the first nucleation.

Figure 8 shows the boiling curve with the pattern pitch, and a similar trend is observed in all cases. However, the increase in pattern ratio also increased the hydrophobic area, and the boiling curves show a similar pattern.

Figure 9 plots the average heat transfer coefficients against the pattern pitch. The pattern ratio was fixed at 9.00. The highest average heat transfer coefficient was observed in Case 0.03-9.00, and the average heat transfer coefficient decreased at higher and lower pattern pitches. The cause of this result can be found in Figure 10, which shows bubble nucleation and surface temperature. As shown in Figure 10, the larger the hydrophobic area, the larger the diameter of the bubble. In other words, the number of generated bubbles increased as the pattern pitch decreased for the same channel length. The increase of the contact area, due to the generation of many bubbles, has a positive effect on the heat transfer of the surface. This can be confirmed by the lower surface temperature. If the pattern pitch decreases below a certain level, the hydrophobic area decreases. This means that the distance between the nucleated bubbles is small enough for them to coalesce quickly, which reduces the heat transfer efficiency.
Conclusions
In this study, the optimization of heat transfer through a hydrophobic pattern was performed on a microchannel surface with hydrophilic characteristics. The pattern cases were described by a pattern pitch, representing the overall length of the pattern, and a pattern ratio, representing the ratio of hydrophobic to hydrophilic area. Pattern optimization requires several conditions.

• From the viewpoint of heat transfer, the wider the hydrophobic area, the better. Therefore, an excellent heat transfer coefficient is observed on a surface with a high pattern ratio. At certain pattern ratios, the heat transfer coefficient is better than that of the uniform hydrophobic surface because the bubble nucleation-departure cycle remains stable due to the pattern ratio effect.
Discussion
Figure 5 shows that the hydrophobic area had a significant effect on bubble nucleation. At the same pattern pitch, the bubble diameter increased as the pattern ratio increased. In Case 0.03-0.43 and Case 0.03-0.11, bubble nucleation was not observed at the same time step. These characteristics are also shown in the boiling curve of Figure 6. In Case 0.03-2.33 and Case 0.03-1.00, in which bubble nucleation occurred normally, a boiling curve similar to that of Case 0.03-9.00 was observed. Figure 7 shows how the NHTC changes with the pattern ratio. As with the previous results, the average NHTC increased as the pattern ratio increased and decreased sharply as the pattern ratio decreased. In Case 0.03-9.00, the average NHTC was higher than in the other cases. This is due to the bubble nucleation-departure cycle in Figure 7. In the cases of patterned wettability, there is a difference in the NHTC but the cycle is maintained. In Case 0.03-0.11, it can be seen that the cycle was not maintained because heat transfer was adversely affected by the occurrence of uneven bubbles after the first nucleation.
Figure 8 shows the boiling curves with the pattern pitch; a similar trend is seen in all cases. However, the increase in pattern ratio also increased the hydrophobic area, and the boiling curves therefore show a similar pattern.
Figure 9 plots the average heat transfer coefficients with the pattern pitch. The pattern ratio was fixed at 9.00. The highest average heat transfer coefficient was observed in Case 0.03-9.00, and the average heat transfer coefficient decreased at higher and lower pattern pitches. The cause of this result can be found in Figure 10, which shows bubble nucleation and surface temperature. As shown in Figure 10, the larger the hydrophobic area, the larger the diameter of the bubble. In other words, the number of generated bubbles increased as the pattern pitch at the same channel length decreased. The increase of the contact area, due to the generation of many bubbles, has a positive effect on the heat transfer of the surface. This can be confirmed by the lower surface temperature. If the pattern pitch decreases below a certain level, the hydrophobic area decreases. This means that the distance between the nucleated bubbles is close enough for them to coalesce quickly, which reduces the heat transfer efficiency.
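For reference, the averaged heat transfer coefficient discussed here can be evaluated in post-processing from the wall heat flux and superheat histories as h = q''/(T_w − T_sat); the sketch below follows that common definition, but the variable names and the normalization by a reference coefficient are our assumptions, not the authors' code.

```python
import numpy as np

def average_nhtc(q_wall, t_wall, t_sat, h_ref):
    """Time-averaged normalized heat transfer coefficient.

    q_wall : wall heat flux per time step [W/m^2]
    t_wall : averaged surface temperature per time step [K]
    t_sat  : saturation temperature of the working fluid [K]
    h_ref  : reference heat transfer coefficient used for normalization [W/m^2-K]
    """
    superheat = np.asarray(t_wall) - t_sat      # wall superheat at each step
    h = np.asarray(q_wall) / superheat          # instantaneous HTC, h = q'' / (T_w - T_sat)
    return np.mean(h / h_ref)                   # average of the normalized values

# Example with made-up numbers
print(average_nhtc(q_wall=[5e4, 6e4, 5.5e4],
                   t_wall=[383.0, 384.0, 383.5],
                   t_sat=373.15, h_ref=5e3))
```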
Conclusions
In this study, the optimization of heat transfer through a hydrophobic pattern was performed on a microchannel surface with hydrophilic characteristics. The pattern cases were characterized by a pattern pitch, representing the overall length of the pattern, and a pattern ratio, representing the ratio of hydrophobic to hydrophilic area. Pattern optimization requires several conditions.
• From the viewpoint of heat transfer, the wider the hydrophobic area, the better. Therefore, an excellent heat transfer coefficient is observed on a hydrophobic surface with a high pattern ratio. At certain pattern ratios, the heat transfer coefficient is better than that of a uniformly hydrophobic surface because the bubble nucleation-departure cycle is stabilized by the pattern ratio effect.
• The number of generated bubbles increases as the pattern pitch decreases, which is effective for surface heat transfer. However, when the pattern pitch is reduced below 0.03L0, the number of bubbles decreases due to an insufficient hydrophobic area to generate bubbles. As bubble generation decreases, surface heat transfer also decreases. From these results, it was concluded that finding an optimal area is necessary when determining the pattern pitch.
• To optimize channel heat transfer, patterned wettability must be applied to widen the hydrophobic area while keeping the bubble nucleation-departure cycle uniform. It is also necessary to use the minimum pattern pitch at which many bubbles are generated. However, a pattern pitch below a certain size reduces the effect of the hydrophobic area and hinders the generation of individual bubbles, which adversely affects heat transfer.
The optimization in this study is based on a simple stripe pattern. Stripes oriented perpendicular to the flow were able to bring about pattern optimization by affecting the bubble nucleation-departure cycle. Potential future studies include investigating the application and optimization of different types of patterns depending on the shape of the channel or the trend of the flow.
Figure 2 .
Figure 2. (a) Simulation domain description of the x-y plane at z = 0. (b) Simulation domain description of the y-z plane at x = 0. (c) Pattern wettability description of the x-y plane at z = 0.
Figure 3 .
Figure 3. Departure diameter as a function of gravitational acceleration. Reprinted with permission from [20]. Copyright 2018 Elsevier.
Figure 4 .
Figure 4. Comparison of the experimental data from Jo et al. and the numerical data of the current study. Reprinted with permission from [19]. Copyright 2018 Elsevier. The convection regime and the nucleate boiling regime are indicated by shading, and the slopes of both regimes are indicated by dashed lines. Reprinted with permission from [20]. Copyright 2018 Elsevier.
Figure 5 .
Figure 5. Comparison of bubble nucleation type and biggest bubble diameter according to pattern ratio. Time step = 30,000 and the pattern pitch is 0.03L0.
Figure 6 .
Figure 6. Boiling curves on a heat flux vs. superheat graph according to the pattern ratio at a 0.03L0 pattern pitch.
Figure 7 .
Figure 7. Normalized heat transfer coefficient vs. time step graph.
Figure 8 .
Figure 8. Boiling curves on a heat flux vs. superheat graph according to the pattern pitch at a 9.00 pattern ratio.
Figure 9 .
Figure 9. Average normalized heat transfer coefficient according to the pattern pitch at a 9.00 pattern ratio.
Figure 10 .
Figure 10. Bubble nucleation and surface temperature according to pattern pitch; time step = 30,000; 9.00 pattern ratio.
Table 2 .
Physical properties of the conducted simulations.
| 7,694.4 | 2018-08-17T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Kinetic Analysis of Crystallization Processes in In60Se40 Thin Films for Phase Change Memory (Pram) Applications
In the present work, a systematic investigation of the crystallization kinetics of the In60Se40 alloy has been made. Thin films of In60Se40 alloy were prepared by thermal evaporation using an Edward Auto 306 evaporation system. Electrical measurements at room temperature and upon annealing at different heating rates were done by the four point probe method using a Keithley 2400 source meter interfaced with a computer using LabVIEW software. The dependence of sheet resistance on temperature showed a sudden drop in resistance at a specific temperature corresponding to the transition temperature at which the alloy changes from amorphous to crystalline. The transition temperature was also found to increase with the heating rate. From the heating rate dependence of the peak crystallization temperature (Tp), the activation energy for crystallization was determined using the Kissinger analysis. The films were found to have an electrical contrast of about six orders of magnitude between the as-deposited and the annealed states, a good quality for PRAM applications. The activation energy was determined to be 0.354 ± 0.018 eV.
Background
Chalcogenide glasses containing sulfur (S), selenium (Se) or tellurium (Te) constitute a rich family of vitreous semiconductors. There has been intense research activity based on these glasses (Lathrop and Eckert 1989; Rao and Mohan 1980; Asokan et al., 1889; Balasubramanian and Rao 1994; Kolobov et al., 2005 and Kumar et al., 2008) from the viewpoint of basic physics as well as device technology. The freedom allowed in preparing glasses with varied compositions brings about changes in their short-range order and thus results in variation of their physical properties. It is therefore easy to tailor their various properties as desired for technological applications.
These materials have a wide range of applications such as optical fibres, memory devices, and reversible phase change optical recording (Suri et al., 2006). Besides the wide commercial device applications of Se, such as switching, memory, and xerography, it also exhibits a unique property of reversible transformation (Aggarwal and Sanghera, 2002; Tanaka, 1989), a property that makes it very useful in memory devices. Se based glasses can be heated above the glass transition temperature and then quenched to the amorphous state or cooled slowly to the crystalline state, with each of these two reversible states having distinct optical and electrical properties, which is the basis for a digital phase change memory device. It is known that the addition of a metal [indium (In), bismuth (Bi), antimony (Sb), silver (Ag), tin (Sn), arsenic (As), lead (Pb), zinc (Zn)] or a chalcogen (S, Te) to Se can bring remarkable alterations in properties and reduce ageing. The effects of impurities on the electronic properties of amorphous chalcogenide glasses have been the subject of serious debate ever since their discovery.
Thin Film Deposition
Thin films of indium and selenium were prepared by the thermal evaporation technique. Microscope slides having dimensions 2.54 cm x 7.62 cm were used as substrates. The substrates were thoroughly cleaned before placing them on the substrate holder directly above the evaporation boat. About 0.1 g of the fine powder of a mixture of indium and selenium was placed on the molybdenum boat and the material heated. The vapour deposited on the substrates, forming thin films of In60Se40.
Sheet Resistance Measurement
Electrical resistivity measurements of the thin films were done using the four point probe arrangement, adopting the van der Pauw method. With a symmetrical square geometry adopted, the probe leads were connected to the Keithley source meter for voltage and current measurements. Figure 1 shows a schematic diagram of the four-point probe resistivity measurement. A current of 1.0 x 10^-10 A was applied through contacts A and B and the potential drop across D and C was measured, as shown in figure 1 (a). The probe tips on the sample were then switched and the same amount of current was applied through contacts A and D. The potential drop was again measured across contacts B and C, as illustrated in figure 1 (b). The measured values of current, voltage drops and film thickness were used to compute the sheet resistivity. The sheet resistance was also measured upon annealing. Thermal annealing was done by placing the In60Se40 thin films in a quartz tube of an electric furnace with argon ambient. The furnace was programmed to heat the sample at a particular heating rate; the annealing temperatures ranged from 25°C to 250°C. Measurement of voltage and current was done using the Keithley SourceMeter, which was configured to source current and measure voltage simultaneously on the thin film sample. A four wire remote sensing technique was adopted to minimize errors due to potential drops in the test leads when sourcing or reading voltages. Four wire remote sensing ensured that the programmed voltage was delivered to the thin film under test.
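For illustration, the two readings from the swapped contact configurations can be turned into a sheet resistance through the van der Pauw relation exp(−πR_A/R_s) + exp(−πR_B/R_s) = 1; the sketch below does this numerically, and for the symmetrical square geometry (R_A = R_B = R) it reduces to R_s = πR/ln 2. The SciPy root finder and the example resistance value are our assumptions, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(r_a, r_b):
    """Solve exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for the sheet resistance Rs.

    r_a, r_b : measured transfer resistances V/I (Ohms) from the two
               van der Pauw contact configurations.
    """
    f = lambda rs: np.exp(-np.pi * r_a / rs) + np.exp(-np.pi * r_b / rs) - 1.0
    lo = 1e-6 * max(r_a, r_b)     # f(lo) < 0
    hi = 1e6 * max(r_a, r_b)      # f(hi) > 0
    return brentq(f, lo, hi)

# Hypothetical symmetric measurement: Rs = pi*R/ln(2) ~ 1.7e8 Ohm/sq
r = 3.74e7
print(sheet_resistance(r, r))
```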
Determination of Activation Energy of Crystallization
For the stoichiometric combination In60Se40 under investigation, temperature-dependent sheet resistance curves were obtained at different heating rates, varied between 2.5 K/min and 12.5 K/min. For each run, fresh specimens of the film were used. The peak crystallization temperature for each specific heating rate was obtained from the minimum of the first derivative of the sheet resistance. The Kissinger plots were used to determine the activation energy for the sample.
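The Kissinger analysis referred to here amounts to a straight-line fit of ln(β/Tp²) against 1/Tp, whose slope equals −Ea/kB. A minimal sketch follows; the peak temperatures in the example are placeholders, not the measured values.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def kissinger_activation_energy(beta_k_per_min, t_peak_c):
    """Activation energy (eV) from the Kissinger plot ln(beta/Tp^2) vs 1/Tp."""
    beta = np.asarray(beta_k_per_min, dtype=float)      # heating rates, K/min
    t_p = np.asarray(t_peak_c, dtype=float) + 273.15    # peak temperatures, K
    x = 1.0 / t_p
    y = np.log(beta / t_p**2)
    slope, _ = np.polyfit(x, y, 1)                      # slope = -Ea / kB
    return -slope * K_B

# Heating rates from the paper with hypothetical peak temperatures
print(kissinger_activation_energy([2.5, 5, 7.5, 10, 12.5],
                                  [105, 109, 112, 115, 117]))
```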
Measurement of Film Thickness for as Deposited Films
The thickness of the as deposited films was determined using Alpha-Step IQ Surface profiler with a vertical resolution of 0.012 Å and vertical range of 100 Å-0.4 mm. The profiler compares the thickness of a blank glass slide with that having a film. The thickness was found to be 213.0 ± 0.1 nm.
Sheet Resistance Dependence on Temperature
Sheet resistance for as deposited and annealed films was measured using the four point probe, following the procedure proposed by van der Pauw, and compared to the values obtained for GST (Friedrich et al., 2000). The sheet resistance of the as deposited films was 1.696 x 10^8 ± 0.0005 Ω/Sq, which is higher than that of GST at 0.9 x 10^8 ± 0.0005 Ω/Sq, a material widely used for PCM applications.
The obtained values of sheet resistance for as deposited films are approximately twice the sheet resistance of the widely used GST. This is an advantage with respect to PCM applications as high crystalline resistance reduces reset operating power.
Variation of Sheet Resistance with Temperature for In60Se40
Fig. 2. The graph of sheet resistance dependence on temperature for In60Se40.
Figure 2 shows the variation of sheet resistance upon annealing for an In60Se40 thin film at a heating rate of 5 K/min. Annealing was done over a temperature range between 25°C and 250°C. Initially the film had a resistance of 1.72 x 10^8 Ω/Sq, which reduced to 1.71 x 10^6 Ω/Sq after annealing. The drop in resistance shows that there was a change of phase from amorphous to crystalline; however, the change was not abrupt. The transition took place in the temperature range between 84°C and 122°C. The transition temperature was obtained by differentiating the results; the differentiated curve is shown in figure 3. From figure 3, the transition temperature for In60Se40 was 105.38 ± 0.54°C, which implies that a PRAM made from this alloy will be stable at room temperature since the transition temperature is above room temperature. Table 1 gives a summary of the sheet resistance for as deposited and annealed films. The values have also been compared to those of GST according to Friedrich et al. (2000). These values were used to compute the sheet resistivity using the measured values of thickness.
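A sketch of the derivative-based reading of the transition temperature described above: the transition is taken at the minimum of dR/dT. The synthetic resistance curve below is only for demonstration.

```python
import numpy as np

def transition_temperature(temps_c, sheet_resistance):
    """Temperature (deg C) at which dR/dT is most negative, i.e. the sharpest drop."""
    t = np.asarray(temps_c, dtype=float)
    r = np.asarray(sheet_resistance, dtype=float)
    drdt = np.gradient(r, t)          # numerical derivative of R with respect to T
    return t[np.argmin(drdt)]         # minimum of the first derivative marks the transition

# Synthetic curve: resistance falling by ~2 orders of magnitude around 105 deg C
t = np.linspace(25, 250, 500)
r = 1.7e8 / (1.0 + np.exp((t - 105.0) / 4.0)) + 1.7e6
print(transition_temperature(t, r))   # ~105
```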
Comparison of Sheet Resistance for as Deposited and as Annealed Films
The sheet resistivity for In60Se40 was 3.61 x 10^3 Ωcm as deposited and 6.37 x 10^-3 Ωcm upon annealing, showing an electrical contrast of six orders of magnitude. According to Chung et al. (2008), an electrical contrast of at least three orders of magnitude is required for a phase change material. Therefore, based on electrical contrast, this alloy is suitable for PCM applications. The In60Se40 thin films were annealed at heating rates of 2.5, 5, 7.5, 10 and 12.5 K/min within the temperature range of 25°C to 250°C.
The Activation Energy of Crystallization for In 60 Se 40
The results for the crystallization temperature dependence on heating rate for In60Se40 show a positive shift in the transition as the heating rate is increased over the heating rates 2.5, 5, 7.5, 10 and 12.5 K/min. Figure 4 shows the various shifts as the heating rate is increased, i.e., the variation of sheet resistance upon annealing at the different heating rates (2.5, 5, 7.5, 10 and 12.5 K/min). A positive shift in the transition temperature with increasing heating rate is observed for this alloy. The crystallization parameters for this alloy are tabulated in Table 3. Figure 5 shows the Kissinger plot for the alloy. The activation energy for In60Se40 was found to be 0.354 ± 0.018 eV.
The values of the transition temperature of the alloy at all heating rates (1, 2.5, 5, 7.5 and 10 K/min) were much higher than room temperature. This is an advantage because it is essential to prevent self-transition of recording materials between the two phases. Hence one can expect a PCM made from this alloy to remain stable in its amorphous and crystalline states at room temperature.
Since the crystalline SET state is a stable low resistance state, it is the stability of the quenched high resistance RESET phase which dominates retention issues (Burr et al., 2010). The amorphous phase suffers from two independent resistance altering processes: resistance drift and spontaneous crystallization (Burr et al., 2010). Although resistance drift does not cause any data loss, thermally activated crystallization leads to significant reduction in the resistance of the active layer, causing eventual retention failures for the binary storage.
Conclusion and Recommendation
The crystallization kinetics of In60Se40 thin films has been successfully investigated. The crystallization temperatures were much higher than room temperature; therefore a PRAM cell made from this material is stable, and this alloy is suitable for PRAM applications. These temperatures prevent self-transition of the glassy alloy, which is required for application as a stable glass (Shukla et al., 2000). The films also showed high crystalline resistance, which reduces the reset operating power. An electrical contrast of six orders of magnitude was also exhibited. According to Chung et al. (2008), an electrical contrast of at least three orders of magnitude is required for a phase change material. A high activation energy of 0.354 ± 0.018 eV was realised. The activation energy for crystallization is proportional to the crystallization temperature, and a high crystallization temperature leads to high data stability.
"Materials Science"
] |
Fuzzy Clustering Applied to ROI Detection in Helical Thoracic CT Scans with a New Proposal and Variants
The detection of pulmonary nodules is one of the most studied problems in the field of medical image analysis due to the great difficulty in the early detection of such nodules and their social impact. The traditional approach involves the development of a multistage CAD system capable of informing the radiologist of the presence or absence of nodules. One stage in such systems is the detection of ROIs (regions of interest) that may be nodules in order to reduce the space of the problem. This paper evaluates fuzzy clustering algorithms that employ different classification strategies to achieve this goal. After characterising these algorithms, the authors propose a new algorithm and different variations to improve the results obtained initially. Finally, it is shown that the most recent developments in fuzzy clustering are able to detect regions that may be nodules in CT studies. The algorithms were evaluated using helical thoracic CT scans obtained from the database of the LIDC (Lung Image Database Consortium).
Introduction
In the field of medical image analysis, the thorax area has been the object of extensive investigation [1] due to the complexity of the pulmonary structure itself, with approximately 23 generations of branching arteries, and the problems experienced in the detection of elements of interest within this structure (nodules, tumours, etc.) [2].
The most widely used images for diagnosis have traditionally been chest X-rays because of their low cost. However, images obtained using helical CTs are being used more and more since they enable high-definition observation of lung structures, allowing images to be acquired in intervals of time shorter than a breath and with resolutions of less than 1 mm. It is becoming increasingly common to find multislice CTs [3], which provide a more accurate image of the area under examination, although they are rather costly and still not very widespread.
Within this field, one of the problems that has received most attention is the detection of pulmonary nodules due to the high rates of lung cancer found in modern societies. This disease has one of the highest mortality rates (Figure 1, [4]) and therefore early detection is fundamental [5].
The analysis of these types of studies is extremely time consuming for the radiologist because of the huge amount of data that has to be analyzed (more than 100 thin-section images) [6] and also due to the difficulty in distinguishing nodules in their initial phase because they are not clearly defined and due to their similarity to other elements present in the lungs. Clinically speaking, a solitary pulmonary nodule is considered to be any isolated and intrapulmonary lesion, rounded or oval in shape, surrounded by ventilated lung, whose diameter according to arbitrarily established criteria is less than 4 cm [7]. Furthermore, the contours of a nodule or mass must also be sufficiently defined and clear in order to be able to determine its approximate size with relative precision.
On the basis of the aforementioned information, multiple CAD (Computer Aided Diagnosis) systems have been developed to perform this task with a wide variety of techniques being used for this purpose: [8] proposed a multilevel thresholding technique designed to identify connected components of similar intensity and eliminate vessels present in the CT in order to detect nodules; [9] divided the CT into grids using a genetic algorithm that used a template to detect elements which could correspond with nodules; [10] proposed a new QCI filter as part of a CAD to detect nodules in CT; and [11] used thresholding and morphological operators to detect candidate nodules followed by the use of a Fisher Linear Discriminant classifier to reduce false positives. Other papers describing major systems within this area are [12][13][14][15][16][17][18][19]. Our research group is developing a CAD system to perform this task automatically. This system uses fuzzy logic as a basis for detecting lung nodule candidates and, in particular, fuzzy clustering algorithms.
In Figure 2 we can see the phases of a typical CAD system. The first task to be undertaken in pulmonary CAD systems is a preprocessing stage to isolate the pulmonary lobes, removing external elements that may affect classification. The system we are developing also includes an initial stage for this purpose [20,21], Figure 3. In this process each of the unwanted elements (e.g., the diaphragm) is isolated and eliminated in a series of steps and when the only remaining elements are the lungs themselves, a range of morphological operations (opening, closing) are applied to eliminate any defects that might have arisen during the process, such as the recuperation of pixels previously eliminated from the juxtapleural nodules.
This work focuses on the following phase, the purpose of which is to detect ROIs with a view to reducing the search area and obtaining the lowest possible number of candidate zones that may be nodules; the aim is to reduce the number of false positives and increase that of true positives. The objective is for this stage to be conducted automatically by the system given its advantages: a significant reduction in the workload of the specialist and the elimination of bias errors.
In this paper, we present and analyze the results of various fuzzy clustering algorithms that use different strategies to classify the pixels that make up an image. We also propose a new algorithm, formulated by merging two of the algorithms we have analyzed.
The FCM, KFCM, SFCM, and SKFCM algorithms were studied and the MSKFCM algorithm is proposed. The algorithms analyzed using spatial information were modified so that 3D neighborhoods could be used in the classification process (these algorithms were originally designed for use with 2D neighborhoods) which should allow for a better classification, working with further information, and offer a better reflection of the authentic anatomical structure. Section 2 on material discusses the characteristics of the studies used in the tests and the tools employed to implement the algorithms. A description is then provided of each algorithm. Section 3 describes the methodology used in the tests. In Section 4 we present and discuss the results obtained and the metrics used to take measurements. Finally, the conclusions will be considered.
Material and Methods
For the purposes of this analysis we used a set of helical thoracic CT scans from the LIDC (Lung Image Database Consortium) [22], which can be accessed from the National Biomedical Imaging Archive (NBIA). The goal of this project is to develop a reference repository of CT lung images for the development and evaluation of CAD systems in the detection of lung nodules. Five North American institutions have collaborated in its construction: Cornell University; the University of California, Los Angeles; the University of Chicago; the University of Iowa; and the University of Michigan.
Each image was annotated by four experts, initially as a blind review, so that any discrepancies between annotations could then be forwarded to the corresponding experts, who could then make the appropriate amendments. The images are stored according to the DICOM standard, sized 512 × 512, with a pixel size from 0.5 to 0.8 mm and a 12-bit grayscale in Hounsfield Units (HU). These CT scans were acquired from a wide range of scanner manufacturers and models with X-ray tube currents ranging from 40 to 627 mA (mean: 221.1 mA) and tube voltages of either 120 or 140 kVp. The CT studies were reconstructed with pixel resolution ranging from 0.461 to 0.977 mm (mean: 0.688 mm) and slice thickness ranging from 0.45 to 5.0 mm (mean: 1.74 mm) [23].
Each analysis incorporates an XML file indicating the presence of one or more nodules (or their absence), their type, and their contour (specified by the coordinates of the constituent pixels). Figure 4 shows some of the slices used in the study with the location of the nodule marked by a black rectangle. Fuzzy clustering algorithms were used to detect ROIs due to their capacity for handling multidimensional information, making them easily adaptable for the classification of images, their low sensitivity to noise, which should make it easier to differentiate between nodules and other elements in images, and their capacity for handling ambiguous information, a common characteristic of medical images due to the low signal/noise ratio [24].
In the last years, new algorithms have been developed in order to resolve the problems associated with classical fuzzy algorithms and to provide better results [25][26][27]. In this paper, we selected some of the recent fuzzy algorithms that use kernel functions (KFCM) to simulate calculation in larger spaces or algorithms that use the pixel neighborhoods to calculate their membership (SFCM), increasing their insensitivity to noise. Moreover, those algorithms have been developed and tested within the medical image analysis field, being suitable to the problem described in this paper.
In this study, the two algorithms mentioned above were combined to obtain a new spatial kernelized algorithm and to determine whether the combination of these two techniques yielded better results than each technique individually for the problem addressed in this research. The SKFCM algorithm was analyzed to estimate the improvement that the new algorithm was expected to offer compared with the other algorithms using the same strategy. This has also been used for medical imaging analysis and specifically for MRI (Magnetic Resonance Imaging) in brain scans [28].
To enhance the quality and increase the scope of the analysis, the spatial algorithms were modified so that 3D neighborhoods could be used. These neighborhoods enabled a better appreciation of the real structure of the element to be defined and used more information in the classification process. For this reason, we expected to obtain a better classification than by using 2D neighborhoods.
We also analyzed the FCM algorithm which was the first fuzzy clustering algorithm to be developed and is currently used as a reference in the literature.
The ITK toolkit was used to implement the algorithms. This is an open-source software toolkit for registering and segmenting medical images, developed in C++ using the generic programming paradigm. The algorithms were implemented using base classes since there was no support for fuzzy logic.
Details of the implementation of some algorithms used in this analysis were published in Insight Journal [29] and are freely available to any interested researchers to allow the scientific community to confirm that the algorithms were implemented correctly and facilitate their use.
FCM (Fuzzy C-Means).
The FCM algorithm was developed by Bezdek et al. [30] and is the first fuzzy clustering algorithm. It is a method for the division of sets based on Picard iterations on the necessary conditions for minimising the square-error objective function J_m = \sum_{i=1}^{c} \sum_{j=1}^{N} u_{ij}^m \, \|x_j - v_i\|^2. In this expression, u_{ij} represents the membership value of pixel j to class i, x_j is the jth pixel, v_i is the centroid for class i, c is the number of clusters, N is the number of pixels to classify, and m is a weight factor that must be bigger than 1. The FCM initially needs the number of clusters into which the image will be divided and a sample of each cluster.
The steps of this algorithm are as follows.
(1) Calculation of the membership values: u_{ij} = 1 / \sum_{k=1}^{c} \left( \|x_j - v_i\| / \|x_j - v_k\| \right)^{2/(m-1)}.
(2) Calculation of the new centroids: v_i = \sum_{j=1}^{N} u_{ij}^m x_j / \sum_{j=1}^{N} u_{ij}^m.
(3) If the error stays below a certain threshold, stop. In the contrary case, return to step (1). The parameters that were varied in the analysis of the algorithm were the samples provided and the value of m.
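An illustrative NumPy sketch of the FCM iteration just outlined (membership update, centroid update, convergence check); this is a generic implementation of the standard algorithm applied to a 1-D pixel feature, not the authors' ITK code.

```python
import numpy as np

def fcm(pixels, centroids, m=2.0, eps=1e-4, max_iter=100):
    """Fuzzy C-Means on a 1-D feature (e.g. one intensity value per pixel).

    pixels    : (N,) array of pixel values
    centroids : (c,) array with one initial sample per cluster
    """
    x = np.asarray(pixels, dtype=float)
    v = np.asarray(centroids, dtype=float).copy()
    for _ in range(max_iter):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12        # (N, c) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                   # membership u_ij
        v_new = (u ** m).T @ x / (u ** m).sum(axis=0)       # centroid update
        if np.linalg.norm(v_new - v) < eps:                 # convergence test
            return u, v_new
        v = v_new
    return u, v

u, v = fcm(np.array([0.1, 0.2, 0.8, 0.9]), centroids=np.array([0.0, 1.0]))
print(v)   # centroids settle near the two groups of values
```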
KFCM (Kernelized Fuzzy C-Means).
This algorithm was proposed by Chen and Zhang [31] and is based on FCM, integrated with a kernel function that allows the transfer of the data to a space with higher dimensionality, which makes it easier to separate the clusters. The purpose of the kernel function is to "simulate" the distances that would be obtained by transferring the points to a space with more dimensions, which in most cases would imply exaggerated computational costs; the proposed objective function replaces the Euclidean distance of the FCM with the kernel-induced distance \|\Phi(x_j) - \Phi(v_i)\|^2 = K(x_j, x_j) - 2K(x_j, v_i) + K(v_i, v_i). The kernel functions used most often are polynomial functions (5), K(x, y) = (x \cdot y + d)^p, and Gaussian radial basis functions (6), K(x, y) = \exp(-\|x - y\|^2 / \sigma^2), where \sigma is the sigma of the Gaussian function. The algorithm consists of the following steps.
(1) Calculation of the membership function from the kernel-induced distances. (2) Calculation of the new kernel matrices K(x_j, v_i) and K(v_i, v_i) after updating the centroids. (3) If the error stays below a determined threshold, stop. In the contrary case, return to step (1).
The different parameters for the analysis of this algorithm were the initial samples and number of clusters.
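As an illustration of the kernelized distance, a sketch of one KFCM-style update with a Gaussian RBF kernel, for which ‖Φ(x) − Φ(v)‖² reduces to 2(1 − K(x, v)); the sigma value and variable names are assumptions rather than the published implementation.

```python
import numpy as np

def gaussian_kernel(x, v, sigma):
    """K(x, v) = exp(-|x - v|^2 / sigma^2)."""
    return np.exp(-np.abs(x[:, None] - v[None, :]) ** 2 / sigma ** 2)

def kfcm_step(x, v, m=2.0, sigma=150.0):
    """One KFCM iteration: kernel-induced distances, memberships, centroids."""
    k = gaussian_kernel(x, v, sigma)                      # (N, c) kernel matrix K(x_j, v_i)
    d2 = 2.0 * (1.0 - k) + 1e-12                          # ||phi(x) - phi(v)||^2 for a Gaussian kernel
    u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)                     # membership update
    w = (u ** m) * k                                      # kernel-weighted coefficients
    v_new = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)  # centroid update
    return u, v_new

hu = np.array([-950.0, -900.0, -100.0, -50.0])            # example attenuation values
u, v = kfcm_step(hu, v=np.array([-925.0, -75.0]))
print(v)
```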
SFCM (Spatial Fuzzy C-Means)
. This is a spatial fuzzy clustering algorithm [32] that uses a spatial function, which is the sum of the memberships of the pixels in the neighborhood of the pixel under consideration. The main advantages deriving from the use of a spatial function are the possibility of obtaining more homogeneous regions and less sensitivity to noise.
In the initial stage the algorithm applies the traditional FCM (Fuzzy C-Means) algorithm to obtain the initial memberships for each pixel, the iterative stage being omitted. It then calculates the spatial function value for each pixel in the image, h_{ij} = \sum_{k \in NB(x_j)} u_{ik}, where NB(x_j) represents a square window centred around the pixel under consideration, its size being a configurable parameter of the algorithm. The greater the number of neighboring pixels that belong to the same cluster, the higher the value of the function. The next step is to calculate the spatial membership function, u'_{ij} = (u_{ij}^p \, h_{ij}^q) / \sum_{k=1}^{c} (u_{kj}^p \, h_{kj}^q), where p and q are control parameters for the relative importance of the functions u and h. Finally, the new centroids are calculated and the error is evaluated. When this is below a determined threshold, the algorithm will stop; otherwise the FCM will be recalculated and a further iteration will commence.
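A small sketch of the spatial step just described for a 2-D membership map: the function h sums the memberships inside the square window and is combined with u through the exponents p and q; the border handling (nearest-neighbor padding) is a simplifying assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_memberships(u_maps, p=1.0, q=1.0, window=3):
    """Combine FCM memberships with the spatial function h (window sum).

    u_maps : (c, H, W) membership maps from a previous FCM pass.
    """
    # uniform_filter gives the window mean; multiplying by window**2 gives the sum
    h = np.stack([uniform_filter(u, size=window, mode="nearest") * window ** 2 for u in u_maps])
    weighted = (u_maps ** p) * (h ** q)
    return weighted / weighted.sum(axis=0, keepdims=True)   # renormalize over the clusters

rng = np.random.default_rng(0)
u = rng.random((3, 64, 64))
u /= u.sum(axis=0, keepdims=True)
print(spatial_memberships(u).shape)   # (3, 64, 64)
```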
SKFCM (Spatial Kernelized Fuzzy C-Means).
This algorithm [28] adds a penalty factor that contains spatial neighborhood information to the KFCM (Kernelized Fuzzy C-Means) algorithm proposed in the same study. The paper only considers the Gaussian radial basis function kernel, K(x, y) = \exp(-\|x - y\|^2 / \sigma^2). Therefore, modifying the objective function of the FCM in order to introduce the kernel function and add the penalty factor, the final objective function is obtained, in which N_j represents the square window which includes the neighbors of pixel x_j (without considering the pixel itself), N_R is the cardinality of N_j, and \alpha (0 < \alpha < 1) is a parameter that controls the effect of the penalty term. Deriving the objective function (see (13)) with respect to u_{ij} and v_i, the authors obtained two conditions that minimize the objective function. Finally, an iterative algorithm can be derived from the above conditions.
When initialising the algorithm, the parameters, that is, the number of clusters c, the initial class centroids, and the threshold ε, must be determined.
In the first step of the iterative process, the memberships are calculated from the kernel-induced distances and the spatial penalty term. The centroids are then updated. As in the other algorithms, these steps are repeated until the condition \|v^{(t)} - v^{(t-1)}\| \le \varepsilon is satisfied, where ε is a determined threshold.
MSKFCM (Modified Spatial Kernelized Fuzzy C-Means).
The modification proposed in this study ( [29]) is a combination of the algorithms described previously (KFCM and SFCM) in order to combine their strengths. Thus, kernelized algorithms simulate the calculation of distances in a space of greater dimensionality, enabling better classification of elements. Spatial algorithms reduce sensitivity to noise and local variations by using the membership of all the pixels belonging to the neighborhood we wish to calculate.
The initial parameters required for the proposed modification are the number of clusters into which the image is to be divided, a sample of each cluster, and the values of the parameters p and q used to calculate the spatial membership.
The algorithm consists of the following steps.
(1) Calculation of the membership function using the kernel-induced distances, as in KFCM. (2) Calculation of the spatial memberships, u'_{ij} = (u_{ij}^p \, h_{ij}^q) / \sum_{k=1}^{c} (u_{kj}^p \, h_{kj}^q), with h_{ij} = \sum_{k \in NB(x_j)} u_{ik}, where NB(x_j) is the neighborhood centred in x_j. (3) Calculation of the new centroids. (4) If the error stays below a determined threshold, stop. In the contrary case, return to step (1).
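Putting the two ingredients together, a compact sketch of one iteration of the proposed combination, kernelized memberships followed by the SFCM-style spatial weighting; the Gaussian kernel parameter, border handling, and variable names are assumptions, not the authors' ITK implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mskfcm_iteration(img, v, m=2.0, sigma=1.0, p=1.0, q=1.0, window=3):
    """One kernelized + spatial fuzzy c-means iteration on a 2-D image."""
    x = img.ravel().astype(float)
    k = np.exp(-np.abs(x[:, None] - v[None, :]) ** 2 / sigma ** 2)   # K(x_j, v_i)
    d2 = 2.0 * (1.0 - k) + 1e-12                                      # kernel-induced squared distance
    u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)                                 # kernelized memberships (N, c)
    # spatial function: sum of memberships inside the window around each pixel
    u_maps = u.T.reshape(len(v), *img.shape)
    h = np.stack([uniform_filter(cm, size=window, mode="nearest") * window ** 2 for cm in u_maps])
    u_sp = (u_maps ** p) * (h ** q)
    u_sp /= u_sp.sum(axis=0, keepdims=True)                           # spatial memberships
    u_flat = u_sp.reshape(len(v), -1).T                               # back to (N, c)
    w = (u_flat ** m) * k                                             # kernel-weighted coefficients
    v_new = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)              # centroid update
    return u_sp, v_new

img = np.random.default_rng(1).normal(size=(32, 32))
memberships, centroids = mskfcm_iteration(img, v=np.array([-1.0, 1.0]))
print(centroids)
```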
By combining what are currently the two most widely used techniques for developing fuzzy clustering algorithms, we aimed to improve the classification of the pixels forming the nodule, improving the detection of true positives by using a kernel function to improve cluster separation and reducing false positives by using neighboring pixels to calculate membership. Consequently, with the exception of very fuzzy nodules, the pixels forming a nodule have neighborhoods that allow better differentiation from other areas of the image with similar values for each pixel.
Methodology
To carry out the analysis, a moderate number of studies was used so as to better determine the different features that influence the outcome and to detect any possible problems that might arise. 1500 slices were used, belonging to nine studies, which contain the different cases found in this type of medical image: nodules at the initial stage, adherent to the pulmonary membrane, or clearly consolidated, and located in the different thoracic zones: lower, middle, and upper.
In order to measure the success rate of the algorithms, we decided to calculate the number of true positives (TP) and false positives (FP), comparing sensitivity against the number of false detections. A true positive is a pixel that is part of the nodule and is classified as nodule; conversely, a false positive is a pixel classified as nodule that really belongs to another element of the slice. An algorithm that correctly classifies the nodules must produce a high number of true positives and a low number of false positives. Other values would indicate that the algorithm performed a poor classification, either because the rate of success in classifying the nodule pixels was low or because the algorithm incorrectly classified a large number of non-nodule pixels as nodule pixels.
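A sketch of the per-slice measurement described here, assuming the clustering result and the radiologists' annotation are available as boolean masks of equal size; normalizing the false-positive count by the number of detected pixels is one reasonable choice, not necessarily the exact metric used in the paper.

```python
import numpy as np

def tp_fp_rates(pred_nodule, gt_nodule):
    """True-positive and false-positive fractions for one slice.

    pred_nodule : boolean mask of pixels classified as nodule by the algorithm
    gt_nodule   : boolean mask of pixels annotated as nodule (from the LIDC XML contour)
    """
    pred = np.asarray(pred_nodule, dtype=bool)
    gt = np.asarray(gt_nodule, dtype=bool)
    tp = np.logical_and(pred, gt).sum() / max(gt.sum(), 1)       # fraction of nodule pixels found
    fp = np.logical_and(pred, ~gt).sum() / max(pred.sum(), 1)    # fraction of detections outside the nodule
    return tp, fp

gt = np.zeros((8, 8), dtype=bool); gt[2:5, 2:5] = True
pred = np.zeros_like(gt);          pred[3:6, 3:6] = True
print(tp_fp_rates(pred, gt))   # (4/9 of the nodule recovered, 5/9 of the detections are false)
```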
The traditional method used to evaluate CAD systems is to use the outcome for a series of cases for which the results are known and to construct a ROC curve on the basis of the TPF (True-Positive Fraction) and the FPF (False-Positive Fraction) [33], so that the quality of the system can be observed as well as the outcome through the variation of different parameters. However, given the declared aim of this work, this is not the most adequate focus, given that the objective of this module is not to produce the ultimate outcome but to reduce the search space by localizing those zones that may be nodules.
For this reason, we decided to use the approximation proposed by Bowyer [34] to evaluate edge detection algorithms. In this framework, each set of parameter values for each edge detector and image will produce a count of true-edge pixels and false-edge pixels. By sampling broadly enough in the parameter space for an edge detector, and at fine enough intervals, it is possible to produce a representative range of possible tradeoffs in true versus false positives. This results in a graphical representation of possible "true positive/false positives" tradeoffs similar to a receiver operating characteristic (ROC) curve. This provides a comparison of the behaviour of the algorithm for different parameters and the selection of the best combination, adapted and aligned to our aim.
Another factor that favored this solution for evaluating the results of the fuzzy clustering algorithms was that the masks supplied by the LIDC for the different slices only contain information about the nodules, indicating the points that constitute their edge and type, with no data on the other elements that may exist in each slice. The use of other measurements would involve creating masks with the correct classification for each pixel in each slice and for each study, which is beyond the capacity of our group. Even so, we had to create an application that uses the XML files that the LIDC provides for each study with the data of the nodules for each slice and translates this information into a representation allowing for rapid and efficient evaluation.
Figure 5 shows the steps followed to perform the tests. First, preprocessing was applied to all the studies with the objective of isolating the lungs. In the next step, the relevant parameters that influence the results obtained for each algorithm were identified. In the fourth step, we determined the test interval for each parameter of each algorithm in order to reduce the search space; different values were tested based on a fixed spacing covering the entire interval. Following that, we tested each algorithm and the different combinations of parameters over the data set. Finally, we evaluated the results obtained for each slice applying each algorithm with its combinations of parameters. Table 1 shows the parameters analyzed for each algorithm and the ranges used for each parameter. The first parameter analyzed was the number of clusters into which the image was to be divided; the best results were obtained with three and four clusters, using a different set of test images and different validity indices [35,36]. The second parameters were the number and the initial samples used for initialisation, since these parameters could induce variations in algorithm convergence speed and results [37]. We decided to use samples that were obtained randomly and through an operator for each slice. Finally, it was observed that for the parameters p and q of the SFCM and MSKFCM algorithms the best results were obtained in the interval [0, 2].
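The parameter sweep summarized in Table 1 can be organized as a plain grid search; below is a sketch assuming hypothetical `run_algorithm` and `evaluate` callables that wrap one clustering run and the TP/FP measurement, respectively.

```python
from itertools import product

def sweep(slices, run_algorithm, evaluate, grid):
    """Run every parameter combination over every slice and average (TP, FP).

    grid : dict of parameter name -> list of values to test,
           e.g. {"clusters": [3, 4], "p": [0.5, 1.0, 2.0], "q": [0.5, 1.0, 2.0]}
    """
    results = []
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        scores = [evaluate(run_algorithm(s, **params), s) for s in slices]
        tp = sum(t for t, _ in scores) / len(scores)   # mean true-positive fraction
        fp = sum(f for _, f in scores) / len(scores)   # mean false-positive fraction
        results.append((params, tp, fp))
    return results
```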
To illustrate the results we will use graphs that allow us to see the conditions in which the best results were obtained for each algorithm. The aim is to estimate how stable they are and to visualize their behaviour for our studies. In order to improve the clarity in the presentation of the results, each algorithm will be presented using a different subsection.
Results
The first algorithm analyzed was the FCM due to its current status as a reference algorithm, as mentioned above. Figure 6 shows the results obtained for this algorithm in one study; the algorithm was used with samples selected by an operator (a radiologist), with each result represented by a point. For this algorithm, it was decided to represent the TP against the sensitivity to better observe its behaviour. It can be observed in the graph that the variability is quite high for the different slices: there are cases where the success rate is very low (below 40%) or very high (close to 100%); in addition, the number of false positives is also high, increasing with the success rate. The behaviour of the algorithm in the rest of the studies was similar. This result is due to the FCM algorithm classifying by means of hyperspheres (if the Euclidean distance is used in the calculation of the memberships); it is not possible to separate mixed classes that have different structures [38], as is the present case, which prevents the algorithm from calculating centroids of sufficient quality to produce a good partitioning of the image. Further evidence that corroborates this fact is the results obtained using random samples, in which the values for the TP and FP measurements were similar to those obtained using samples selected by an operator (the difference was less than 1%), which shows that the result principally depends on the membership function used in the classification rather than on the samples used. Figure 7 shows the result of the FCM algorithm for one of the slices used in the tests, in which it can be observed that, while the algorithm is able to detect a part of the nodule, there is an elevated number of false positives. The same slice will be used in the remainder of the paper to illustrate the results of the different algorithms and to facilitate their comparison.
KFCM.
A Gaussian kernel was used in the testing process for the KFCM algorithm. Figure 8 shows the results for this algorithm in all the analysis studies. In the graph, it can be observed that a high success rate was achieved for a significant number of slices. Nevertheless, the results indicate that this algorithm is not adequate for the automatic detection of ROIs, the aim of this paper. Although the success rate for the majority of slices is high (more than 65%), the noise level is very high (more than 30% in almost all). This can be clearly seen in the graph, with the majority of points situated in the upper right corner, making these false detections very difficult to eliminate. Figure 9 shows the results obtained for 23 slices selected from all the studies analyzed in order to obtain a clearer insight into these results. This combination of slices was also used to illustrate the behaviour of the rest of the algorithms to allow for the comparison of the results and the performance of each algorithm. The graph shows how false positives reach 70% in various slices and are not lower than 20-30% in almost all. This implies that, even with the construction of an efficient classifier for the following stage, it would be extremely difficult to eradicate these erroneous zones from the result. An elevated number of these zones show features that are very similar to those of a nodule, such as midrange HU values, shape, and size, which makes it very difficult to establish criteria that allow for a good classification.
The kernelized function employed by this algorithm is not able to discriminate between the pixels that belong to each cluster because of the overlap existing between the pixels in different clusters, given that the only information that the algorithm uses is the attenuation value, which for the majority of pixels is very similar for the nodule and the lung tissue. As such, this algorithm either provides a good classification and a quality result for a slice or, if there are many serious errors, a low-quality result; there is a substantial range in the success rate, from 10 to 90%. Figure 10 shows a slice result typical of the majority of cases. It can be seen that the algorithm correctly classifies almost all the pixels of the nodule, but the number of false positives is very high, complicating to a large extent the analysis in subsequent stages.
SFCM.
The next algorithm to be analyzed was SFCM, and Figure 11 shows the results of its application to the pool of test studies. What is notable about this algorithm, and clearly visible in the previous graph, is the low number of false positives produced (10-15% in almost all slices). The reason for this result is the spatial character of the algorithm, which makes it easier to differentiate (compared with the previously analyzed algorithms) the pixels which make up the nodule from those pixels which are part of the tissue, by using the neighboring space to calculate membership. However, this algorithm is unable to achieve a high success rate in the detection of the nodule in the majority of slices. It was only able to achieve an adequate level of success in about 30% of the slices, which can be observed in Figure 12 in the distribution of points along the TP axis. This means that it is not a good option for the aim we have in mind in this paper, given that, with its high level of variability, it cannot provide a consistent rate of success for all the test studies. Figure 12 shows the results obtained for the selected slices, which are similar to those for the complete study: the number of false positives is low with a high success rate, but there is clear variability depending on the slice. Selected samples were used in these tests. The slices with a low success rate were 2, 11, 18, and 7. The best results were obtained by partitioning the image into 3 clusters, with the number of false positives lower than when it was partitioned into 4 clusters, without significantly reducing the true positives. However, the differences in the results were minimal when the only parameter varied was the samples: random or operator-selected.
This, in our view, does not indicate a limitation of this algorithm as it does in the FCM algorithm because, to obtain good results, the spatial function must be the component with the greater weight in the membership function. It is used as an additional characteristic to calculate the value of membership, allowing the discrimination between pixels of different clusters based on neighborhood; so the more importance it has, the lower the number of false positives. This, however, causes the initial samples to have much less weight in the classification, with the FCM membership much less valued and its influence on the final result much smaller. Figure 13 shows a result for one of the test slices.
SKFCM.
The results obtained for the algorithm SKFCM show a low level of false positives using selected samples. The best results were obtained by dividing the image into three clusters and using a spatial window 3 × 3; the success rate was above 80% for the majority of slices with the false positives lower than 20% for most of the study. Figure 14 shows the results obtained for all the studies used in the analysis.
The figure for true positives, using random samples, is grouped within the range of 60%-100%, although values below 20% can be observed in some slices, for example, slice number 2 (15% with random samples) (Figure 15). In the latter case, this results from the loss of pixels of the nodule during the lung preprocessing stage and, above all, from the inability of the algorithm to partition the more complicated slices. The most critical input parameter for this algorithm is the selection of sigma (Figure 16): varying this parameter produces significant variations in the false-positive results, creating associated problems in the ROI classification at the next stage and making identification of nodules difficult (Figure 16(b)).
In Figure 15, the results using random and selected samples can be observed, with good true-positive ratios of around 100% for the greater part of the study using operator samples, although they present a greater number of false positives with respect to random samples. In the latter case, it can be seen that the success rate decreases for some images, to a range of between 50 and 100%.
The size of the neighboring window has not produced significant variations with its best value as indicated previously. This is due to the membership function having a strong dependence on the kernel function, which is strongly influenced by the initially selected samples.
MSKFCM.
Finally, we will analyze the results of the new algorithm we are proposing, which combines the two previous strategies; the objective is to improve classification using a kernelized function and to decrease the false positives by taking into account the spatiality of each pixel. This is the trend that the most recent algorithms follow. It can be observed in Figure 18 how the algorithm achieves a good success rate for almost all images with a low number of false positives (the worst result around 15%). This behaviour can also be seen in Figure 17, where the number of false positives has decreased substantially compared to the other algorithms analyzed, while maintaining a high success rate (>60%) for the majority of slices. It should be pointed out that, although the curve is similar to that of SKFCM, this is due not to similar behaviour of the algorithms but to the fitting function used.
Analyzing the results of the selected set of slices in more detail, the success rate deteriorated in the case of random samples (≈10%); we can also see that a low rate of false positives was maintained, and in some cases improved results were obtained (Figure 17). Individually examining each slice with a low success rate, it can be seen that the lost part of the nodule in the majority of the slices could later be recovered using other techniques. Figure 19 shows an example of a result obtained by applying this algorithm.
This algorithm also displays a more stable performance than the others (Table 2: results (%) for a subset of slices displaying the greatest problems for the algorithms using spatial information). A problem observed with the other algorithms is that when the sample set was modified in order to improve the results, there was also a variation in the cluster to which the nodule was assigned, depending on the initialisation and the number of clusters into which the slice had been divided. In the case of the new algorithm, however, when the number of clusters is set at 3, it consistently classifies the nodule in the same cluster, enabling, in addition to a good and stable performance with random samples, automated classification, which was the objective outlined at the beginning of this paper.
3D Neighborhood.
From the analysis of the results, it can be deduced that the algorithms which best address the problem presented in this paper are those which use spatial membership functions and, among these, those which combine this technique with a kernelized membership. To improve these results, we decided to modify the spatial kernelized algorithms to use 3D instead of 2D neighborhoods in the calculation of the memberships. Helical thoracic CT scans allow for a 3D reconstruction of the target zone that is very similar to the original, given the high levels of resolution they are able to achieve. The use of the 3D structure instead of 2D provides more information when calculating memberships and avoids the noise and loss of information associated with projecting a 3D structure in 2D.
This modification was applied to those algorithms which provided the best results and presented the most stable behaviour during the analysis: SKFCM and MSKFCM. Figure 20 presents the scheme followed to obtain the 3D neighborhood and the pixels that are used to calculate the spatial function for a 3 × 3 × 3 neighborhood, formed by the slice that the pixel belongs to and the previous and following slices, in the form of a rectangular prism. It should be noted that, in its implementation using ITK, any shape (spherical, rhomboid) can be used to obtain the neighborhood.
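A sketch of the 3-D variant of the spatial function: the neighborhood becomes a window × window × window rectangular prism spanning the previous, current, and following slices; border handling is again a simplifying assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_function_3d(u_volume, window=3):
    """Sum of memberships inside a window^3 prism around every voxel.

    u_volume : (c, D, H, W) membership maps for a stack of D slices.
    """
    return np.stack([
        uniform_filter(u, size=window, mode="nearest") * window ** 3   # mean * voxel count = sum
        for u in u_volume
    ])

rng = np.random.default_rng(2)
u = rng.random((3, 10, 64, 64))
u /= u.sum(axis=0, keepdims=True)
print(spatial_function_3d(u).shape)   # (3, 10, 64, 64)
```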
The methodology described for the test process in Section 3 was applied and, in order to allow a direct comparison of the results, which were obtained in the same way from the same set of images, the test unit was the study and not the individual slice. The success rate in the results obtained for the 3D version of MSKFCM was close to 100% in more than 90% of the slices analyzed, and the false positives did not exceed 18% in any of the slices. The slices that had a low success rate contained juxtapleural nodules with problems, at the initial preprocessing stage, in keeping all the points that belong to the nodule, and nodules marked with a single pixel and classified as having an indefinite nature in the database. As such, not being able to identify them as a nodule or not, they were of no interest to the present study.
It is worth noting in the results that using larger neighborhoods reduced the number of TPs and FPs until, in extreme cases, the algorithm does not detect any pixel as belonging to the nodule. The best results for success rates and greater stability were obtained using 3D neighborhoods sized 3 × 3 × 3.
The success rate for 3D SKFCM was similar to that of the previous algorithm, at around 100%. However, the FP figure was high, exceeding 60% in the poorest results. In addition, stability was low, with a lot of variability in the results for different slices and for the same slice with different parameters. The best results were obtained using small values of the penalty parameter, reducing the weight of the spatial factor. The behaviour of this algorithm is opposite to that of 3D MSKFCM: the greater the size of the neighborhood, the more the TPs and FPs increased.
For both algorithms, it was observed that the greater the size of the neighborhood, the greater the tendency of the algorithms to classify all the points in one cluster; the 3D distribution of the points does not correspond with the shape anticipated by the membership function (the membership function of the algorithms is based on FCM), resulting in an accumulation of errors in the classification.
The SKFCM algorithm tends to group all the pixels in the cluster identified as nodule because, the most important factor being the initial samples or centroids (in this case, the pixels which have been identified as belonging to the nodule are prioritized), the accumulation of errors means that more and more pixels are assigned to this cluster. MSKFCM, for its part, groups all the pixels in one cluster, identified as lung, giving more weight to those pixels which form part of the neighborhood than to the centroids in the calculation of memberships; as the majority of the pixels in the slice are lung, owing to the preprocessing which eliminates all elements of no interest, all of the pixels end up being assigned to this cluster. Table 3 shows the results for the two algorithms for a combination of slices (for this table, different slices have been used from those used in Tables 2 and 4), selected from the nine studies using 2D and 3D neighborhoods; those that best reflect the behaviour of the whole set have been chosen. The most notable aspect of the results obtained is that the two types of neighborhoods perform similarly in the majority of cases. This is because the spatial functions were designed to work with 2D neighborhoods and are unable to benefit from the additional information provided by the use of 3D neighborhoods.
As such, the algorithm that provides the best results using 3D neighborhoods is MKSFCM, whose results are similar to those of the 2D algorithm, with an improvement in only some slices.
Discussion
The most difficult pixels to classify correctly are those belonging to less well-defined nodules, either still at a very early stage or juxtapleural, which are very difficult to distinguish from other pulmonary elements.
This was confirmed using the first version of the masks for the studies provided by the LIDC. Each pixel in these masks was assigned a value between 0 and 1000, representing the level of consensus among radiologists that the pixel belonged to a nodule (1000 indicates that all radiologists agreed the pixel belonged to a nodule, and 0 that all agreed it did not). For pixels with strong agreement among the radiologists (a score equal to or above 800), both SKFCM and MSKFCM were capable of detecting them without any problems. Figure 21 shows a section of a slice classified as a nodule by radiologists and the different results provided by the algorithms considered in this study. The majority of classification errors correspond to pixels with a low score (100-200), especially those at the edge of the nodule.
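As a small illustration of how such consensus masks can be thresholded, the sketch below keeps only pixels whose consensus score reaches a chosen agreement level; the array name and the synthetic data are assumptions for demonstration only.

```python
import numpy as np

def consensus_nodule_mask(consensus, threshold=800):
    """Return a boolean mask of pixels whose radiologist-consensus score
    (0-1000 in the first-version LIDC masks) meets the agreement threshold."""
    return consensus >= threshold

# Synthetic consensus map standing in for a real LIDC mask.
consensus = np.random.randint(0, 1001, size=(512, 512))
strong = consensus_nodule_mask(consensus, threshold=800)
print(strong.sum(), "pixels with strong radiologist agreement")
```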
The best results and performance of the MKSFCM algorithm were obtained by dividing the slices into three clusters. This is because, with a larger number of classes, the membership function is unable to divide the problem space in a way that clearly separates nodule pixels from pixels belonging to other clusters; the cluster to which they are assigned depends on the distribution of coefficients in the image and on the number of clusters into which the image is divided. A much more powerful membership function, capable of performing a better classification, would be required to obtain better results with a larger number of clusters. This conclusion is corroborated by the fact that when 3D neighborhoods are used, which exploit more information and better reflect the structure of the element, the results do not improve (Table 3): they maintain their level of success and, in some cases, increase the false positives. Here too, the best results were obtained by dividing the image into three clusters.
Conclusions
This paper presents an extensive and thorough analysis of the use of traditional and state-of-the-art (Table 4) fuzzy clustering algorithms for detecting ROIs in helical thoracic CT slices, with the aim of incorporating this method into a CAD system to help professionals detect pulmonary nodules; the algorithms were tested using a set of studies selected from a public database.
Traditional algorithms have been shown not to be the most appropriate solution due to the limitations of the membership functions they use; they are unable to achieve good quality results with large sets of slices. | 9,412.2 | 2016-07-18T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Boron-Doped Carbon Nano-/Microballs from Orthoboric Acid-Starch: Preparation, Characterization, and Lithium Ion Storage Properties
Boron-doped carbon nano-/microballs (BC) were successfully obtained via a two-step procedure comprising hydrothermal reaction (180 °C) and carbonization (800 °C), with cheap starch and H3BO3 as the carbon and boron sources. As a new kind of boron-doped carbon, BC contained 2.03 at% boron and presented a morphology of almost perfect nano-/microballs with sizes ranging from 500 nm to 5 μm. Moreover, owing to the electron-deficient boron, BC was explored as an anode material and presented good lithium storage performance. At a current density of 0.2 C, the first reversible specific discharge capacity of the BC electrode reached as high as 964.2 mAh g−1 and remained at 699 mAh g−1 up to the 11th cycle. BC also exhibited good cycling stability, with a specific capacity of 356 mAh g−1 after 79 cycles at a current density of 0.5 C. This work provides an effective approach to boron-doped carbon nanostructures with potential use as a lithium storage material.
Results and Discussion
2.1. Synthesis. The boron-doped carbon was synthesized via a two-step procedure [48][49][50]. The most important step was the first, in which the high temperature, high pressure, and deionized water served three purposes: (a) to promote the swelling and reshaping of the starch microcrystal bundles; (b) to improve the penetration and absorption of H3BO3 into the starch microcrystal bundles; (c) to speed up the bonding of H3BO3 with the -OH groups of starch. It has been reported that starch powder tends to form nano-/microspheres [49,50] with sizes in a certain range. A black powder, the boron-doped carbon (BC), was finally obtained, indicating a successful carbonization in the second step.
Morphology.
To examine the detailed morphology of BC, FE-SEM was applied. Figure 1(a) shows imperfect round nano-/microballs spanning the range 500 nm-5 μm, with a magnified view given in Figure 1(b). The low XRD intensity indicated weak graphitization of BC. Calculated with Bragg's law, 2d sin θ = nλ (λ = 1.5405 Å), the peaks at 2θ = 22.3° and 43.3° correspond to spacings of 0.40 nm and 0.208 nm, both wider compared to the smallest graphitic spacings of 0.34 nm for (002) [51] and 0.21 nm [52], hinting at more crystalline defects in this new boron-doped carbon compared to graphite.
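As a quick cross-check of the reported peak positions, the short script below evaluates Bragg's law for Cu Kα radiation; the printed spacings are derived from the 2θ angles quoted above rather than from any additional measurement.

```python
import math

WAVELENGTH_A = 1.5405  # Cu K-alpha wavelength in angstroms

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_A):
    """Interplanar spacing from Bragg's law, n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

for two_theta in (22.3, 43.3):
    d = d_spacing(two_theta)
    print(f"2theta = {two_theta} deg -> d = {d:.2f} A ({d / 10:.3f} nm)")
```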
Raman Spectrum.
As depicted in Figure 2(b), the Raman spectrum exhibited two distinct peaks: the D band at ca. 1336 cm−1 and the G band at ca. 1588 cm−1, representing disordered and graphitic sp2-carbon atoms of BC, respectively. The more intense G band marked the typical graphitic lattice vibration [53,54], while the less intense D band represented the defect lattice vibration [55,56]. The intensity ratio of the D band to the G band (I_D/I_G) was calculated to be 0.985, indicating structural and intrinsic defects and amorphous disorder [57][58][59][60] in BC.
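A rough way to estimate such a band ratio from a measured spectrum is sketched below, taking the maximum intensity inside a window around each band center; baseline correction and proper peak fitting, which a real analysis would include, are omitted, and the synthetic spectrum is only for demonstration.

```python
import numpy as np

def d_to_g_ratio(shift_cm1, intensity, d_center=1336.0, g_center=1588.0, window=40.0):
    """Estimate I_D / I_G from a Raman spectrum by taking the maximum
    intensity within a window around each band center."""
    def band_max(center):
        sel = (shift_cm1 > center - window) & (shift_cm1 < center + window)
        return intensity[sel].max()
    return band_max(d_center) / band_max(g_center)

# Synthetic two-band spectrum for illustration.
shift = np.linspace(1000, 2000, 2000)
spec = 1.0 / (1 + ((shift - 1336) / 50) ** 2) + 1.02 / (1 + ((shift - 1588) / 40) ** 2)
print(round(d_to_g_ratio(shift, spec), 3))
```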
X-Ray Photoelectron Spectroscopy (XPS).
XPS was measured to analyze the elemental species and their corresponding atomic percentages in the obtained boron-doped carbon. As presented in Figure 3(a), BC mainly contained C, B, and O, with three characteristic peaks at ∼284 eV, ∼192 eV, and ∼532 eV corresponding to C 1s, B 1s, and O 1s, respectively. For BC, the total contents of the C, B, and O elements were 84.06 at%, 2.03 at%, and 13.91 at%, respectively.
In Figure 3(b), the C 1s spectrum of BC could be deconvoluted into several individual peaks. The top peak at ca. 284.4 eV was most likely ascribed to the sp2-C of C=C double bonds [15][16][17][18][27]. However, the C 1s of B-C-X could not be observed directly because of its low percentage; given the boron-doping ratio, the second peak at 285.5 eV was partially ascribed to the C-B bond, buried together with other bond species. Part of the second peak and the last peak were assigned to C-O/C=O bonds [16,17]. These carbon species accounted for 56.94 at%, 21.29 at%, and 5.2 at%, respectively. Figure 3(c) shows the high-resolution spectrum of the BC sample in the range 180-196 eV; the B 1s peak at 191.6 eV was evidence for the existence of B species [39,46]. The two split peaks at 191.2 eV and 192.6 eV accounted for 0.58 at% and 1.55 at%, respectively, belonging to the B 1s of B-C3 and B-C2O bonding species [18]. In Figure 3(d), the O 1s spectrum of BC was divided into two peaks at 531.9 eV and 533.6 eV, covering 2.60 at% and 11.84 at% and matching the O 1s from O=C double bonds and O-C single bonds, respectively. No O 1s signal from B-O bonds was observed, again because of the low percentage. Figure 4(a) presents the first three voltage curves of the BC electrode measured by cyclic voltammetry (CV) at room temperature from 0.005 V to 3.0 V at a scan rate of 0.5 mV s−1. The discharge curve of the 1st cycle did not overlap with those of the 2nd and 3rd. The initial sharp peaks at 0.01-0.7 V were most likely induced by the formation of the solid electrolyte interphase (SEI) layer [61][62][63][64][65]. Figure 4(b) shows the rate cycling curve at increasing current densities of 0.2 C, 0.5 C, 1 C, 2 C, 5 C, and 10 C. At 0.2 C, there was a large decrease from the initial irreversible discharge capacity (2059 mAh g−1) to the initial reversible charge capacity (1030 mAh g−1), most likely induced by the SEI reaction. However, in the 1st reversible cycle, the specific discharge capacity of the BC electrode reached as high as 964 mAh g−1 and stabilized at 699 mAh g−1 by the end of the 11th cycle, nearly two times higher than the theoretical limit of a graphite electrode (372 mAh g−1) based on the LiC6 mechanism [63][64][65]. At the highest current density of 10 C, the capacity was kept at ∼116.8 mAh g−1 up to the 57th cycle [66][67][68][69][70]. When the current density was adjusted back to 0.2 C at the 70th cycle, the discharge capacity recovered to 490 mAh g−1. The BC electrode possessed stable cycling performance, with a capacity as high as 356 mAh g−1 after 79 cycles at 0.5 C.
The 1st irreversible discharge capacity of BC reached 1829 mAh g−1. Beginning with the 37th cycle, the discharge capacity stabilized between 386 and 357 mAh g−1 up to the 79th cycle. The coulombic efficiencies were almost all above 90% from the 2nd cycle to the 79th cycle. As presented in Figure 5, the nano-/microstructures of the BC particles determined the charge-transfer process of the lithium-ion insertion/extraction reaction [67,68]. According to the reported literature, the calculated value of 138 Ω for the BC electrode was regarded as a composite resistance determining the charge transfer of Li+ insertion/extraction [69,70]. This composite resistance comprised the migration of Li+ ions through the SEI film and the charge-transfer resistance. As illustrated above, the carbonaceous body of BC acted as a conductive channel [20,23,71] for electron transport. The enlarged electrode/electrolyte interface of BC could promote the rapid absorption and release of Li+ ions with a fast charge-transfer process. Meanwhile, the transport distances of Li+ ions were shortened in the carbon framework.
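For readers wanting to reproduce such capacity figures from raw galvanostatic data, the relation is simply Q = I·t/m; the sketch below applies it with illustrative numbers that are not measured values from this work.

```python
def specific_capacity_mAh_per_g(current_mA, time_h, active_mass_g):
    """Gravimetric capacity of a constant-current (galvanostatic) step:
    Q [mAh/g] = I [mA] * t [h] / m [g]."""
    return current_mA * time_h / active_mass_g

# Illustrative numbers only (not values reported in this paper).
print(specific_capacity_mAh_per_g(current_mA=0.2, time_h=3.5, active_mass_g=0.002))
```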
Conclusions
In summary, a novel boron-doped carbon has been obtained by a two-step approach of hydrothermal reaction and carbonization and explored as an anode material for Li-ion batteries. The boron-doping ratio reached as high as 2.03 at%. The morphology presented as nearly perfect nano-/microballs ranging from 500 nm to 5 μm. At a current density of 0.5 C, the BC electrode exhibited good cycling stability, with a discharge capacity of 356 mAh g−1 after 79 cycles. This work offers a facile approach to boron-doped carbon. Further investigation into much higher boron-doping ratios is underway, aiming at better-performing Li-ion batteries.
Experimental Section
4.1. Materials. Potato starch was purchased from a supermarket, and H3BO3 and ZnCl2 [72][73][74] were purchased from Sigma-Aldrich Co., Ltd. Other reagents and solvents were purchased from Energy Co., Ltd. All solvents were used without further purification.
Synthesis of BC Nano-/Microballs
There were mainly two steps in the synthetic route. In the first step, the mixture of H3BO3, starch, and deionized water was treated by hydrothermal reaction at a high temperature of 180 °C. In the second step, the starch particles loaded with H3BO3 were ground with excess ZnCl2 to isolate the starch particles and avoid conglutination, and then carbonized at 800 °C for 2 h under an Ar atmosphere. In detail, 10 g potato starch, 7.14 g H3BO3, and 180 ml deionized water were added to a hydrothermal reactor and heated at 180 °C for 24 h. After cooling to room temperature, a brown-yellow powder was obtained by vacuum filtration, washed with deionized water, and dried in a vacuum oven at 120 °C overnight. The powder was ground with ZnCl2 (1:4 weight ratio) for 15 min and divided into three parts, which were calcined at 800 °C for 2 h with a heating rate of 5 °C min−1. The three kinds of black powder obtained were washed with HCl (6 mol L−1) and deionized water until pH = 7.0, giving the target boron-doped carbon material, denoted BC. The particles were examined by FE-SEM and confirmed to be nano-/microballs. 4.3. Methods. Field emission scanning electron microscopy (Hitachi S-4800, Tokyo, Japan) was used to observe the micromorphology of the particles. A Bruker D8 X-ray diffractometer with Cu Kα radiation (λ = 1.5405 Å) was used to measure the X-ray diffraction (XRD) patterns of the aggregate sample. A WITec alpha 300M+ micro-Raman confocal microscope was used to collect the Raman spectra of the as-prepared sample. A Thermo Scientific ESCALAB 250XI system with a monochromatic Al Kα X-ray source was used to carry out the XPS measurements.
Electrochemical Tests.
The electrochemical properties of BC were tested with button cells. Pure lithium was used as the counter and reference electrode. The working electrodes were fabricated by mixing BC and polyvinylidene fluoride (PVDF) (90 wt%:10 wt%) in N-methyl-2-pyrrolidone (NMP). The obtained mixture was coated onto an Al sheet and dried for 12 h in a vacuum oven. The electrolyte was 1.0 mol L−1 LiPF6 in Et2CO3/Me2CO3. The electrodes were assembled into button cells in an Ar-filled glove box (moisture/oxygen < 0.1 ppm). The galvanostatic tests of the button cells were carried out with a NEWARE battery-testing system. Alternating current (AC) impedance was measured on a CHI 760D electrochemical workstation (CH Instruments, Inc.).
Figure 1: FE-SEM images of BC at different magnifications. | 2,475.2 | 2018-04-23T00:00:00.000 | [
"Materials Science"
] |
Identification of hickory nuts with different oxidation levels by integrating self-supervised and supervised learning
The hickory (Carya cathayensis) nut is considered a traditional nut in Asia because of nutritional components such as phenols and steroids, amino acids and minerals, and especially high levels of unsaturated fatty acids. However, the edible quality of hickory nuts deteriorates rapidly through oxidative rancidity. The Deeper Masked Autoencoder (DEEPMAE), with a unique structure for automatically extracting features that scale from local to global for image classification, can be considered a state-of-the-art computer vision technique for grading tasks. This paper aims to present a novel and accurate method for grading hickory nuts with different oxidation levels. Owing to the use of both self-supervised and supervised processes, this method is able to classify images of hickory nuts with different oxidation levels effectively, i.e., DEEPMAE can predict the oxidation level of the nuts. The proposed DEEPMAE model was constructed from the Vision Transformer (VIT) architecture, followed by Masked Autoencoders (MAE). This model was trained and tested on image datasets containing four classes, whose differences were mainly caused by varying levels of oxidation over time. The DEEPMAE model achieved an overall classification accuracy of 96.14% on the validation set and 96.42% on the test set. These results demonstrate that the DEEPMAE model may be a promising method for grading hickory nuts with different levels of oxidation.
Introduction
There are more than 20 different varieties of walnut. According to FAO (2019), China produces more than half of the world's walnuts. From 2009 to 2019, China's walnut production increased by 11.3% year-on-year to 2,521,504 tons. The Hickory(Carya cathayensis Sarg.) is found mainly in Lin'an District, China. Because of the mountainous and high-altitude climate, hickory thrives in the area naturally. In Lin'an, the hickory plantation covers an area of 40,000 km 2 , with an annual production of 15,000 tons of hickory nuts. The output value of the whole hickory nuts industry is about 5 billion yuan.
There are a total of 544 kinds of lipids in mature hickory nuts (Huang et al., 2022). Furthermore, a mature hickory nut kernel contains more than 90% unsaturated fatty acids and 70% oil, which is in the top place in all oil-bearing crops (Kurt, 2018;Narayanankutty et al., 2018;Zhenggang et al., 2021). The oxidation of hickory nuts is an inescapable problem and a major contributor to a decline in the quality of the nuts. It is generally accepted that the process of lipid oxidation of nuts proceeds by way of a free radical mechanism called autoxidation (Kubow, 1992;López-Uriarte et al., 2009).
With the oxidation of hickory nuts, a series of changes in color, odor, taste, and other characteristics occurs. Notably, the kernels of hickory nuts change from light yellow to yellow-brown or brown, the taste gradually becomes blander, and a strong rancid smell develops (Jiang et al., 2012). Traditional methods of identifying hickory nuts are mainly manual screening and electronic-nose screening (Pang et al., 2011). The former relies mainly on subjective human experience, which limits screening accuracy and slows down screening speed. Electronic-nose technology can detect the substance content of hickory nuts according to the degree of oxidation and acidity in different storage years (Pang et al., 2019), i.e., hickory nuts with different degrees of oxidation produce different odors. However, electronic-nose technology has a slow response time and requires special equipment, making it difficult to promote in the marketplace. Therefore, accurate identification and fine classification of hickory nuts based on color appearance could contribute to factory production and processing and safeguard consumers' food safety.
In classifying certain agricultural products, shape and color are the two fundamental characteristics. It is common knowledge that the most important distinguishing feature between naturally grown agricultural products is their appearance (Fernández-Vázquez et al., 2011; Rodríguez-Pulido et al., 2021). For instance, varied sizes, roundness, lengths, and widths distinguish walnut varieties, and these characteristics are the core foundation for classification. In studies of walnuts, it is crucial to use their morphological properties for classification (Ercisli et al., 2012; Chen et al., 2014; Solak and Altinişik, 2018). Color characteristics on the surfaces of objects are also crucial for classification, and they are primarily derived from RGB and hyperspectral images. For example, color information in RGB images can generate a one-dimensional signal (Antonelli et al., 2004) or a matrix of signals, yielding excellent classification results for hazelnuts (Giraudo et al., 2018) and maize.
In addition, hyperspectral imaging technology can achieve the same higher level of classification accuracy (Alamprese et al., 2021;Bonifazi et al., 2021). There is also a significant distinction between RGB and hyperspectral data. RGB data contains less information than hyperspectral data. Nevertheless, the former is easier to gain and also widely popular. Although these studies above have delivered successful results in specific applications, mostly, experts manually extracted or specified features. In each of these extracted features, there are both strong and weak features, and if it is difficult to figure out the strong features of a target, it is challenging to produce very successful results.
Deep learning (LeCun et al., 2015) is a field of machine learning that has gained tremendous recognition in computer vision over the past decade. The pervasiveness of deep learning makes it comparatively more advantageous than the above methods. Deep learning methods are mainly multi-layer artificial neural networks (ANNs; effectively high-dimensional abstract functions) constructed by computers. In ANNs, image features generate feedback signals that help the model adjust its parameters, until the final model contains the critical features that can distinguish differences between images.
Deep learning technology has been used extensively for the classification of agricultural product quality (Javanmardi et al., 2021; Bernardes et al., 2022; Mukasa et al., 2022). A convolutional neural network (CNN) with a shallow depth was set up to classify four classes of tobacco with 95% accuracy (Li et al., 2021). Nasiri et al. (2019) employed a modified version of VGG16 to identify dates, achieving an accuracy of 96.98%. Various models have been created to classify the maturity of agricultural products from different perspectives (Zhang et al., 2018; Garillos-Manliguez and Chiang, 2021). Moreover, Saranya et al. (2022) were able to differentiate between four different maturity levels of bananas with an accuracy of 96.14%. Because of their shallow architecture, the networks used in the aforementioned applications may not possess the necessary generalization capabilities. Chen et al. (2022b) developed a high-performance classification model based on a 152-layer deep ResNet to identify different types of walnuts. Additionally, although deep learning algorithms can automatically extract robust high-level features, most studies have not explicitly specified what characteristics those algorithms have learned. In this respect, manual feature extraction is more conducive to explanation, such as grading based on the shape, color, and size of strawberries (Liming and Yanchao, 2010). However, Su et al. (2021) were able to successfully utilize the ResNet algorithm to assess the ripeness and quality of strawberries, noting that pigment-related information is essential for accurate ripeness recognition. Such explanations provide greater insight into the potential of deep learning algorithms. In addition to CNNs, deep learning based on VIT is also developing rapidly for a variety of applications, such as the classification of weeds from drone images (Bi et al., 2022; Reedha et al., 2022). With the ever-growing number of emerging technologies, applied research in agricultural products is becoming increasingly feasible.
A deep learning algorithm is utilized in this paper to automatically extract the appearance features of hickory nuts, thereby avoiding the shortcomings of traditional methods while achieving more effective results. In addition, deep learning-based classification models are able to process an image in milliseconds (Lu et al., 2022), which is conducive to enhancing the automation of factory production and processing and thus improving the ability to ensure food safety. In this paper, DEEPMAE, a model algorithm based on deep self-supervised and supervised learning, is constructed, enabling the identification of and distinction between various levels of oxidation and sourness of hickory nut kernels. The primary contributions of this paper are enumerated as follows:
Materials and methods
Samples
The hickory nuts were harvested from healthy, ten-year-old hickory trees in Daoshi Town, China (Lin'an, 118°58'11" E, 30°16'50" N, elevation: 120 m) in September 2021. After harvesting, the nuts were transported to the laboratory and dried in an oven at 40 °C for 72 h to keep their moisture content below 8%.
Experimental details and preparation
There are several steps in the experiments of this study, and we will describe the preparation and experimental details.
To control experimental conditions
The hickory nuts are physically protected by their intact woody shells, and the lipids oxidize more slowly than they would without the shell. Generally, the nuts were preserved with their shells intact. We stored the nuts with the shells intact but sought to speed up the oxidation of the nuts' lipids to reduce the duration of the experiment. Prior to the formal experiment, we determined through pre-experiments on small samples that the oxidation rate of hickory nuts at 35 °C was within the tolerable range for this experiment, so we decided to place the nuts in a constant temperature and humidity chamber at 35 °C and 35% relative humidity to accelerate the oxidation process. Over time, the lipids within the hickory nut kernels undergo continuous oxidation. We sampled for the experiment every 30 days.
To acquire RGB images of nut kernels
Samples of 280 hickory nuts per experiment were taken in this study, and the kernels were separated after the shells were broken by hand. After this, RGB images of the kernels were acquired. The image acquisition system consists of a smartphone, connected to a computer, placed on an experimental stand.
The smartphone is mounted horizontally on the experimental stand while keeping the vertical height constant. In addition, we use the computer to control the phone to avoid changes in the angle and position of the phone. In addition, there are two symmetrical 4W lamps to fill in the light. More specifically, the phone was a Xiaomi 6X with LineageOS, the camera software was OpenCamera, the camera parameters were 20 megapixels, the lens aperture was f/1.75, the focal length was 4.07 mm, and the ISO was set to 100.
To measure the physicochemical properties of hickory nuts
Immediately after completing image acquisition, we physically pressed the hickory nut kernels to obtain the nut oil. We then measured the oil's peroxide value (POV) and acid value (AV). POVs were determined according to the Chinese standard method GB 5009.227-2016. The peroxide test indicates the rancidity of unsaturated oils, and the POV is the most commonly used value; it measures the extent to which the oil sample has undergone primary oxidation. In addition, the AV is one of the most sensitive indicators of nut spoilage. In this study, AV was measured using the method of the Chinese standard GB 5009.229-2016. Approximately 80 mL of oil was extracted in each experiment. Of this, 36 mL was divided into three replicates for POV measurement, and the remaining oil was divided into three replicates for AV measurement.
Summary of preparations
Four samples with different oxidation times were taken in this study, resulting in four sets, A, B, C, and D, containing 1,090 good hickory nuts in total. Additionally, 13,000 RGB images of their kernels were taken, all cropped to 512 × 512 pixels. We then randomly chose 9,000 images as the training set, 2,800 as the validation set, and the remaining 1,200 as the test set.
An algorithm for aggregating image values
The CIELAB color space is expressed as three values: in human vision, the L-value from low to high indicates perceived brightness from black to white, the a-value from negative to positive represents green to red, and the b-value from negative to positive represents from blue to yellow. To investigate the relationship between the features produced by the deep learning model and the visual properties of hickory nut kernels, we did targeted processing of the kernels' RGB images in the CIELAB color space.
The original image I and the almost-smoothed image I_g generated by a fully convolutional network (FCNN) (Equation 1) are first transformed from RGB to CIELAB (Figure 1). The CIELAB images are split into their three channels. The corresponding values in the CIELAB color space are then combined in an "enhancement" operation, and the CIELAB images are converted back into RGB images. The entire process is almost identical to EdgeFool (Shamsabadi et al., 2020), except for the "enhancement".
Figure 1: The process of aggregating an image.
Figure 2: Datasets. The four columns are the four sets of experimental images of hickory nut kernels, A, B, C, and D; the "Original" row shows the acquired original images, AL* the images aggregated on the L* channel, Ab* the images aggregated on the b* channel, and AL*b* the images aggregated on both the L* and b* channels.
Our enhancement method, a per-channel enhancement of the image, is an aggregation algorithm that pulls a set of data closer to a specified value β (Equation 2). In general, β falls within the range of that set. The L-value, a-value, and b-value can each be assigned β values separately. We use aggregation of L-values (AL*), aggregation of b-values (Ab*), and co-aggregation of L-values and b-values (AL*b*), but no aggregation of the a-value (Figure 2).
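A minimal sketch of such a channel aggregation is given below, assuming the aggregation simply pulls each channel value a fraction of the way toward the target β; the paper's exact Equation 2 is not reproduced, and the function names and the strength parameter are illustrative.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def aggregate_channel(channel, beta, strength=0.5):
    """Pull every value of a CIELAB channel toward the target value beta.
    Illustrative aggregation rule, not the paper's exact Equation 2."""
    return channel + strength * (beta - channel)

def aggregate_image(rgb, beta_L=None, beta_b=None, strength=0.5):
    """Aggregate the L* and/or b* channels of an RGB image (floats in [0, 1])
    and convert back to RGB; the a* channel is left untouched, as in the paper."""
    lab = rgb2lab(rgb)
    if beta_L is not None:
        lab[..., 0] = aggregate_channel(lab[..., 0], beta_L, strength)
    if beta_b is not None:
        lab[..., 2] = aggregate_channel(lab[..., 2], beta_b, strength)
    return np.clip(lab2rgb(lab), 0.0, 1.0)

# Example: aggregate L* toward 47, the mode reported for sample A.
out = aggregate_image(np.random.rand(64, 64, 3), beta_L=47.0)
```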
Classification methods
Our final work relies on a deep-learning model for classification. Based on existing research, this study proposes a more effective and improved model, and this section describes the detailed construction of our model.
VIT and MAE
The workflow of Vision Transformer (VIT; Dosovitskiy et al., 2020) firstly requires dividing the original image into several regular non-overlapping blocks and spreading the divided blocks into a sequence, after which the sequence is transmitted into the Transformer Encoder. Finally, the output features of the Transformer Encoder are handed over to the fully connected layer for classification.
Masked autoencoders (MAE; He et al., 2022) are a self-supervised learning method that infers the original image from local features strongly correlated with global information. MAE's Decoder can reconstruct the same number of features as the original image blocks, thereby reconstructing a complete image from a partial image. When applied to downstream classification tasks, MAE can split the trained Encoder and Decoder and use only the features extracted by the Encoder for classification, which is similar to the process of a standard VIT for image classification. Compared to VIT, MAE uses only part of the image data for the classification task, which can significantly reduce computational effort. In addition, MAE's Decoder can reconstruct the original image from partial features, showing that the partial features also carry the associated information of the whole image.
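To illustrate the patch-and-mask step that MAE relies on, here is a minimal PyTorch sketch that splits an image batch into non-overlapping patches and randomly keeps a subset of them; the 16-pixel patch size and the 75% default ratio follow the MAE paper, while the function names are illustrative.

```python
import torch

def patchify(imgs, patch_size=16):
    """Split images (B, C, H, W) into non-overlapping patches flattened
    to (B, N, patch_size * patch_size * C)."""
    b, c, h, w = imgs.shape
    ph, pw = h // patch_size, w // patch_size
    x = imgs.reshape(b, c, ph, patch_size, pw, patch_size)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(b, ph * pw, patch_size * patch_size * c)

def random_mask(patches, mask_ratio=0.75):
    """Keep a random subset of patches, as in MAE: shuffle per sample and
    keep the first (1 - mask_ratio) fraction."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    ids_shuffle = torch.argsort(torch.rand(b, n), dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return kept, ids_shuffle

kept, order = random_mask(patchify(torch.randn(2, 3, 224, 224)), mask_ratio=0.75)
print(kept.shape)  # torch.Size([2, 49, 768])
```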
Re-attention
The MAE is mainly built by stacking the Multi-Head Self-Attention (MHSA; Equation 3) modules of the vanilla VIT. However, Transformer-based structures do not obtain better results simply by stacking layers as convolutional networks (CNNs) do; instead, they quickly saturate at greater depths, a phenomenon called attention collapse. Re-attention (Equation 4) can replace the MHSA module in the VIT and regenerate the attention maps to establish cross-head communication in a learnable way.
A learnable transformation matrix is multiplied by the self-attention map along the head dimension. Re-attention exploits the interactions between the different attention heads to collect complementary information, regenerating the attention map at a small computational cost while better enhancing the diversity of features between layers. It stands to reason that the proposed DeepVIT model, which uses the Re-attention mechanism, also achieves excellent performance on classification tasks.
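The sketch below shows one way such a Re-attention block can be written in PyTorch, following the DeepViT idea of remixing the per-head attention maps with a learnable matrix before applying them to V; the layer sizes, the use of BatchNorm for the remixed maps, and the class name are assumptions rather than the exact DEEPMAE implementation.

```python
import torch
import torch.nn as nn

class ReAttention(nn.Module):
    """Multi-head self-attention whose per-head attention maps are linearly
    recombined across the head dimension by a learnable matrix theta."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.theta = nn.Parameter(torch.eye(num_heads))  # cross-head mixing
        self.norm = nn.BatchNorm2d(num_heads)            # normalises remixed maps
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)             # each: (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = torch.einsum('hg,bgnm->bhnm', self.theta, attn)  # remix heads
        attn = self.norm(attn)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

print(ReAttention(384, num_heads=8)(torch.randn(2, 197, 384)).shape)
```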
DEEPMAE
Figure 3: The architecture of DEEPMAE.
This paper proposes the DEEPMAE model with MAE and DeepVIT as the backbone (Figure 3). Firstly, unlike VIT, MAE and DeepVIT, the blocks sequence input to DEEPMAE is not taken from the original image but is composed of low-level features extracted from the original image by convolutional operations. Secondly, we introduce Re-attention into MAE, reduce
the MAE model width, and increase its depth to achieve a deeper stacking of the Transformer to obtain a more vigorous representation of some of the blocks, which can reduce the computational effort while avoiding attention collapse. In addition, unlike MAE, which uses only the trained parameters of the Encoder when processing classification tasks, our DEEPMAE always retains both Encoder and Decoder and combines the reconstruction of image features and classification into one complete model. The reconstruction is a self-supervised learning. It is done by comparing the output features of the Decoder with the original features and trying to make them as similar as possible. The classification is a supervised learning. Eventually, the complete structure of DEEPMAE contains both self-supervised and supervised processes. The blocks sequences for MAE, VIT, and DeepVIT are derived from the original images. This approach starts by slicing an original image horizontally and vertically and spreading blocks sliced sequentially into a patch embedding blocks sequence. By default, a patch, also a block, is 16 × 16 pixels, implemented by a convolutional kernel and a step size of 16. That results in many convolutional parameters and a high degree of randomness. The process of slicing also results in large random matrices, which somehow affects the stability of the patch embedding and, thus, the instability of the Transformer (Xiao et al., 2021). Before that, VGG (Simonyan and Zisserman, 2014) compared the perceptual fields of small kernels of CNNs with big kernels. They found that multi-layers successive small kernels and single-layer big kernels were similar. So VGG replaced the large convolutional kernels by stacking multiple layers of 3 × 3 small convolutional operations, and 3 × 3 small convolutional kernels also dominated the CNNs after that (Simonyan and Zisserman, 2014;Iandola et al., 2016;Howard et al., 2019;Tan and Le QV, 2020). In addition to stability, the Transformer model has properties for global attention computation. However, it lacks some inductive biases inherent to CNNs, such as translation equivariance and locality (Han et al., 2020). The Transformer model, therefore, lacks some local features from earlier layers compared to the CNNs. Therefore, we change the patch embedding of DEEPMAE to an operation with multiple small convolutional kernels and convert the low-level features of the acquired images into patches, similar to the Image-to-Tokens module (Yuan et al., 2021). In MAE, the input to the Encoder is a subset of patches, and our DEEPMAE does the same thing, using only a subset of patches composed of low-level image features as input to the Encoder. Finally, because images are inherently strong positional relativities, DEEPMAE uses a two-dimensional fixed sine-cosine to encode the position of the spreading patches.
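As an illustration of replacing a single large-kernel patch embedding with stacked small convolutions, the following PyTorch sketch builds a convolutional stem whose output tokens match the count of a 16 × 16 patch grid; the channel widths, depth, and class name are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvStemPatchEmbed(nn.Module):
    """Patch embedding built from stacked 3x3 convolutions, so the tokens
    handed to the Transformer are low-level features rather than raw pixels."""
    def __init__(self, in_chans=3, embed_dim=384):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_chans, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1),
        )
        # Final 2x2 projection brings the total downsampling to 16x, matching
        # the token count of a 16x16 patch grid.
        self.proj = nn.Conv2d(embed_dim, embed_dim, kernel_size=2, stride=2)

    def forward(self, x):
        x = self.proj(self.stem(x))          # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (B, N, embed_dim)

print(ConvStemPatchEmbed()(torch.randn(2, 3, 224, 224)).shape)  # (2, 196, 384)
```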
DEEPMAE as a whole also consists mainly of two parts, an Encoder and a Decoder, but a classifier is added after the Decoder to complete the model. The Encoder is composed of Transformer blocks built from Re-attention (RTB), while the Decoder consists of self-attention Transformer blocks (STB). The Encoder and Decoder are clearly asymmetrical in both width and depth. In addition, the classifier does not use all the information from the Decoder's output; it relies only on some of the features reconstructed by the Decoder to make its classification decisions.
Performance evaluation
Standard classification metrics (Equation 10) (Labatut and Cherifi, 2012; Giraudo et al., 2018; Alamprese et al., 2021; Chen et al., 2022a; Saranya et al., 2022) are used to evaluate the classification results in this paper.
In addition, the reconstruction of image features by Decoder is a critical component of DEEPMAE. We use the Multi-scale Structural Similarity Index (MS-SSIM; Wang et al., 2003), the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS; Wald, 2000), and Visual Information Fidelity (VIF; Sheikh and Bovik, 2004) to measure the goodness of the reconstructed features. MS-SSIM is a multi-scale structural similarity method that considers the variation in observation conditions and provides a reliable approximation of perceived image quality. VIF is an image information metric that quantifies the fidelity of image information.
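Of these metrics, ERGAS has a compact closed form, and the sketch below implements it directly from its definition (Wald, 2000) with the resolution ratio set to 1, since the reconstructed and reference features share the same scale; the array shapes and example data are assumptions for demonstration.

```python
import numpy as np

def ergas(reference, reconstructed, ratio=1.0):
    """Erreur Relative Globale Adimensionnelle de Synthese (lower is better).
    reference, reconstructed: float arrays of shape (H, W, bands)."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    n_bands = ref.shape[-1]
    total = 0.0
    for k in range(n_bands):
        rmse = np.sqrt(np.mean((ref[..., k] - rec[..., k]) ** 2))
        total += (rmse / ref[..., k].mean()) ** 2
    return 100.0 * ratio * np.sqrt(total / n_bands)

a = np.random.rand(64, 64, 3) + 0.5
b = a + 0.01 * np.random.randn(64, 64, 3)
print(ergas(a, b))
```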
Lipid oxidation analysis for four samples
The quality of the oil extracted from hickory nuts was used to assess the physiological quality of the samples. The samples showed different POVs and AVs after different times of oxidation ( Figure 4).
POV is an indicative measure of the quality of oils and fats (Beyhan et al., 2017). At 35 °C and 35% relative humidity, the POVs measured in the four samples, A, B, C, and D, increased gradually with storage time. Samples A and B showed a slow increase in POVs, while sample C exhibited a faster increase. Over the course of the four samples, the POVs consistently increased, demonstrating that the hickory nut oil was undergoing continuous oxidation.
The AV reflects the degree of fat hydrolysis and rancidity by indicating the oil's dissociative fat mass concentration level (Chatrabnous et al., 2018). The results of the four samples, measured based on differences in the time dimension of the hickory nuts, showed a significant upward trend. In samples A and B, the AVs of samples accumulated more rapidly, while in the later experiments, the AVs accumulated more slowly. Eventually, the AV in samples D exceeded 0.6 mg/g, doubling the value of samples A. The increase in AVs during the storage of hickory nuts is due to the enzymatic hydrolysis of lipids, which can adversely affect the hickory nuts.
The POVs and AVs of the hickory nut oils in the four samples suggest that the degree of oxidative deterioration of the samples was increasing in a sequential manner. This provides an objective basis for further distinguishing between samples with different levels of oxidative degradation.
Differences of kernels' images for four samples
The data distribution was analyzed after the RGB images were converted to CIELAB images. More importantly, this paper analyzed the relationship between changes in the exterior of hickory nut kernels and their internal lipid oxidation and rancidity. Ortiz et al. (2019) used the L-value as the response to the browning of the walnut kernels' exterior and analyzed the correlation between changes in the exterior of walnut kernels and the rancidity and oxidation process. It is evident that the distributions of L-values and b-values for the appearance of hickory nut kernels from the four samples showed variability (Figure 5). There is a large concentration of L-values around 47 in experiment A, and around 37, 38, and 31 in experiments B, C, and D, respectively. Looking first at the distribution of L-values (Figure 5A), there is much crossover between experiments B and C; the mean brightness of C is even slightly higher than that of B. However, the L-values of the four experiments show an overall trend of gradual decrease. The four experiments also scored around 40, 24, 20, and 18 for the b-value. Taking experiment A as the benchmark, the distributions of the L-value and b-value show that the changes in the brightness and chromaticity of the appearance of hickory nut kernels are uneven, being larger at first and then smaller. In the latter part of the samples, the human eye's ability to differentiate is significantly weakened, meaning it may not even be possible to distinguish the differences between the appearance of kernels with the naked eye. This unevenness of variation is explained by Yang et al. (2022): the leading causes of pecan browning are membrane peroxidation and enzymatic browning catalyzed by polyphenol oxidase. Throughout the post-harvest storage period, hickory nuts maintained their antioxidant capacity, and the rate of browning was fastest in the early stages of storage, after which it changed gradually and gently.
The results above indicate that there was some extent of correlation between the changes in the intrinsic oxidative rancidity of the hickory nuts and the changes in the appearance of kernels. For the same batch of hickory nuts, as the oxidation of their internal oils proceeded, the intrinsic quality of nuts would change, manifested in kernels' appearance as a decrease in L-value and a deviation from yellow in b-value. That also effectively supported the subsequent differentiation of different oxidized and acidified kernels by image features.
Classification results
Based on the above Analysis, we also need to classify the images of hickory nuts kernels to infer the internal quality from the appearance of kernels.
General configuration
In this paper, the main optimization points of DEEPMAE based on its backbone model were previously mentioned. Ablation experiments are then conducted in order to evaluate the efficacy of the model at the three points specified.
1. A sequence consisting of blocks of low-level features extracted by a convolution operation replaces the original 16 × 16-pixel image blocks sequence in the backbone.
2. The most critical point in MAE is using partial images to extract features, reducing the computational effort. DEEPMAE retains this feature, but because the low-level features of the images are not as redundant as the original images, DEEPMAE has a different input scale for the Encoder than MAE, and we compare three mask ratios.
3. DEEPMAE incorporates both self-supervised and supervised learning and has an Encoder and a Decoder. The Decoder, a self-supervised component, reconstructs the image features. That is very different from the inference process in MAE, so we want to verify the role of the Decoder in the classification process.
After establishing the core structure of DEEPMAE, some CNN models were introduced and compared to Transformer models and DEEPMAE model, and their classification effects were evaluated. The common CNNs are AlexNet (Krizhevsky et al., 2017), VGG19 (Simonyan and Zisserman, 2014), SequeezeNet (Iandola et al., 2016), MobileNetV3 (Howard et al., 2019), and EfficientNet (Tan and Le QV, 2020), respectively, and the Transformer modes are the backbones of DEEPMAE, mainly VIT (Dosovitskiy et al., 2020) and MAE . CNNs are all implemented by calling PyTorch's torchvision official interface to implement. In addition, the learning rate, optimizer, data augmentation, and other controllable hyperparameters are kept consistent across models. Training is done in the same environment for each model (Table 1).
DEEPMAE: Low-level features and RGB image data
Many researchers are combining convolution blocks and transformer blocks (Liu et al., 2022), not least with changes to the input data. Due to the redundancy of the RGB image, MAE uses the original image blocks as input, but DEEPMAE extracts the low-level features of the image as input. Therefore, this paper compares the patch embedding composed of the original RGB images with the patch embedding composed of low-level features. Additionally, the size of the low-level feature maps is much smaller than that of the original RGB image, which is a characteristic of the convolution operation. In comparison, the MHSA used by the Encoder and Decoder in DEEPMAE does not have to shrink the feature map, and the patterns of the layers are similar, making DEEPMAE easily scalable. Subsequently, four practical structures based on DEEPMAE are constructed for comparison (Table 2).
The number of parameters and classification accuracy of the two types of patches embedding from four different sizes of DEEPMAEs were compared in Table 3. The accuracy improvement was 1.14-1.17% on the validation set and 1.67-2.67% on the test set. For classification, the improvement of low-level features is significant, showing that the Transformer model is very effective after adding the low-level features extracted by convolutional operations.
DEEPMAE: Mask ratios of input patches
It was mentioned that the original MAE masks a certain percentage of the input patches, which reduces the number of operations and improves the model's inference time. DEEPMAE also absorbs this advantage. However, DEEPMAE's inputs are lowlevel features with less redundancy than the original images. In addition, DEEPMAE combines the whole process of classification and MAE-like pre-training. DEEPMAE needs to focus on the unmasked part of the image and the masked part. Therefore, the mask ratio of DEEPMAE will be different from that of MAE. We have done further comparison experiments.
The MAE default is 75% masking, i.e., Mask ratio = 0.75. Based on this, we compared mask ratios of 0.25, 0.5, and 0.75 on the DEEPMAE model. In addition, it can also be seen that the DEEPMAE still has an increasing trend ( Figure 6B), so the number of training epochs in this section is set to an upper limit of 300.
The size of the Mask Ratio correlates with the number of features visible to the model: a larger Mask Ratio gives the model fewer features to learn from. As the Mask Ratio increases (Figure 7), the overall loss is also correspondingly higher. Looking at the loss of the Decoder-reconstructed feature maps, the level of loss reached at approximately the 100th epoch for Mask ratio = 0.5 is equivalent to the loss reached after 300 epochs for Mask ratio = 0.75; that is, the training time for Mask ratio = 0.5 is only one-third of that for Mask ratio = 0.75, while that for Mask ratio = 0.25 is in turn only one-third of that for 0.5. In classification loss, the loss for a larger mask ratio is significantly higher than for a smaller one. Therefore, a smaller Mask ratio releases more features for DEEPMAE training and achieves better results. Incidentally, our experiments achieved 97% accuracy at about the 240th epoch by deepening the Encoder to 32 layers while using a Mask ratio of 0.25. However, the smaller the Mask ratio, the more hardware and computational resources are required. Although using a smaller Mask ratio, deepening the network, and extending the training time can further improve accuracy, the computational resources required outweigh these accuracy improvements. Therefore, to balance the model's performance and cost, a moderate Mask ratio facilitates the implementation of the model. Furthermore, the masking operation has a considerable impact on CNNs. The default Mask ratio for the experiments in this paper is 0.5 unless otherwise stated.
DEEPMAE: Decoder for classification
Our DEEPMAE combines the self-supervised approach of image reconstruction used by MAE with the supervised process of classification. However, unlike MAE, which only employs pre-trained Encoder parameters for classification, DEEPMAE also uses Decoder parameters in the classification process to reconstruct some of the features for better classification. DEEPMAE's image reconstruction is therefore very closely related to classification. Hence, we still use the four DEEPMAEs of different sizes in Table 2 for comparison, to explore the role of the Decoder in reconstructing images. From the performance of the four DEEPMAEs (Table 4), it can be seen that the results of "classification and feature reconstruction" are higher than those of "classification only", which indicates that the Decoder's image feature reconstruction is also a key factor in DEEPMAE.
In addition to the performance on the test set, this paper also measured the Decoder's performance in reconstructing the image features. From a human visual point of view, the reconstructed feature images differ significantly from the originals and appear difficult to understand (Figure 8). Therefore, the quality of the reconstructed feature images is measured using the MS-SSIM, ERGAS, and VIF metrics, and a comparison is carried out from an image perspective. Comparing the three metrics (Table 5), it is clear that the image features constructed by "classification and feature reconstruction" outperform the "classification only" image features, which is an advantage of the Decoder. This means that considering both classification and image reconstruction improves the classification results while also ensuring the quality of the reconstruction; if only classification is performed, the classification effect is slightly lower, and the quality of the final image features extracted by the Decoder is negatively affected.
Figure 8: Examples of reconstructed feature images. Note that image data must be transformed into tensors before being input into a model in PyTorch; the transformed images shown are from these tensors.
Comparing DEEPMAE with popular models
From the accuracy of each model on the validation set (Figure 6), it is easy to see that the MobileNetV3 and VGG19 models performed at an average level; they were slow to optimize, and their final accuracy was just over 80%. The remaining models, such as AlexNet, SqueezeNet, and EfficientNet, have high recognition accuracy and stable performance, with the advantage of the fast convergence of the convolution operation.
The VIT and MAE models, which are representatives of Transformer, performed smoothly, with VIT reaching a maximum accuracy of 94.04% at the 95th epoch and MAE a maximum accuracy of 94.36%, which is not too far from the recognition of CNN models such as EfficientNet. In addition, the Transformer model has high accuracy from the beginning and gradually becomes more accurate afterwards. That is because the Transformer model uses initialized parameters, whereas the CNN models have random parameters. Initialization of the Transformer models was necessary, but this did not affect comparing the results with the CNN models. The DEEPMAE model outperformed the above models, reaching a maximum accuracy of 96.14% in the 89th epoch, which was significantly higher than the other models.
Regarding the curves (Figure 6B), DEEPMAE shows relatively large amplitudes in the first 60 epochs and only slight oscillations afterwards. The curves still tend to increase and do not reach a performance bottleneck within 100 epochs. Regarding the performance of the models on the validation set, DEEPMAE outperforms the common ANNs and does not lose out to the CNN models in classification recognition. In addition, DEEPMAE is a family of networks that can be effortlessly extended and fine-tuned in both depth and width. Moreover, due to the globally associative nature of MHSA, the connections between the layers are more adjustable than those of CNNs.
Compare DEEPMAE with the backbones of DEEPMAE
The original MAE in experiments is constructed by Encoder and Decoder, which are purely stacked STB blocks. The Encoder and Decoder are pre-trained for 300 epochs, then the trained Encoder parameters are loaded and trained for classification. Because there is no generic hickory nuts dataset at the scale of ImageNet, we use the same dataset for the pre-training and classification process, also called self pre-training by Zhou et al. (2022). So MAE migrates from more extended pre-training weights in the classification process rather than using parameter initialization (Glorot and Bengio, 2010;He et al., 2015). As a result, MAE achieves an initial accuracy of over 90% on the validation set, which is far ahead of other models. However, MAE with the self pretraining approach does not improve the results significantly on the classification task, meaning that the MAE model still relies heavily on the pre-training image reconstruction process to update model parameters. Although the comparison in Figure 6 is "unfair, " pretraining based on image reconstruction is a robust functionality of MAE, so the DEEPMAE model also retains the Decoder to reconstruct images.
The MAE has precisely the same number of parameters as the VIT with the same structure during classification training. However, because the former randomly masks a certain proportion of the input patches, the original MAE's encoder input only accounts for a quarter of the initial data volume. It is faster and more accurate than the latter. In addition, the DEEPMAE model has more feature information and less redundancy for the Encoder's input of low-level features compared to the original image. Hence, DEEPMAE sets a lower masking ratio than MAE, with a masking ratio of 50%.
The confusion matrices of MAE (Figure 9A) and VIT (Figure 9B) on the test set show that both distinguish A images almost completely. However, the MAE model misclassifies B images as A more often. Misclassification between B, C, and D is also unavoidable with MAE and VIT, although MAE is better at distinguishing D images; correspondingly, VIT misidentified images from C and D more often than MAE. The main reason for these discrimination errors may be the slight differences in the data itself. In addition, there are many similarities in the brightness and color of the hickory nut kernel images from adjacent experiments. Furthermore, the individual differences between kernels also unavoidably influence the results. This produces some flaws in the image data, so the differences are not absolute and complete, which is understandable in agriculture. According to DEEPMAE's confusion matrix on the test set (Figure 9C), A images were correctly classified, and DEEPMAE also had the lowest level of misclassification of the three models. Most noticeable was the significant enhancement in DEEPMAE's discrimination of C and D images, as DEEPMAE is the most adept of the three models at distinguishing between C and D. From the results, DEEPMAE is as good as MAE at identifying D, as good as VIT at identifying B, and slightly better than both for A and C. Compared to the backbone models, DEEPMAE learns more critical distinguishing features.
The specific results of MAE, VIT, and DEEPMAE on the test set were compared quantitatively to evaluate their performance objectively and without bias.
What features are learned by DEEPMAE
Due to the "black box" problem of the deep learning model, this paper examines whether the features extracted by our model match the changes in the image appearance. We introduce an algorithm for aggregating images. According to this algorithm, this paper performs the corresponding aggregation operations on the L-value and b-value of the original images to demonstrate that these two values are the key factors that affect the model's differentiation of the kernels' images.
The β of our aggregation function is specified separately for each experiment for L-value and b-value, e.g., the β for A images with L-value is 47, and the rest of the experiments have β corresponding to Figure 5. The chromaticity change of enhanced images is represented in the same way as in Figure 5.
After image enhancement, the L-values of the four experiments become more aggregated and distinguishable ( Figures 10A, C). In addition, the L-values of the enhanced B and C images are slightly more discernible than those of the original images. Also, the b-values of the enhanced images are more aggregated ( Figures 10B, D). Compared to the statistical distribution in Figure 5, the images processed by the aggregation algorithm are significantly different from the previous because of the more significant differentiation of brightness and color.
We trained DEEPMAE on the original dataset and tested it on the aggregated datasets AL, Ab, and ALb. Despite the discrepancies between the original and aggregated datasets, the DEEPMAE still register some effectiveness in the test datasets. The correlation between the distribution of L-values and b-values in Figure 5 and the classification results in the confusion matrix is apparent, for instance, the overlapping areas of the distribution led to poorer performance on the AL, Ab, and ALb datasets. It shows that the range of L-values of D in AL is much smaller than in Figure 5A, resulting in images of D being largely misclassified as adjacent C. The ranges of b-values of B, C, and D are closely linked, indicating that C of Figure 11 was misclassified as B and D. After adjusting the L-value or b-value of images, the results of DEEPMAE demonstrated a strong relationship between the data distribution and the classification effect, indicating that the L-value or b-value characteristics are of great importance for the classification process of DEEPMAE. These values appear to be the main features learned by DEEPMAE to distinguish walnuts, such as their appearance brightness and color. The heat map of the features learned by DEEPMAE also confirms this conclusion (Figure 12).
Conclusions
This study explores the link between changes in the physiological quality and appearance of hickory nuts kernels. It uses hickory nuts oxidation as the starting point and verifies through literature and experiments that oxidative changes in hickory nuts during storage cause changes in the brightness and color of the kernels. The aim of this paper is to use deep learning model optimization to distinguish nuts with different levels of oxidation and rancidity. The DEEPMAE model, a lighter deep learning model based on MAE, is designed to learn more key distinguishing features to help differentiate between varying levels of oxidation in hickory nuts. In particular, the antioxidant capacity of the nuts resulted in a slight change in the rate of browning during storage. Our DEEPMAE could distinguish hickory nuts based on the essential characteristics learned.
The results indicate that DEEPMAE achieves 96.14% accuracy on the validation set within the first 100 epochs of training, with accuracy still rising thereafter. With a deeper DEEPMAE and more feature learning, it exceeds 97% accuracy on both the validation and test sets by the 240th epoch. In addition, by aggregating information from the image samples, we confirmed that the critical features learned by DEEPMAE are precisely the brightness and color of the kernels' appearance, the same conclusion obtained from our physiological experiments on hickory nuts. This paper also carries out ablation experiments to confirm the efficiency of the three main improvements, and illustrates some differences between the topology of DEEPMAE and that of CNNs. In comparison, DEEPMAE shows greater flexibility, effectiveness, and scalability than CNNs.
This study provides an accurate and valid method for distinguishing the degree of oxidative rancidity in hickory nuts. In the future, we will focus our research on the applicability of the method, longer-term hickory nut oxidation processes, and other physiological manifestations of hickory nuts.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
HK, DD, and JZ planned and designed the experiment. HK conducted the experiment. HK, DD, JZ, ZL, SC, and LD analyzed the data and drafted the manuscript with input from DD and JZ. All authors contributed to the article and approved the submitted version. | 9,863 | 2023-03-08T00:00:00.000 | [
"Computer Science"
] |
Biallelic TET2 mutation sensitizes to 5’-azacitidine in acute myeloid leukemia
Precision medicine can significantly improve outcomes for cancer patients, but implementation requires comprehensive characterization of tumor cells to identify therapeutically exploitable vulnerabilities. Here we describe somatic biallelic TET2 mutations in an elderly patient with acute myeloid leukemia (AML) that was chemoresistant to anthracycline and cytarabine (Ara-C), but acutely sensitive to 5'-azacitidine (5'-Aza) hypomethylating monotherapy, resulting in long-term morphological remission. Given the role of TET2 as a regulator of genomic methylation, we hypothesized that mutant TET2 allele dosage affects response to 5'-Aza. Using an isogenic cell model system and an orthotopic mouse xenograft, we demonstrate that biallelic TET2 mutations confer sensitivity to 5'-Aza compared to cells with monoallelic mutation. Our data argue in favor of using hypomethylating agents for chemoresistant disease or as first-line therapy in patients with biallelic TET2-mutated AML and demonstrate the importance of considering mutant allele dosage in the implementation of precision medicine for cancer patients.
Introduction
Acute myeloid leukemia (AML) is the exemplar of how interrogation of the somatic genome has facilitated understanding of disease pathogenesis and led to the development of novel therapies and stratified treatment approaches for some disease sub-groups (1,2). For example, the outcome of t(15;17)-positive acute promyelocytic leukemia has been revolutionized by the introduction of differentiation chemotherapy targeted against the promyelocytic leukemia-retinoic acid receptor α fusion oncoprotein that defines this sub-group of AML (3,4). Despite this success, therapeutic options for the majority of AML patients are limited and outcome remains very poor, with a 5-year overall survival (OS) of just 15% (5). Subsequent refinement of the AML genomic landscape has revealed a plethora of somatically mutated genes, including TET2, which has been associated with poor outcome following treatment with standard anthracycline and nucleoside analogue-based chemotherapy (1,6).
Given the ineffectiveness of standard chemotherapy for many patients and the resulting poor outcome it is essential to fully understand how somatic genetics can be utilized to identify vulnerabilities that can be therapeutically exploited using existing treatments and the expanding catalogue of new agents. To this end, we present data that biallelic TET2 mutations in AML confer sensitivity to hypomethylating chemotherapy. The TET2 enzyme catalyzes DNA demethylation by converting 5'-methylcytosine to 5'-hydroxymethylcytosine, and loss or attenuation of TET2 function leads to a somatically acquired global genomic hypermethylation and transcriptional and phenotypic re-programming underpinning the development of a leukemia phenotype (7). As such, it is mechanistically plausible that hypomethylating chemotherapy could be particularly effective in TET2 null AML. These data serve as a paradigm for clinical diagnostics incorporating comprehensive genomic analyses to implement first line therapies with a higher likelihood of response in patients with AML.
AML index case with biallelic TET2 mutations
We describe a 76 year old male who presented with AML characterized by a t(4; 12) translocation but no other cytogenetic abnormalities (46,XY,t(4;12)(q2?;q13) [12]/46,XY [10]) as identified by G-banding and confirmed by spectral karyotyping ( Figure 1A). Standard 3+7 induction chemotherapy with daunorubicin and cytarabine (Ara-C) gave rise to a reduction in white blood cell count ( Figure 1B). However, the patient became pancytopenic and developed acute septicemia requiring intensive care, intravenous antibiotics and vasopressor therapy.
Following recovery, blasts persisted in the bone marrow (BM) (29% at day 30; Figure 1C), indicating chemoresistant disease. The patient was subsequently treated with single-agent 5'-azacitidine (5'-Aza) monthly as palliation (Figure 1B), which unexpectedly resulted in prolonged complete morphological remission (CR) characterized by restoration of normal morphology (Figure 1C). The patient remained in CR for 24 months prior to the emergence of relapsed AML.
Relapsed disease was treated with subcutaneous Ara-C and Sorafenib but was unresponsive to chemotherapy and the patient died 28 months after first diagnosis. Autopsy revealed subtotal AML BM infiltration and multiple extramedullary AML sites, including lymph nodes and numerous parenchymatous organs ( Figure S1).
In order to investigate the molecular basis underlying the prolonged response to 5'-Aza in this patient we performed exome sequencing, interphase FISH and single nucleotide polymorphism (SNP) array analysis of BM at AML presentation, during morphological remission and at relapse. SNP array analysis demonstrated that the major cell clone at presentation was characterized by a focal 1.1Mb deletion encompassing the TET2, CXXC4 and PPA2 genes ( Figure 1D). Interphase FISH on diagnostic BM demonstrated that approximately 95% of cells carried the TET2 deletion ( Figure 1E). Additionally, the retained TET2 allele harbored a nonsense base substitution mutation in exon 3 affecting codon 939 (c.2815C>T, Q939*; Figure S2) which was detected in 96% of cells. The presentation AML was also characterized by a heterozygous NPM1 mutation (c.863_864insTCTG; Figure S3) in all cells with TET2 deletion.
Integration of SNP array, exome and FISH data was used to infer tumor phylogeny which indicated that disease pathogenesis was initiated by the TET2 nonsense mutation with subsequent deletion of the second TET2 allele, followed by acquisition of the NPM1 mutation ( Figure 1F).
Both the TET2 gene deletion and base substitution mutation were also present at high levels in the remission BM, despite this appearing morphologically normal ( Figures 1C, 1F and S2).
Although not discernible in the remission BM, the NPM1 mutation was presumed to have persisted at levels below detection given that it was a prominent feature of relapse disease, in addition to the TET2 gene deletion and TET2 base substitution ( Figure 1F and Figures S2-S3).
The relapse was also characterized by a heterozygous FLT3 internal tandem duplication (c.1780_1800dupTTCAGAGAATATGAATATGAT; Figure S4) in 80% of cells, which was not discernible in diagnostic or remission samples (Figure 1F and Figure S4). These data demonstrate that 5'-Aza treatment almost completely eliminated the TET2/NPM1-mutated clone that was dominant at disease presentation. Although also reduced by 5'-Aza treatment, ancestral AML cells carrying biallelic TET2 mutations but negative for the NPM1 mutation retained viability and presumably re-acquired the ability to differentiate and recapitulate normal hematopoiesis, rendering a cytomorphological remission. Based on these observations, we hypothesized that mutant TET2 allele dosage could affect cellular response and sensitivity to 5'-Aza.
Biallelic TET2 mutations result in a hypermethylation phenotype in AML cells and confer sensitivity to 5'-Aza hypomethylating chemotherapy in vitro and in vivo
In order to test whether biallelic TET2 mutations sensitize AML cells to 5'-Aza we used CRISPR-Cas9 gene editing to completely inactivate TET2 in the HEL AML cell line. HEL cells, derived from a 30-year old male with erythroleukemia, have a complex hypotriploid karyotype with 60-64 chromosomes (Table S1) and are reported to carry a monoallelic TET2 gene deletion (8). Consistently, high density SNP array analysis demonstrated that HEL cells carry a large deletion and concomitant loss of heterozygosity (LOH) affecting the majority of the long arm of chromosome 4 which includes the TET2 gene ( Figure 2A). Following transduction of HEL cells with CRISPR-Cas9 directed to TET2, Sanger sequencing revealed that the retained TET2 allele was mutated in several independent clones. In particular, a 4bp deletion in exon 6 ( Figure S5) was frequently observed. Regardless of the underlying mutation, HEL clones from independent CRISPR-Cas9 transductions were consistently null for TET2 protein expression ( Figure 2B). Biallelic TET2 mutations and consequent complete loss of TET2 protein expression did not affect proliferation kinetics of HEL clones in liquid media ( Figure 2C) nor cloning efficiency (CE) in soft agar ( Figure 2D).
When treated with 5'-Aza in vitro, HEL TET2 biallelic clones had significantly lower CE (P = 0.003) and proliferation in liquid culture (P < 0.001) compared to isogenic parental HEL TET2 monoallelic clones (Figure 3A and 3B). In contrast, TET2 mutation load did not affect CE or cell proliferation following treatment with Ara-C or daunorubicin (Figure 3A and 3B). Having demonstrated that HEL cells null for TET2 protein expression were sensitive to growth inhibition by 5'-Aza, we sought to test the hypothesis in a second AML cell line mutant for TET2. SKM1 cells have a monoallelic TET2 mutation (c.4253_4254insTT, p.1419fsX30) (9), but are phenotypically null for TET2 protein expression (Figure 3D), despite having an intact wild-type TET2 allele. Nevertheless, consistent with the HEL cell data, TET2 null SKM1 cells are acutely sensitive to 5'-Aza (Figure 3D). We also quantified TET2 protein levels in a panel of 9 additional AML cell lines and determined sensitivity to 5'-Aza (Figure 3D). Strikingly, there was a significant correlation between TET2 protein levels and the 5'-Aza IC90 (R² = 0.77, P = 0.0008) and IC50 (R² = 0.88, P < 0.0001) (Figure 3D). THP1 cells had the highest TET2 protein expression and were the most resistant to 5'-Aza. In contrast, SKM1 cells had the lowest TET2 protein expression and were the most sensitive to 5'-Aza.
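As a minimal illustration of the correlation analysis reported above (TET2 protein level versus 5'-Aza IC50 across a panel of cell lines), the sketch below uses hypothetical values; the actual analysis was performed on the measured protein and IC50/IC90 data.

```python
from scipy import stats

# Hypothetical normalized TET2 protein levels and log10(IC50) values for a
# panel of AML cell lines (illustrative only, not the measured data).
tet2_level = [0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 1.00]
log_ic50 = [-0.9, -0.5, -0.3, 0.0, 0.3, 0.5, 0.8]

fit = stats.linregress(tet2_level, log_ic50)
print(f"R^2 = {fit.rvalue**2:.2f}, P = {fit.pvalue:.4f}")
```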
We next sought to determine whether biallelic TET2 mutations sensitize AML cells to 5'-Aza in vivo in an orthotopic xenograft mouse model. We used a competitive engraftment approach in which HEL cell clones with monoallelic and biallelic TET2 mutation were co-transplanted into mice and an allele-specific qPCR assay for the WT and CRISPR-Cas9-modified TET2 alleles was subsequently utilized to determine preferential engraftment and/or elimination of either cell clone, with or without treatment with 5'-Aza. Specifically, following prior validation of the qPCR assay ( Figure S7), HEL TET2 monoallelic and HEL TET2 biallelic cell clones were coinjected intrafemorally (IF) in a 1:1 ratio into Rag2 −/− Il2rg −/− mice (day 0; Figure 4A). Physical symptoms associated with the proliferation of AML cells (including weight loss, limited mobility and growth of leg tumors in some animals) became apparent approximately 4 weeks post-IF injection. On day 28 mice began once daily treatment with 5 mg/kg 5'-Aza (or vehicle only as control (VC)) for a total of 5 days and were then euthanized 3 days later (day 35 after engraftment and day 8 after the initiation of treatment) for sample collection and qPCR analysis ( Figure 4A). Tissue samples were collected from six 5'-Aza-treated and nine VC-treated mice.
We were able to consistently amplify human TET2 DNA in tissue obtained from injected femurs, as well as non-injected femurs, spleens and other organs showing evidence of AML infiltration, yielding a total of 55 individual samples (32 from VC-treated mice and 23 from 5'-Aza-treated mice). There was no overall mean preferential amplification of either the intact WT or CRISPR-modified TET2 allele in all 32 tissue samples from VC-treated mice (median inverse Log2[ΔCt] = 0.81; Figure 4B), although the CRISPR-modified TET2 allele slightly dominated in spleen samples (P = 0.047; Figure S8), suggesting preferential engraftment of TET2 null cells specifically in this tissue. Conversely, in 5'-Aza-treated mice the WT TET2 allele was dominant in 19 of 23 (83%) tissue samples (median inverse Log2[ΔCt] = 16.81; Figure 4B), demonstrating significant negative selection against TET2 null cells (P = 3.6 × 10⁻⁴) as a result of 5'-Aza treatment. There was also significant negative selection of TET2 null cells specifically in the BM of 5'-Aza-treated mice (median inverse Log2[ΔCt] = 2.09 and 79.34 for VC- and 5'-Aza-treated mice, respectively; P = 0.014), with 2 (of a total of 12 femurs from 5'-Aza-treated mice) completely negative for the CRISPR-modified TET2 allele (Figure 4B).
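For clarity, the allele-ratio readout can be illustrated with a short sketch: a lower Ct indicates more template, so a WT:modified ratio can be derived from the difference between the two Ct values and compared between treatment groups with a non-parametric test. The Ct values below and the exact orientation of the ΔCt transform are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

def wt_over_modified(ct_wt, ct_modified):
    """Relative abundance of the WT allele versus the CRISPR-modified allele."""
    return 2.0 ** (np.asarray(ct_modified) - np.asarray(ct_wt))

# Hypothetical Ct pairs (WT assay, modified-allele assay) per tissue sample.
vc_ratios = wt_over_modified([26.1, 25.8, 27.0], [26.0, 26.2, 26.8])
aza_ratios = wt_over_modified([24.5, 25.0, 24.8], [28.6, 29.3, 30.1])

# Non-parametric comparison of allele ratios between treatment groups.
u, p = stats.mannwhitneyu(vc_ratios, aza_ratios, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, P = {p:.3f}")
```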
Taken together, these data demonstrate that TET2 null cells are sensitive to the hypomethylating agent 5'-Aza both in vitro and in vivo, consistent with the response to 5'-Aza observed in the index AML patient.
We next investigated whether knockdown or knockout of TET2 affects sensitivity to 5'-azacitidine in AML cell lines derived from primary AML that was not TET2 mutant. shRNA-mediated knockdown of TET2 in THP1 AML cells (10,11) conferred sensitivity to the growth inhibitory effects of 5'-Aza (P = 0.005), although the phenotype was relatively weak (Figure S9).
In contrast, CRISPR-mediated knockout of TET2 in KG1 AML cells (11) conferred resistance to the growth inhibitory effects of 5'-Aza (P < 0.0001; Figure S9). It should be noted that neither of these cell models is null for TET2 and both retain some residual protein expression (Figure S9), which is possibly due to incomplete shRNA-mediated knockdown in THP1 cells and incomplete CRISPR targeting in KG1 cell populations, respectively.
Gene expression analysis identifies downregulation of small nuclear ribonucleoprotein complex components and ABCB1 drug efflux in cells with biallelic TET2 mutations
RNA sequencing analysis was performed to identify differentially expressed genes and potential mechanisms responsible for 5'-Aza sensitivity in HEL TET2 biallelic cell clones. Using unsupervised hierarchical clustering of transcript data, HEL cell clones clustered broadly by TET2 genotype (Figure 5A), suggesting that complete loss of TET2 protein significantly impacted transcription. Differential expression analysis identified 695 significantly differentially expressed transcripts (Padj < 0.05; |Log2FC| ≥ 0.3) in HEL TET2 biallelic clones compared to HEL TET2 monoallelic clones (Figure 5B; Table S3). Gene ontology analysis identified several significantly affected cellular components (Table S4), of which the spliceosomal small nuclear ribonucleoprotein (snRNP) complex (GO:0097525) was the most significantly affected (Padj = 8.7 × 10⁻⁴). In the differential expression analysis, the 12 RNA genes and 1 protein-coding gene (LSM8) which make up this complex were all significantly downregulated in TET2 null cells (Figure 5C; Table S5). Likewise, spliceosomal tri-snRNP complex assembly (GO:0000244) was identified as the most significantly affected biological process (Padj = 3.4 × 10⁻⁴; Table S6). Downregulation of protein expression in TET2 null cells was confirmed for LSM8, consistent with transcript expression data (Figure 5D). Significant differences in expression were also identified for other genes that could potentially affect cellular response to 5'-Aza (Table S3). These included ABCB1 (MDR1) (12), encoding a member of the ATP-binding cassette (ABC) family of drug transporters, which was down-regulated in TET2 null cells at the transcript (Padj = 8.1 × 10⁻⁴) and protein level (Figure 5D).
In order to further investigate the role of ABCB1 as a determinant of sensitivity to 5'-Aza, HEL AML cell clones were treated with the ABCB1 inhibitors verapamil or tariquidar. When used in combination with 5'-Aza, both agents sensitized HEL AML cells to the growth inhibitory effects of 5'-Aza. Moreover, co-treatment with either ABCB1 inhibitor and 5'-Aza was synergistic in HEL cell clones with monoallelic TET2 mutation, which express high levels of ABCB1, but not in HEL cell clones with biallelic TET2 mutation, which express low levels of ABCB1 (Figures 6A and 6B; Figures S10 and S11). We next cloned HEL AML cells in soft agar supplemented with 10 µM 5'-Aza. Colonies that survived 5'-Aza exposure were expanded and all were shown to be resistant to 5'-Aza (Figure 6C). Regardless of TET2 mutant allele dosage, eight of nine 5'-Aza-resistant HEL cell clones had up-regulated ABCB1 protein expression relative to the respective parental cells from which they were derived (P = 0.0069, Figures 6D and 6E). Taken together, these data demonstrate a role for ABCB1 as a determinant of sensitivity to 5'-Aza.
Biallelic TET2 alterations in AML patients with cytogenetically discernible chromosome 4 aberrations
The TET2 locus can be somatically affected via numerous mechanisms, including point mutations as well as gains and losses of material, although how these give rise to biallelic TET2 mutation remains unclear. In order to investigate this, we screened the Study Alliance Leukemia (SAL) biobank for AML patients presenting with a cytogenetically discernible aberration affecting chromosome 4, a population likely to be enriched for structural TET2 alterations.
However, because TET2 base substitutions are reported with high frequency in cytogenetically normal AML(1) our approach of selecting cases with chromosome 4 abnormalities does not inform on the overall frequency of alterations in AML. Thirty cases recruited to the SAL biobank had a chromosome 4 aberration visible cytogenetically and had sufficient material for SNP array analysis and sequencing (Table S7).
Gains affecting TET2 were discernible by SNP array in 6 patients (all with trisomy 4 visible cytogenetically), which included two cases with homozygosity affecting most of the long arm of chromosome 4 (Figure 7A). One of these two patients (UPN25) also had a TET2 base substitution (c.4133G>A, p.Cys1378Tyr) carried by almost 100% of the cells (Figure 7A and Figure S12) and as such had biallelic mutations affecting TET2. Six cases had loss of genetic material affecting TET2 discernible from SNP array data, which included three cases with large deletions (UPN09, UPN10 and UPN18) and two cases with a focal deletion in one allele and a nonsense mutation in the other allele (the index case (UPN01) and UPN30) (Figure 7A and Figure S12). The sixth case (UPN28) had trisomy 4 but with a focal 585 kb deletion affecting the entire TET2 gene, resulting in copy number reduction (< 2 copies) and loss of heterozygosity (Figure 7A and Figure S12). A further 6 cases had copy number alterations (5 with loss, 1 with gain) on chromosome 4 which did not affect the TET2 locus (Figure 7A), and the remaining 12 cases had no evidence of TET2 base substitution or gain/loss of material on chromosome 4. Although two of these 12 cases had trisomy 4 (UPN26 and UPN29) and one case had monosomy 4 (UPN13) visible cytogenetically, these aneuploidies were present in a minor sub-clonal population (Table S7), which explains why they were not visible in the SNP array data. These data demonstrate that TET2 alterations are complex, often involving gains or losses of material in combination with base substitution mutations.
Biallelic TET2 alterations in AML patients treated with 5'-Aza
Having described a single patient with biallelic TET2 mutation (index case UPN01) who responded very well to 5'-Aza, we sought to determine whether biallelic TET2 mutation was also associated with a favorable response in other patients treated with 5'-Aza. AML patients over the age of 65 years were recruited to the PETHEMA FLUGAZA phase 3 clinical trial and were randomized to receive either 5'-Aza or low-dose Ara-C plus fludarabine (FLUGA) (13). Fifty patients had a TET2 mutation identified by targeted sequencing, which included 6 patients with a mutant allele frequency >85%, indicative of biallelic TET2 mutation (3 patients were randomized to each arm of the trial) (Table S8). None of the three patients with biallelic TET2 mutation randomized to the FLUGA arm (UPN68, UPN73, UPN78) achieved CR, and all had relatively short OS (111, 45 and 17 days) (Figure 7B, Table S8). In contrast, 2 of the 3 patients with biallelic TET2 mutation randomized to the 5'-Aza arm achieved CR (UPN31 and UPN33) and had prolonged OS (767 and 579 days) (Figure 7B, Table S8). The third patient treated with 5'-Aza (UPN47) failed to achieve CR and died (day 62) after cycle 1 with progressive disease (Figure 7B, Table S8). Furthermore, all three patients with biallelic TET2 mutation treated with
In summary, we have identified 3 patients with biallelic TET2 mutation who had a favorable response to single-agent 5'-Aza, including the index case (UPN01) who had disease resistant to standard daunorubicin and Ara-C remission-induction chemotherapy.
Discussion
Up to 30% of AML patients present with a somatically acquired TET2 mutation (14-20), with biallelic mutations representing a minority of all TET2-mutated AML cases (20-22). The prognostic effect of TET2 mutation in AML treated with anthracycline and nucleoside analogue-based regimens remains controversial (16,18,19), although meta-analyses suggest an association with poor prognosis (23,24). As such, there is an urgent clinical need to identify novel therapeutic approaches to improve outcome of TET2-mutated AML. Some studies have reported an association between TET2 mutation and favorable outcome of myelodysplastic syndrome (MDS) following treatment with hypomethylating chemotherapy such as 5'-Aza (25-29), although other studies have not replicated these findings (30). Our data demonstrate that single-agent 5'-Aza treatment of AML harboring biallelic TET2 mutations can give rise to long-term CR, including disease otherwise refractory to standard 3+7 induction chemotherapy with daunorubicin and Ara-C and also in patients with adverse risk disease or poor performance status. Furthermore, using an isogenic model system, we demonstrate that biallelic TET2 mutations confer cellular hypersensitivity to 5'-Aza in vitro, as well as significant negative selection when competitively xenografted with monoallelic TET2-mutated cells into the bone marrow of mice.
It should be noted that our data were primarily generated using HEL AML cells, which were derived from primary AML with monoallelic TET2 mutation. We also observed acute sensitivity to 5'-Aza in TET2 null SKM1 cells, which were also derived from TET2 mutant primary AML.
We also investigated 5'-Aza sensitivity in AML cell lines not derived from TET2-mutant disease and although we observed sensitivity in THP-1 cells with TET2-knockdown we did not see the same phenotype in KG-1 cells with TET2 knockout, which were relatively resistant to 5'-Aza compared to TET2 wild-type KG-1 cells. These data suggest that complete loss of TET2 expression might be required for sensitivity to 5'-Aza or that sensitivity is modified by other somatic mutations. For example, HEL and SKM1 cells are wild-type for TP53 whereas THP-1 and KG-1 are both mutant for TP53, which is associated with poor outcome in AML (31,32).
Although we report one patient with TET2 biallelic-mutant TP53-mutant AML who responded well to 5'-Aza (UPN33) all other patients with biallelic TET2 mutation were wild-type for TP53.
As such, further work is warranted to understand the impact of TP53 and other somatic mutations in determining response to 5'-Aza in TET2 null AML.
An effect of mutant TET2 gene dosage on response to therapy is perhaps not surprising given the evidence demonstrating that mutant allele dose also affects disease development. Specifically, monoallelic (Tet2 +/-) and biallelic Tet2 deletions (Tet2 -/-) both result in myeloid malignancy in animal models, but the latency and OS are significantly shorter in Tet2 null animals (33,34).
Furthermore, Tet2 null (Tet2 -/-) mice with myeloid disease also have more pronounced splenomegaly compared to heterozygous (Tet2 +/-) littermates (33), and splenomegaly (and extramedullary disease) is a general feature of myeloid disease developing in Tet2 knockout mouse models (10,34). Consistent with this, young healthy mice null for Tet2 have elevated extramedullary hematopoiesis in the spleen, which develops into splenomegaly concomitant with the onset of myeloid dysplasia (35). These observations are consistent with our data demonstrating a significant competitive advantage of TET2 null human cells to populate the spleen of engrafted animals and also that the index patient (UPN01) reported herein presented with splenomegaly and extramedullary disease. Taken together, these data suggest that TET2 loss could predispose to myeloid disease characterized by splenomegaly and extramedullary disease in general, which is mutant TET2 gene dosage-dependent, although investigation in large patient cohorts is warranted.
Our data suggest that complete loss of TET2 renders cells more sensitive to the anti-proliferative effects of 5'-Aza, rather than enhancing susceptibility to drug-induced apoptosis, consistent with the observed negative selection against cells with biallelic TET2 mutation observed in vivo.
Despite this, 5'-Aza treatment rarely resulted in the complete elimination of TET2 null cells in mice, consistent with data from the index patient in whom 5'-Aza-induced morphological remission was characterized by the persistence of cells with biallelic TET2 mutations. Mutation persistence in morphological remission has been reported for several leukemia driver genes, including those characteristic of age-associated clonal hematopoiesis such as TET2, DNMT3A, SRSF2, RUNX1 and ASXL1 (36)(37)(38)(39). Likewise, persistence of Tet2-mutated cells has also been reported in animal models treated with 5'-Aza (40). Targeting two different epigenetic layers in monoallelic TET2 mutated AML with 5'-Aza and LSD1 inhibition has been demonstrated to be effective in primary AML cells ex vivo (41). However, a model analyzing responsiveness of biallelic TET2 mutated AML to 5'-Aza has not been reported thus far.
Our data demonstrate that cells with monoallelic and biallelic TET2 mutations have significantly different genomic methylation profiles, and although we observed a genome-wide shift towards hypermethylation in cells with biallelic TET2 mutation, the effect was relatively modest and there were also large numbers of CpG sites that became hypomethylated. Consistent with this, we also noted up-regulated transcript levels for numerous genes. As such, it seems unlikely that global genomic DNA methylation and concomitant global loss of expression is responsible for the observed sensitivity to 5'-Aza. Rather, the prevailing evidence suggests that the underlying mechanism conferring sensitivity to 5'-Aza is gene/pathway specific, and our investigations identified significant down-regulation of spliceosomal small nuclear ribonucleoprotein (snRNP) complex components in cells with biallelic TET2 mutations. The snRNP pathway has previously been implicated as a determinant of cellular sensitivity to 5'-Aza (42), although the underlying mechanisms remain to be fully deciphered. We also show that ABCB1 is down-regulated in AML cells with biallelic TET2 mutation and that inhibition of this efflux transporter sensitizes to the growth inhibitory effects of 5'-Aza. Inhibition of ABCB1 leads to increased intracellular accumulation of 5'-Aza in SKM1 AML cells (43,44), providing further evidence that ABCB1 is involved in 5'-Aza efflux. Consistent with our data, treatment with a combination of 5'-Aza and erlotinib, which antagonizes ABCB1, is synergistically cytotoxic in several AML cell lines, including SKM1, MOLM-13, HL-60 and MV4-11 (44). We also demonstrate significant up- in IDH-mutated AML (48,49). It will therefore be important to determine whether mutations in other members of the hydroxymethylation pathway confer sensitivity to 5'-Aza in AML, as we report here for biallelic TET2 mutation. TET2 mutations have also been reported in up to 28% of MDS and MPN (15,20,50) and up to 50% of angioimmunoblastic T-cell lymphoma (AITL), where they are associated with poor response to anthracycline-based chemotherapy (51).
Models for reliably predicting response to 5'-Aza in AML would be of clinical benefit. Our study suggests that TET2 mutational profiling or TET2 protein expression analysis could potentially identify a subgroup of patients with disease that was null for protein expression and acutely sensitive to hypomethylating therapy, suggesting an alternative first line therapy for frail AML patients or salvage therapy for patients with chemoresistant disease. There is potential value in advocating TET2 mutational or protein expression profiling in elderly patients with AML, where disease is more likely to have evolved from TET2 clonal hematopoiesis and therefore likely to be enriched for AML with biallelic TET2 mutations and null for expression (20). Indeed, clinical studies in elderly AML have already documented excellent responses to 5'-Aza in some patients (53), although the impact of TET2 status would need to be confirmed in prospective studies in all age groups. Likewise, there is a case to be made for implementing TET2 mutational or expression profiling in AML patients with extramedullary disease, and particularly splenomegaly, given our data linking biallelic TET2 mutation with colonization of the spleen in conjunction with data from mouse models showing a proclivity of Tet2 mutation to drive extramedullary hematopoiesis and myeloid disease.
In summary, the prevailing evidence argues in favor of investigating mutant TET2 allele dosage and TET2 protein expression as a determinant of sensitivity to 5'-Aza in large prospective studies of AML and other hematological conditions characterized by TET2 loss of function.
However, comprehensive TET2 mutational profiling that includes both sequence and copy number analysis would be required to identify patients with potentially complex alterations affecting the TET2 locus. Furthermore, TET2 expression profiling could identify patients with disease that is null/low for protein expression regardless of gene mutation status, and who might also benefit from 5'-Aza treatment.
Patients
AML patients with an abnormal chromosome 4 (UPN01-UPN30) were recruited to the Study Alliance Leukemia AML registry biobank in Dresden (Germany) (institutional review board (IRB) number EK98032010).
BM morphological assessment of the AML index case (UPN01)
For morphological analyses at AML presentation and during follow up, smears were prepared from BM aspirates, stained with Giemsa and visualized according to routine diagnostic protocols.
Cytogenetic analyses of UPN01
G-banding analysis of metaphase chromosomes from short-term cultures established from presentation BM aspirate was performed using well-established techniques. Interphase FISH was performed using the XL TET2 kit (Metasystems, Germany). Spectral karyotyping (SKY) was performed using the SKYPaint probe mixture kit (Applied Spectral Imaging, Israel) according to the manufacturer's protocol, with the exception that the hybridization time was extended from two to three days. (Table S9). Control cell clones (with monoallelic TET2 mutation) were derived from parental HEL cells transduced with virus carrying an empty vector. All of the clones used in experiments were generated independently and there is some heterogeneity in expression profiles and phenotype. Moreover, it should be noted that several independent cell clones were generated for each TET2 genotype and not every clone was used in every experiment.
Nucleic acid preparation
DNA was extracted from BM mononuclear cells (BMMNCs), peripheral blood (PB) or methanol:acetic acid-fixed cells using an appropriate Qiagen kit or from saliva using an Oragene kit (DNA Genotek, Ottawa, Canada).
SNP array genotyping
SNP array genotyping was performed on DNA from BMMNCs using the Illumina OmniExpressExome BeadChip.
Whole exome sequencing
Exome capture (using Agilent SureSelect Protocol v1.2), library preparation and sequencing of pooled DNA samples (from PB or saliva) was carried out by Oxford Gene Technology (Oxfordshire, UK) on the Illumina HiSeq2000 platform. Reads were mapped to human genome build 37 (hg19) using the Burrows-Wheeler Aligner MEM package (54) and local realignment of mapped reads around potential insertion/deletion (indel) sites was carried out using Genome Analysis Toolkit (55) (GATK; v1.6). Duplicate reads were marked using Picard (v1.98) and excluded from analysis. SNPs and indels were called using GATK HaplotypeCaller, with SNP novelty determined against dbSNP release 135. Variants were annotated with gene data from Ensembl. A read depth of at least 20x was achieved for a minimum of 95.61% of on-target regions.
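The alignment and variant-calling workflow described above can be summarized schematically as shell commands composed in Python. Note that this is only a sketch: the file names and resources are placeholders, the commands use current-style BWA/Picard/GATK invocations rather than the GATK v1.6 and Picard v1.98 syntax used in the study, and the local indel realignment step of older GATK releases is omitted.

```python
import subprocess

SAMPLE = "sample01"   # placeholder sample name
REF = "hg19.fa"       # human genome build 37 reference

steps = [
    # Map paired-end reads with BWA-MEM and sort the alignment.
    f"bwa mem -t 8 {REF} {SAMPLE}_R1.fastq.gz {SAMPLE}_R2.fastq.gz"
    f" | samtools sort -o {SAMPLE}.sorted.bam -",
    # Mark duplicate reads so they are excluded from variant calling.
    f"picard MarkDuplicates I={SAMPLE}.sorted.bam O={SAMPLE}.dedup.bam"
    f" M={SAMPLE}.dup_metrics.txt",
    f"samtools index {SAMPLE}.dedup.bam",
    # Call SNVs and indels with HaplotypeCaller, annotating against dbSNP.
    f"gatk HaplotypeCaller -R {REF} -I {SAMPLE}.dedup.bam"
    f" --dbsnp dbsnp_135.vcf -O {SAMPLE}.vcf",
]

for cmd in steps:
    subprocess.run(cmd, shell=True, check=True)
```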
RNA sequencing and differential gene expression analysis
Total RNA was extracted using the RNeasy micro kit (Qiagen) and quantified using a Qubit 2.0 fluorometer. Sequencing reads were mapped to human genome build 37 (hg19) and annotated using the STAR aligner (56). Aligned reads were summarized over gene features using the Rsubread package (57) (featureCounts function) in R (v3.5.1). Read counts were normalized by expressing them as CPM. Gene-level differential expression analysis was performed on normalized read counts using DESeq2 (version 1.16.1) (58). Resulting P-values were adjusted to control for the false discovery rate (FDR; 5%) (59), and significantly differentially expressed genes were defined as those with FDR-adjusted P value < 0.05 and |Log2FC| ≥ 0.3.

PCR and sequencing primers are listed in Table S9. PCR products were purified using the QIAquick® PCR Purification kit (Qiagen) and sequenced using the indicated primers (Table S9) by Source BioScience (Nottingham, UK). TET2 mutation status in patients from the PETHEMA-FLUGAZA AML clinical trial was determined from whole exome sequencing, as previously described (13).
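The thresholds used above (FDR-adjusted P < 0.05 and |Log2FC| ≥ 0.3) can be illustrated with a short sketch. The differential expression testing itself was done with DESeq2 in R; the Python code below, with a hypothetical results table, only demonstrates the Benjamini-Hochberg adjustment and the filtering step.

```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

# Hypothetical differential expression results (stand-in for a DESeq2 export).
res = pd.DataFrame({
    "gene":   ["LSM8", "ABCB1", "GENE3", "GENE4"],
    "log2FC": [-0.45, -0.80, 0.10, 0.35],
    "pvalue": [1e-5, 8.1e-4, 0.40, 0.03],
})

# Benjamini-Hochberg adjustment to control the false discovery rate at 5%.
res["padj"] = multipletests(res["pvalue"], method="fdr_bh")[1]

significant = res[(res["padj"] < 0.05) & (res["log2FC"].abs() >= 0.3)]
print(significant)
```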
Western Immunoblotting
Cellular proteins were extracted using Phosphosafe reagent (Millipore Ltd, Watford, UK) and quantified by Pierce BCA assay (Thermo Fisher Scientific). Proteins were separated using
Cell proliferation, drug sensitivity and cloning efficiency assays
Cytotoxic agents were purchased from Sigma-Aldrich. Ara-C was reconstituted in DMSO, and daunorubicin and 5'-Aza in dH2O; aliquots were prepared and stored at -80°C. Stocks were diluted in CM immediately prior to use in cytotoxicity assays.
To compare cell proliferation between parental and CRISPR-Cas9-mutated HEL clones, exponentially growing cells were seeded at low density (2 × 10⁴ cells ml⁻¹) in CM and counted using a hemocytometer at regular intervals up to 192 hours post-seeding. Cell growth at each timepoint was calculated relative to the initial seeding density. Two-way ANOVA was used to test for significant differences in relative cell growth based on TET2 mutation status.
For drug sensitivity experiments, cells were incubated in CM supplemented with appropriate concentrations of cytotoxic agent (5'-Aza, daunorubicin or Ara-C) or the relevant vehicle control (VC) for 96 hours, after which viable cells were identified by trypan blue dye exclusion and counted using a hemocytometer. Survival fractions were determined at each drug concentration relative to VC-treated controls. Two-way ANOVA was used to test for significant differences in drug sensitivity based on TET2 mutation status. Inhibition of proliferation in drug-treated cultures was compared to VC-treated cultures and used to calculate the IC50 and IC90 values in GraphPad Prism (Prism 6.0.2, GraphPad Software).
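As a sketch of how IC50 and IC90 values can be derived from surviving-fraction data (the study used GraphPad Prism; the four-parameter logistic form and the example concentrations below are assumptions for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical dose-response data: concentration (uM) vs surviving fraction.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
surv = np.array([0.98, 0.95, 0.85, 0.60, 0.35, 0.15, 0.05])

popt, _ = curve_fit(four_pl, conc, surv, p0=[1.0, 0.0, 0.5, 1.0],
                    bounds=([0.5, 0.0, 1e-4, 0.1], [1.5, 0.5, 100.0, 5.0]))
top, bottom, ic50, hill = popt

# IC90: concentration at which the response falls to 10% of the fitted window,
# i.e. where (conc / IC50) ** hill = 9 for this parameterization.
ic90 = ic50 * 9.0 ** (1.0 / hill)
print(f"IC50 = {ic50:.2f} uM, IC90 = {ic90:.2f} uM")
```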
For determination of CE, exponentially growing cells were seeded in soft agar [CM supplemented with 0.2% agarose] supplemented with cytotoxic agent (5'-Aza, daunorubicin or Ara-C) or VC. Macroscopically visible colonies were counted on day 30 and CE was calculated relative to number of cells initially seeded. Student's t-tests (2-tailed) were used to identify significant differences in CE based on TET2 mutation status.
In order to determine the effect of ABCB1 inhibition on 5'-Aza sensitivity, cells were incubated in CM supplemented with increasing doses of 5'-Aza and an ABCB1 inhibitor (verapamil or tariquidar) or VC. After 96 hours of incubation, viable cells were identified using the CellTiter-Glo Luminescent Cell Viability Assay (Promega) and the surviving fraction was determined at each drug concentration relative to VC-treated controls. The resulting dose-response matrix was used to calculate drug synergy using SynergyFinder 2.0. Student's t-test was used to identify significant differences in synergy scores based on TET2 mutation status.
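Synergy was scored with SynergyFinder 2.0. As a rough illustration of one common reference model implemented there (Bliss independence), the excess of the observed combination effect over the expected independent effect can be computed as below; the inhibition values are hypothetical.

```python
def bliss_excess(inhibition_a, inhibition_b, inhibition_combo):
    """Bliss excess: observed combination inhibition minus the inhibition
    expected if the two drugs acted independently."""
    expected = inhibition_a + inhibition_b - inhibition_a * inhibition_b
    return inhibition_combo - expected

# Hypothetical fractional inhibitions (0 = no effect, 1 = complete inhibition).
print(bliss_excess(0.30, 0.25, 0.62))  # a positive excess suggests synergy
```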
All assays were performed in triplicate at a minimum and means ± SD were calculated.
Colonies were picked after 28 days and were subsequently expanded and maintained in CM supplemented with 5'-Aza in order to establish putative 5'-Aza-resistant clones. Following expansion each cell clone was tested for sensitivity to 5'-Aza along with the parental cell line from which the 5'-Aza-resistant clone was developed. Student's t-test was performed to test for significant differences in IC50 values between 5'-Aza-resistant clones and parental cells.
For determination of ABCB1 protein expression in 5'-Aza-resistant clones, western immunoblotting was performed and the resulting ABCB1 band intensities in 5'-Aza-resistant clones were normalized to the ABCB1 band intensity in the respective parental cells. Student's t-test was performed to test for significant differences in ABCB1 protein expression between 5'-Aza-resistant clones and parental cells. Allele ratios in xenograft tissues were compared between 5'-Aza-treated and VC-treated mice for each tissue type using the Mann-Whitney test.
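The ABCB1 band-intensity normalization and paired comparison described above can be sketched as follows; the densitometry values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical densitometry readings (arbitrary units) for ABCB1 and GAPDH in
# parental lines and their matched 5'-Aza-resistant derivatives.
parental_abcb1 = np.array([120.0, 95.0, 150.0])
parental_gapdh = np.array([300.0, 280.0, 310.0])
resistant_abcb1 = np.array([310.0, 260.0, 330.0])
resistant_gapdh = np.array([295.0, 290.0, 305.0])

# Normalize ABCB1 to GAPDH and express each resistant clone relative to its
# parental line (each parental line takes a nominal value of 1).
parental_norm = parental_abcb1 / parental_gapdh
resistant_norm = resistant_abcb1 / resistant_gapdh
fold_change = resistant_norm / parental_norm

t, p = stats.ttest_rel(resistant_norm, parental_norm)
print(f"mean fold change = {fold_change.mean():.2f}, paired t-test P = {p:.3f}")
```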
Statistics
All statistical tests were performed using GraphPad Prism, or R in the case of large-scale array data analysis. Specific tests and corrections applied, as well as details of experimental replicates and summary statistics are given above and in relevant figure legends. For all analyses, a P value ≤ 0.05 was considered statistically significant unless otherwise stated.
Study approval
For human studies, approval was received from institutional review boards and/or ethics committees at all sites and written informed consent was received from all participants prior to inclusion. In addition, written informed consent was provided for pictures appearing in the manuscript.
For animal studies, experimental procedures were approved by the Animal Welfare Ethical Review Body at Newcastle University and the UK Home Office and were performed in compliance with the UK Animals (Scientific Procedures) Act 1986 and its associated Codes of Practice.
Data Availability
Genome-wide methylation data described in Figure 2E and 2F have been deposited at the Gene Expression Omnibus with accession numbers GSE217940 and GSE218228. RNA-seq gene expression data described in Figure 5A, 5B and 5C have been deposited at the Gene Expression Omnibus with accession numbers GSE218227 and GSE218228.

Figure 2 legend (continued). (C) Proliferation of HEL TET2 monoallelic (open symbols) and HEL TET2 biallelic (filled symbols) cell clones. Cells were seeded at low density and growth (relative to initial density) was determined at regular intervals. Data represent the mean and SD of the indicated number of clones from three independent experiments. P value calculated by one-way ANOVA. (D) CE was calculated for HEL TET2 monoallelic (open squares) and HEL TET2 biallelic (filled squares) clones after 30 days of culture in soft agar. Mean and SD of the indicated number of clones from seven independent experiments are shown. P value calculated by 2-tailed Student's t-test. (E) Volcano plot demonstrating differences in CpG methylation between HEL TET2 monoallelic (n=2) and HEL TET2 biallelic (n=4) clones. The plot was constructed using fold-change (Log2FC) values and adjusted P-values; points represent individual CpG probes, colored such that significantly differentially methylated probes (P < 0.05 and |Log2FC| ≥ 2) are in red. Orange points represent probes which reach significance (P < 0.05) but are not differentially methylated (|Log2FC| < 2), and black points represent non-significant (P ≥ 0.05) probes. (F) Unsupervised hierarchical clustering of the top 1,500 differentially methylated CpG probes across all samples resulted in distinct clustering of parental HEL TET2 monoallelic (n=2) and HEL TET2 biallelic (n=4) cell clones. Rows in the heatmap represent CpG probes and vertical columns are cell clones. The color key indicates the level of methylation at CpGs.

Figure 6 legend. (A) Mean synergy scores for the verapamil and 5'-Aza combination and (B) the tariquidar and 5'-Aza combination, stratified by TET2 mutant allele dosage. Synergy scores for verapamil/5'-Aza and tariquidar/5'-Aza were significantly higher for TET2 monoallelic mutant HEL cell clones compared to TET2 biallelic HEL cell clones (P = 0.0003 and P < 0.0001, respectively, paired t-test). Data are derived from six independent experimental replicates using two independent cell clones for each TET2 genotype and represent the mean and SD of the indicated number of clones. (C) HEL AML cell clones were exposed to escalating doses of 5'-Aza to generate significantly resistant sub-clones (P = 0.0008, unpaired t-test). Data show IC50 values for 5'-Aza-resistant sub-clones derived from parental cells with either TET2 monoallelic mutation (open symbols) or TET2 biallelic mutation (closed symbols) and represent the mean and SD of the indicated number of clones. (D) Western blots show ABCB1 protein levels in three representative 5'-Aza-resistant derivatives derived from cells with either TET2 monoallelic mutation (left panel) or TET2 biallelic mutation (right panel). (E) ABCB1 protein levels were quantified in parental HEL cells and nine independent 5'-Aza-resistant derivatives with either TET2 monoallelic mutation (open symbols) or TET2 biallelic mutation (closed symbols). ABCB1 protein levels were normalized to GAPDH, with the expression in each parental cell line given a nominal value of 1. ABCB1 protein levels of 5'-Aza-resistant derivatives were significantly higher than in their respective parental cells (P = 0.0069, paired t-test).
Solid horizontal line represents the median fold change in ABCB1 protein expression in 5'-Aza-resistant sub-clones relative to their respective parental cells (represented by the dashed horizontal line). | 8,702.4 | 2021-07-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
Clinical Outcomes of Posterior Lumbar Interbody Fusion versus Minimally Invasive Transforaminal Lumbar Interbody Fusion in Three-Level Degenerative Lumbar Spinal Stenosis
The aim of this study was to directly compare the clinical outcomes of posterior lumbar interbody fusion (PLIF) and minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) in three-level lumbar spinal stenosis. This retrospective study involved a total of 60 patients with three-level degenerative lumbar spinal stenosis who underwent MIS-TLIF or PLIF from January 2010 to February 2012. Back and leg visual analog scale (VAS), Oswestry Disability Index (ODI), and Short Form-36 (SF-36) scores were used to assess pain, disability, and health status before surgery and postoperatively. In addition, the operating time, estimated blood loss, and hospital stay were also recorded. There were no significant differences in back VAS, leg VAS, ODI, SF-36, fusion condition, or complications at 12-month follow-up between the two groups (P > 0.05). However, significantly less blood loss and a shorter hospital stay were observed in the MIS-TLIF group (P < 0.05). Moreover, patients undergoing MIS-TLIF had significantly lower back VAS than those in the PLIF group at 6-month follow-up (P < 0.05). Compared with PLIF, MIS-TLIF may be the preferable option because of its noninferior efficacy and the additional merits of less blood loss and quicker recovery in treating three-level lumbar spinal stenosis.
Introduction
Degenerative lumbar spinal stenosis (DLSS) is the most common type of lumbar stenosis and is increasingly being diagnosed in the elderly, with an approximate incidence of 5% in the general population [1]. Compression of the neurovascular structures within the lumbar canal can result from various conditions, such as disc herniation, facet hypertrophy, bulging of the annulus, and thickening of the ligamentum flavum [2]. DLSS remains the most common indication for lumbar surgery in patients over 65 years old, and the main goal of surgical management is to decompress the spinal canal [3]. In addition, when the lumbar spine of a DLSS patient is unstable, instrumented fusion is usually recommended.
Several surgical techniques are available for decompression and augmented fusion, and each operative technique has its own merits and limitations. Posterior lumbar interbody fusion (PLIF) and transforaminal lumbar interbody fusion (TLIF) are two commonly used surgical techniques recommended for DLSS patients who fail conservative care to achieve spinal fusion [4]. Unlike the direct anterior approach, PLIF and TLIF reduce the risks of complications related to the vascular, abdominal, and reproductive systems [4]. Current indications for PLIF and TLIF include spondylolisthesis, degenerative scoliosis, severe instability, pseudarthrosis, recurrent disk herniation, and painful degenerative disk disease [4]. Long-term clinical results have confirmed the efficacy of PLIF and TLIF with high rates of fusion [5], and both have the merit of adding anterior column support through a posterior-only approach [4]. In addition, PLIF and TLIF have shown better clinical outcomes than posterolateral spinal fusion alone in select patient populations [6].
Despite the established success of open PLIF and TLIF, open interbody fusion is associated with the deleterious effects of prolonged paraspinal muscle retraction and extensive subperiosteal dissection [7]. These complications make it very difficult for surgeons to select an appropriate surgical procedure for multilevel DLSS. On the other hand, minimally invasive TLIF (MIS-TLIF) has become a popular technique for degenerative lumbar disease, with the merits of a small incision, limited lumbosacral muscle dissection, less bleeding, rapid recovery, and so forth [8]. In addition, many studies have confirmed that MIS-TLIF is associated with both cost savings and noninferior outcomes compared with an open approach [9-11]. Both MIS-TLIF and PLIF have been reported to be associated with favorable outcomes in treating DLSS, but we are not aware of any published literature directly comparing the clinical outcomes between open PLIF and MIS-TLIF for three-level DLSS. Therefore, the purpose of this study was to compare the clinical outcomes of MIS-TLIF with those of PLIF in three-level DLSS.
General Information.
The study was approved by the Local Institutional Review Board of Shanghai Tenth People's Hospital prior to data collection. Patients with spinal stenosis receiving three-segmental PLIF or MIS-TLIF in our hospital between January 2010 and February 2012 were included in this retrospective study. The investigators who conducted the data analysis were independent of the surgeons who conducted the surgeries. Basic characteristics of the participants such as age, gender, body mass index, and lesion segments were extracted and analyzed.
Surgical Techniques.
In the MIS-TLIF group, all participants received general anesthesia before surgery. A C-arm X-ray machine (Biplanar 500e; Swemac Medical Appliances AB, Sweden), the METRx Quadrant system, and percutaneous pedicle screws (Sextant; Medtronic, Minneapolis, MN) were prepared before the operation. The patient was placed in a prone position on a radiolucent operating table with the waist slightly flexed to open the intervertebral space and expand Kambin's triangle. The iliac crests were palpated preoperatively, and a line connecting the uppermost margins of both iliac crests was marked (Figure 1). Under C-arm fluoroscopy, the targeted levels were confirmed using our self-made locator, and on the basis of this spatial relationship the intervertebral spaces and pedicle positions were marked on the body surface. An incision was planned along a line connecting the outer portions of the pedicles at both ends of the fused segments (approximately 3.0 cm off the midline). A skin incision of about 3.0 to 4.0 cm was then made on the more symptomatic side, or on the side with more severe pathology according to the imaging. The paravertebral muscles were split and retracted laterally to the outer edge of the facet joint, and the zygapophysis was identified. The expansion tube was then inserted and the Quadrant system was placed. X-ray examination was repeated to confirm the target segments and the placement of the Quadrant system. Decompression was performed by removing the inferior portion of the lamina, the hypertrophied superior and inferior articular processes, and the ligamenta flava. The intervertebral space was then enlarged with a distractor, followed by insertion of a PEEK cage (Capstone; Medtronic Sofamor Danek, Memphis, TN, USA). After that, percutaneous pedicle screw fixation was conducted (Figure 2).
In the PLIF group, the patients were placed prone on the operating table after general anesthesia and tracheal intubation. After routine disinfection and draping, the G-arm machine was used to confirm the targeted segments. A longitudinal incision was then made in the midline of the spine and the muscular fasciae were incised. The sacrospinalis muscles were bluntly dissected until the lumbar transverse processes were exposed. Pedicle screws were placed into the pedicles of the upper and subjacent vertebrae of the segmental lesions. The spinous process, lamina, hypertrophied ligamentum flavum, and inferior articular processes were removed according to the extent of the lesions, and the lateral recess and nerve root canal were enlarged while protecting the dural sac and nerve tissue. The annulus fibrosus was then incised, the nucleus pulposus was removed, and the intervertebral space was opened. The removed lamina and articular processes were crushed into smaller pieces for incorporation as autograft, and the cage packed with the crushed bone was then inserted. Next, titanium rods were used to connect and fix the screws. The G-arm machine was used to further confirm pedicle screw placement, and after hemostasis the wound was sutured layer by layer. Finally, a negative-pressure drain was placed and the incision was closed.
Clinical Assessment.
The clinical outcomes were evaluated based on the improvement of back and leg pain, disability, quality of life, and complications. Back and leg pain were measured using a ten-point visual analog scale (VAS) before surgery and postoperatively at six and twelve months. Disability was assessed using the Oswestry Disability Index (ODI) before surgery and postoperatively at six and twelve months. Health status was also evaluated with the Short Form-36 (SF-36) scale before surgery and postoperatively at six and twelve months. In addition, the operating time, hospital stay, and amount of intraoperative blood loss were recorded, and the complications in the two groups were compared. All the above-mentioned measurements were routinely collected in our department for research purposes, and we retrospectively analyzed these prospectively recorded data.
Statistical Analysis.
The software package SPSS 12.0 (SPSS Corporation, USA) was used for statistical analysis. Data are presented as mean ± SD. The independent Student's t-test was used to compare continuous variables between the two groups, and the chi-square test was used to compare categorical variables between the two groups. P < 0.05 was regarded as statistically significant.
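As a minimal illustration of the two tests described (an independent Student's t-test for continuous variables and a chi-square test for categorical variables), the sketch below uses hypothetical operating times and the gender counts reported in the Results.

```python
import numpy as np
from scipy import stats

# Hypothetical operating times (minutes) for the two groups.
plif_time = np.array([225.0, 240.0, 210.0, 230.0, 220.0])
mis_tlif_time = np.array([265.0, 280.0, 300.0, 255.0, 270.0])
t, p = stats.ttest_ind(plif_time, mis_tlif_time)
print(f"t = {t:.2f}, P = {p:.4f}")

# 2x2 contingency table of gender (male/female) by group, from the Results.
table = np.array([[17, 19],   # PLIF
                  [14, 10]])  # MIS-TLIF
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.3f}")
```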
Results
In the PLIF group, 17 patients were male and 19 were female, with a mean age of 64.4 years. In the MIS-TLIF group, 14 patients were male and 10 were female, with a mean age of 65.9 years. The lesion segments were L1-4 in 8 cases in the PLIF group and 6 cases in the MIS-TLIF group, L2-5 in 16 versus 10 cases, and L3-S1 in 12 versus 8 cases, respectively. In the PLIF group, there were 13 cases with single-segmental fusion plus three-segmental fixation, 9 cases with two-segmental fusion plus three-segmental fixation, and 16 cases with three-segmental fusion plus three-segmental fixation. In the MIS-TLIF group, there were 7 cases with single-segmental fusion plus three-segmental fixation, 8 cases with two-segmental fusion plus three-segmental fixation, and 9 cases with three-segmental fusion plus three-segmental fixation. The mean follow-up was 13.4 months for the PLIF group and 14.2 months for the MIS-TLIF group. There were no statistically significant differences in age, gender, lesion segments, fusion segments, or follow-up between the two groups (Table 1).
Radiographic examination did not detect any nonunion signs in either group at the final follow-up. Other clinical outcomes are shown in Table 2. The operation time was 227.5 ± 17.1 min in the PLIF group and 270.8 ± 33.7 min in the MIS-TLIF group (P = 0.01). The estimated blood loss was 908.3 ± 242.9 mL in the PLIF group and 666.7 ± 314.3 mL in the MIS-TLIF group (P = 0.04). There were no significant differences in VAS of back pain between the two groups, either preoperatively or at 12 months after operation (P > 0.05), although significant differences were observed when comparing preoperative VAS of back pain with the values at 6 or 12 months after operation (P < 0.05). Moreover, patients undergoing MIS-TLIF had significantly lower back VAS than those in the PLIF group at the 6-month follow-up (P = 0.03). Similarly, there were no significant differences in VAS of leg pain between the two groups, either preoperatively or at 6 or 12 months after operation (P > 0.05), whereas significant differences were observed when comparing preoperative VAS of leg pain with the values at 6 or 12 months after operation (P < 0.05). Likewise, there were no significant differences in SF-36 scores or ODI between the two groups, either preoperatively or at 6 or 12 months after operation (P > 0.05), whereas both measures differed significantly from their preoperative values at 6 and 12 months after operation (P < 0.05). As for complications, cerebrospinal fluid leakage was found in 3 patients in the PLIF group, and in the MIS-TLIF group one patient developed cerebrospinal fluid leakage and two developed superficial surgical site infection (P > 0.05).
Discussion
The clinical outcome of MIS-TLIF compared with PLIF in three-level DLSS still requires clinical evidence. We retrospectively analyzed the patient-reported outcomes and found no significant differences between the two groups in VAS of back pain, VAS of leg pain, SF-36 scores, or ODI at the 12-month follow-up. To the best of our knowledge, this is the first clinical study directly comparing the clinical outcomes of PLIF and MIS-TLIF for three-level DLSS.
Lumbar interbody fusion is a well-validated technique with several different approaches, such as anterior, lateral, transforaminal, and posterior approaches, for a variety of conditions requiring spine stabilization [7]. Among them, PLIF is frequently used and may provide higher immediate stability than MIS-TLIF, especially in lateral bending [12]. However, it may be limited by the required thecal and nerve root retraction [13]. MIS-TLIF provides minimal access through a paramedian approach with unilateral facetectomy, which offers the advantage of avoiding an anterior approach, as needed for anterior lumbar interbody fusion, and limits the amount of neural retraction compared to PLIF [14,15]. A number of recent studies support the use of MIS-TLIF, reporting less intraoperative blood loss, decreased postoperative pain, and lower overall complication rates [10,11]. In addition, biomechanical analysis has also demonstrated that MIS-TLIF, with one cage or two cages, provides reliable spinal stability [16]. However, in theory, PLIF may offer more adequate decompression than MIS-TLIF in three-level spinal stenosis, whereas patients undergoing MIS-TLIF may have quicker recovery and less blood loss owing to minimal tissue injury. In practice, we did observe significantly less intraoperative blood loss and a shorter hospital stay with MIS-TLIF compared with PLIF. However, we did not observe a lower overall complication rate, which might be due to the small sample size. Additionally, the VAS of back pain and leg pain were significantly reduced and SF-36 scores were significantly improved after surgery at the one-year follow-up. Furthermore, patients undergoing MIS-TLIF had significantly lower back VAS than those in the PLIF group at the 6-month follow-up, indicating that MIS-TLIF might produce quicker improvement of back pain owing to less tissue injury. Nevertheless, MIS-TLIF had a significantly longer operation time because of the more delicate surgical procedure.
Complications are a nightmare for spine surgeons. The most common complications associated with PLIF and TLIF include intraoperative neurologic injury, interbody implant or bone graft migration, dural tear, and surgical site infection. The overall reported complication rates of PLIF and TLIF range from 8% to 80%, not including potential pseudarthroses [17–21]. Mehta et al. found that neurologic injury was more frequent with PLIF than with TLIF (7.8% versus 2%, respectively) [22]. Implant migration may be uncommon but is very troublesome; Aoki et al. [23] reported a series of three patients with posterior migration following TLIF. Dural tear or cerebrospinal fluid leakage is a common complication of classic PLIF, open TLIF, and minimally invasive TLIF alike, with reported rates of 2% to 14% [24]. Surgical site infection is reported in zero to 9% of patients [25]. We did not observe intraoperative neurologic injury or interbody implant or bone graft migration, but three patients in the PLIF group and one patient in the MIS-TLIF group developed cerebrospinal fluid leakage. Another two patients in the MIS-TLIF group were identified with superficial surgical site infection but were cured by routine administration of antibiotics. This might be attributable to the longer operation time in the MIS-TLIF group and to bias from the small sample, although we acknowledge that numerous factors, such as the exposure area and operative technique, might also contribute to surgical site infection. We should also note that the difference in infection rate between the MIS-TLIF and PLIF groups was not significant. A prospective randomized controlled study with more subjects and longer follow-up is needed to clarify this issue.
Conclusions
Current evidence indicates that MIS-TLIF is equivalent to PLIF for treating three-level lumbar spinal stenosis. Moreover, MIS-TLIF has the merits of less blood loss, quicker improvement of back pain, and a shorter hospital stay. A prospective, randomized controlled study with more participants and longer follow-up is needed to confirm this evidence. | 3,536.4 | 2016-09-26T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Detecting quadrupole: a hidden source of magnetic anisotropy for Manganese alloys
Mn-based alloys exhibit unique properties among spintronics materials possessing perpendicular magnetic anisotropy (PMA), beyond those of Fe- and Co-based alloys. The quantum physics of the PMA inherent to Mn-based alloys has never been reported and remains to be clarified. Here, the origin of PMA in ferrimagnetic Mn$_{3-{\delta}}$Ga ordered alloys is investigated by resolving the antiparallel-coupled Mn sites using x-ray magnetic circular and linear dichroism (XMCD/XMLD) and first-principles calculations. We find from XMCD that the contribution of orbital magnetic moments to the PMA is small, and from XMLD and theoretical calculations that a finite quadrupole-like orbital distortion arising from spin-flipped electron hopping is dominant. These findings suggest that spin-flipped orbital quadrupole formation gives rise to the PMA in Mn$_{3-{\delta}}$Ga and may bring a paradigm shift to research on PMA materials using x-ray magnetic spectroscopies.
INTRODUCTION
Perpendicular magnetic anisotropy (PMA) is desired for the development of high-density magnetic storage technologies, because the thermal stability of ultrahigh-density magnetic devices must overcome the superparamagnetic limit 1−3 . Recently, research interest in PMA films has focused not only on magnetic tunnel junctions 4−7 toward the realization of spin-transfer-switching magnetoresistive random-access memories, but also on antiferromagnetic or ferrimagnetic devices 8,9 . To design PMA materials, heavy-metal elements possessing large spin-orbit coupling are often utilized, exploiting the interplay between the spins of 3d transition metals (TMs) and 4d or 5d TMs. Designing PMA materials without heavy-metal elements is an important subject for future spintronics research. Recent progress has focused on the interfacial PMA in CoFeB/MgO 10 or Fe/MgO 11,12 . However, a PMA exceeding the order of MJ/m 3 with a large coercive field is needed to maintain the magnetization direction during device operation 13 . Therefore, materials with high PMA constants that do not rely on heavy-metal atoms are strongly desired.
Mn-Ga binary alloys are a candidate that could overcome these issues. Mn 3−δ Ga alloys satisfy the conditions of high spin polarization, low saturation magnetization, and low magnetic damping constants 14−18 . Tetragonal Mn 3−δ Ga alloys are widely recognized as hard magnets, exhibiting high PMA and ferromagnetic or ferrimagnetic properties depending on the Mn composition 15 . D0 22 -type ordered Mn 3−δ Ga contains two kinds of Mn sites, which couple antiferromagnetically, whereas the L1 0 -type ordered Mn 1 Ga alloy possesses a single Mn site. These specific crystalline structures provide an elongated c-axis, which induces anisotropic chemical bonding and hence anisotropy in the 3d electron occupancies and charge distribution. Many reports have investigated the electronic and magnetic structures of Mn 3−δ Ga alloys to clarify the origin of the large PMA and coercive field 19−21 . To understand the mechanism of the PMA and large coercive fields in Mn 3−δ Ga, site-specific magnetic properties must be investigated explicitly.
X-ray magnetic circular and linear dichroism (XMCD/XMLD) are powerful tools for studying orbital magnetic moments and the magnetic dipole term, a higher-order term of the spin magnetic moment 22,23 . However, the difficulty of deconvoluting the two kinds of Mn sites has prevented site-resolved investigations. Within the magneto-optical spin sum rule, using the formulation proposed by C. T. Chen et al. 24 , the orbital magnetic moment is proportional to r/q, where q and r are the integrals of the x-ray absorption (XAS) and XMCD spectra, respectively, over both the L 2 and L 3 edges. When two components coexist, the orbital moments cannot be obtained from the integrals of the whole spectra; using the component-resolved integrals r 1 , r 2 , q 1 , and q 2 , the average should be formed from the site-resolved ratios r 1 /q 1 and r 2 /q 2 . The value (r 1 + r 2 )/(q 1 + q 2 ) is not a meaningful average for core-level atomic excitation and leads to an erroneous XMCD analysis. As a typical example, for the mixed-valence compound CoFe 2 O 4 , the Fe 3+ and Fe 2+ sites can be deconvoluted within the ligand-field theory approximation 25 . However, the featureless line shapes of metallic Mn 3−δ Ga are difficult to deconvolute by comparison with theoretical calculations. To detect the site-specific, antiparallel-coupled Mn sites, systematic investigations of Mn 3−δ Ga with δ = 0, 1, and 2 provide site-specific information. In a previous report, Rode et al. 26 established a growth technique that enables discussion of the electronic structures of Mn 3−δ Ga at δ = 0, 1, and 2; we adopted this growth technique and performed XMCD. By contrast, XMLD at the Mn L-edges enables detection of the quadrupole-tensor element Q zz through the XMLD sum rule 27 . Although many XMLD reports exist for in-plane magnetic easy-axis cases, perpendicularly magnetized cases are attempted here for the first time in Mn 3−δ Ga using perpendicular remnant magnetization states.
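As a toy illustration of the averaging issue described above, the sketch below contrasts the site-resolved ratios with the ratio of summed integrals. The numerical values and the equal weighting of the two sites are invented for illustration; only the r/q bookkeeping follows the sum-rule argument in the text.

```python
# Illustrative sketch of the two-component averaging issue (invented numbers).
# For a single site, the orbital moment is proportional to r/q, where r and q
# are the XMCD and XAS integrals over the L2,3 edges.  With two inequivalent
# Mn sites, the physically meaningful quantity is the site-resolved ratio,
# averaged afterwards, not the ratio of the summed integrals.
r1, q1 = 0.020, 10.0    # hypothetical MnI integrals
r2, q2 = -0.015, 8.0    # hypothetical MnII integrals (antiparallel-coupled site)

per_site_average = 0.5 * (r1 / q1 + r2 / q2)   # average of site-resolved ratios (equal weights assumed)
naive_combined = (r1 + r2) / (q1 + q2)         # ratio of the summed integrals

print(per_site_average, naive_combined)        # the two generally differ
```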
First-principles calculations based on density functional theory (DFT) suggest that the Mn sites have a distinctive band structure with mixed spin-up and spin-down bands at the Fermi level (E F ), which allows transitions between up- and down-spin states and thereby stabilizes the PMA 28−30 . Usually, PMA originates from the anisotropy of the orbital magnetic moments in cases with large exchange splitting, such as Fe and Co, as obtained from second-order perturbation theory for the spin-orbit interaction 31,32 . For Mn compounds, by contrast, the contribution of the orbital moment anisotropy to the PMA is smaller than the spin-flip contribution 33−35 . However, this picture has not been fully confirmed experimentally until now.
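For reference, both the spin-conserving and spin-flip channels discussed above come out of the generic second-order perturbation treatment of the spin-orbit interaction; the schematic textbook form is reproduced below. It is not an equation quoted from this paper, and prefactor conventions for the resulting orbital-moment and dipole terms vary between references.

```latex
% Generic second-order energy correction from the spin-orbit interaction
% H_SO = \xi \mathbf{L}\cdot\mathbf{S}; o and u run over occupied and
% unoccupied states.  Spin-conserving matrix elements produce the
% orbital-moment-anisotropy (first-term) contribution, while spin-flip
% matrix elements produce the dipole/quadrupole (second-term) contribution
% referred to as Eq. (1) in the text.
\Delta E^{(2)} \simeq -\sum_{o,u}
  \frac{\left|\langle u \,|\, \xi\,\mathbf{L}\cdot\mathbf{S} \,|\, o \rangle\right|^{2}}
       {E_{u}-E_{o}}
```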
In this study, we deconvoluted the contribution of each Mn site using systematic XMCD and XMLD measurements for different Mn contents in Mn 3−δ Ga. We discuss the site-specific spin and orbital magnetic moments together with the magnetic dipole term, which corresponds to an electric quadrupole. These are deduced from the angular-dependent XMCD and XMLD and compared with the DFT calculations to understand the PMA microscopically. To deconvolute the MnI and MnII sites in the XMCD spectra, we performed a subtraction of the XMCD spectra of Mn 1 Ga and Mn 3 Ga. In Mn 1 Ga, if m Tz is negative, then Q zz > 0 in the notation m Tz = −Q zz ·m s , corresponding to a prolate spin-density distribution; the second term then favors PMA because its sign is opposite to that of the orbital-moment-anisotropy contribution in the first term. Since 7m Tz is estimated to be on the order of 0.1 µ B from the angular-dependent XMCD between the surface-normal and magic-angle geometries, Q zz is less than 0.01, so an orbital polarization of less than 1% contributes to stabilizing the PMA. In this case, the contribution of the second term in Eq. (1) is one order of magnitude larger than the orbital term, which is essential for explaining the PMA of Mn 3−δ Ga. Third, in a previous study 19 , a quite small ∆m orb and a negligible m Tz were reported for Mn 2 Ga and Mn 3 Ga. That detailed investigation claims that a ∆m orb of 0.02 µ B at the MnI site contributes to PMA and that the MnII site has the opposite sign.
These results are qualitatively consistent with ours; the differences might arise from the sample growth conditions and the experimental setup. Fourth, the small H c in Mn 1 Ga can be explained by the L1 0 -type structure, in which Mn and Ga layers stack alternately, weakening the exchange coupling between the Mn layers. Finally, we comment on the XMCD at the Ga L-edges. It exhibits the same sign as the MnI component, suggesting that the induced moments at the Ga sites derive from the MnI component (Fig. S1), which is substituted by MnII in Mn 2 Ga and Mn 3 Ga.
To determine the effect of m Tz , we performed XMLD measurements (Fig. 3). The difference between the ⟨L x 2 ⟩ and ⟨L z 2 ⟩ terms arising from spin-flipped transitions between occupied (o) and unoccupied (u) states is significant for the gain in PMA energy.
The matrix elements ⟨u↑|L x 2 |o↓⟩ are enhanced for the spin-flipped transition between the yz and z 2 orbitals, and ⟨u|L z 2 |o⟩ is enhanced for the spin-conserved transition between xy and x 2 − y 2 28 . These transitions favor magnetic dipole moments of prolate shape (Q zz = ⟨3L z 2 − L 2 ⟩ > 0), described in terms of the orbital angular momenta of the Mn 3d orbitals. We emphasize that the signs of ∆m orb and Q zz are opposite, which is essential for stabilizing the PMA through the second term of Eq. (1). The PMA energy of FePt is on the order of MJ/m 3 , and the contribution of the second term from Pt is four times larger than the Fe orbital anisotropy energy 28 . Therefore, MnGa has a specific band structure, shaped by the crystalline anisotropy elongated along the c-axis and the intra-atomic Coulomb interaction at the Mn sites, that enhances the PMA without heavy-metal atoms.
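A back-of-the-envelope version of the magnitude estimate given earlier (7m Tz on the order of 0.1 µ B ) is sketched below, using the stated notation m Tz = −Q zz ·m s . The assumed Mn spin moment of 2.5 µ B is a typical order of magnitude for Mn 3d moments and is not a value quoted in the paper.

```python
# Illustrative order-of-magnitude check of Q_zz.  The spin moment m_s = 2.5 mu_B
# is an assumed typical value, not one reported in the paper.
seven_m_Tz = 0.1            # |7*m_Tz| in mu_B, from the angular-dependent XMCD
m_Tz = -seven_m_Tz / 7.0    # m_Tz is negative for the prolate case discussed
m_s = 2.5                   # assumed Mn spin moment in mu_B

Q_zz = -m_Tz / m_s          # from m_Tz = -Q_zz * m_s, giving Q_zz > 0 here
print(f"m_Tz ~ {m_Tz:.3f} mu_B  ->  Q_zz ~ {Q_zz:.4f}")  # below 0.01, i.e. ~1% orbital polarization
```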
In conclusion, we investigated the origin of PMA in Mn 3−δ Ga by decomposing the XMCD, XMLD, and DFT results into the two kinds of Mn sites. The contribution of the orbital moment anisotropy in Mn 3 Ga is small, whereas the mixing between the Mn 3d up and down states is significant for the PMA, producing a spin-flip process via electron hopping between allowed orbital symmetries in the 3d states through the quadratic (second-order) contribution. The composition dependence reveals that the orbital magnetic moments of the two antiparallel-coupled Mn components are too small to explain the PMA.
These results suggest that the quadrupole-like spin-flipped states arising from the anisotropic charge distribution are the hidden source of the PMA in Mn 3−δ Ga.
XMCD and XMLD measurements
The XMCD and XMLD measurements were performed at BL-7A and BL-16A of the Photon Factory at the High Energy Accelerator Research Organization (KEK). For the XMCD measurements, the photon helicity was fixed, and a magnetic field of ±1.2 T was applied parallel to the incident polarized soft x-ray beam to define the µ + and µ − spectra. The total-electron-yield mode was adopted, and all measurements were performed at room temperature. The XAS and XMCD measurement geometry was normal incidence, so that the photon helicity and the magnetic field were parallel to each other and normal to the surface, enabling measurement of absorption processes involving the normal components of the spin and orbital angular momenta 36 . For the XMLD measurements, remnant states magnetized along the perpendicular easy axis were used. For grazing-incidence XMLD and XLD measurements, the angle between the incident beam and the sample surface normal was kept at a 60° tilt, as shown in the inset of Fig. 3.

AUTHOR CONTRIBUTIONS

… constructed the scenario of quadrupole physics. All authors discussed the results and wrote the manuscript.
COMPETING INTERESTS
The authors declare no competing financial interests.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2,651.6 | 2020-05-07T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Opposite Transcriptional Regulation in Skeletal Muscle of AMP-activated Protein Kinase γ3 R225Q Transgenic Versus Knock-out Mice*
AMP-activated protein kinase (AMPK) is an evolutionarily conserved heterotrimer important for metabolic sensing in all eukaryotes. The muscle-specific isoform of the regulatory γ-subunit of the kinase, AMPK γ3, has an important role in glucose uptake, glycogen synthesis, and fat oxidation in white skeletal muscle, as previously demonstrated by physiological characterization of AMPK γ3 mutant (R225Q) transgenic (TgPrkag3225Q) and γ3 knock-out (Prkag3-/-) mice. We determined AMPK γ3-dependent regulation of gene expression by analyzing global transcription profiles in glycolytic skeletal muscle from γ3 mutant transgenic and knock-out mice using oligonucleotide microarray technology. Evidence is provided for coordinated and reciprocal regulation of multiple key components in glucose and fat metabolism, as well as skeletal muscle ergogenics in TgPrkag3225Q and Prkag3-/- mice. The differential gene expression profile was consistent with the physiological differences between the models, providing a molecular mechanism for the observed phenotype. The striking pattern of opposing transcriptional changes between TgPrkag3225Q and Prkag3-/- mice identifies differentially expressed targets being truly regulated by AMPK and is consistent with the view that R225Q is an activating mutation, in terms of its downstream effects. Additionally, we identified a wide array of novel targets and regulatory pathways for AMPK in skeletal muscle.
AMP-activated protein kinase (AMPK) 2 is a critical regulator of carbohydrate and fat metabolism in eukaryotic cells (reviewed in Refs. 1 and 2). AMPK is a heterotrimer that consists of α-, β-, and γ-subunits, all of which are required for its activity. The catalytic α-subunit contains a conventional serine/threonine protein kinase domain, and phosphorylation of the Thr-172 residue within the activation loop of the α-subunit by upstream kinases is essential for the activity of the heterotrimer (3-6). Once phosphorylated at Thr-172, AMPK can be further activated by allosteric binding of AMP to the evolutionarily conserved cystathionine β-synthase domains in the regulatory γ-subunit (7). The AMPK β-subunit acts as a scaffold for binding of the α- and γ-subunits (8). The β-subunit also contains a glycogen-binding domain, and recent findings provide evidence that this motif is involved in targeting the AMPK complex to cellular glycogen stores (9,10). The mammalian genome contains seven AMPK genes encoding two α-, two β-, and three γ-isoforms. Thus, there are 12 possible combinations of heterotrimeric AMPK, and the physiological function of the AMPK holoenzyme depends on the particular isoforms present in the complex.
We have provided evidence that AMPK γ3 is the predominant γ-isoform expressed in glycolytic (white, fast-twitch type II) skeletal muscle (11). In contrast, it is expressed at low levels in oxidative (red, slow-twitch type I) skeletal muscle and is undetectable in brain, liver, heart, or white adipose tissue (11). Thus, the AMPK γ3-subunit is the only isoform exhibiting tissue-specific expression. Furthermore, the AMPK γ3-subunit primarily forms heterotrimers with the α2- and β2-isoforms in glycolytic skeletal muscle (11).
The functional significance of the AMPK γ3-subunit has been demonstrated by phenotypic analysis of animal models carrying a mutated form of the gene. The dominant Rendement Napole (RN) phenotype identified in Hampshire pigs is caused by a single missense mutation (R225Q) in the AMPK γ3-subunit (12). RN pigs have a 70% increase in glycogen content in skeletal muscle, whereas liver and heart glycogen content remains unchanged (13). Furthermore, RN carriers are also characterized by a higher oxidative capacity in white skeletal muscle fibers (14,15). Conversely, a second mutation (V224I), identified in pigs at the neighboring amino acid residue of the γ3-protein, is associated with the opposite phenotype, namely reduced skeletal muscle glycogen content (16). Characterization of transgenic mice with skeletal muscle-specific expression of the mutant (R225Q) form of the mouse AMPK γ3-subunit, as well as of AMPK γ3-subunit knock-out mice, provided further evidence that the γ3-subunit plays a key role in skeletal muscle carbohydrate and lipid metabolism. Glycogen resynthesis after exercise was impaired in AMPK γ3 knock-out mice but was markedly enhanced in transgenic mutant mice. An AMPK activator failed to increase skeletal muscle glucose uptake in knock-out mice, whereas insulin-mediated glucose uptake was unaltered. When fed a high-fat diet, γ3 R225Q transgenic mice were protected against excessive triglyceride accumulation and insulin resistance in skeletal muscle, presumably owing to an increase in fat oxidation (17). Additionally, skeletal muscle from γ3 R225Q mutant mice is characterized by enhanced work performance, whereas knock-out mice are fatigue-prone (18).
To further characterize the role of AMPK γ3 in skeletal muscle and to uncover molecular mechanisms explaining the phenotypic consequences of mutations in this isoform, we studied AMPK γ3-dependent gene transcription by a systematic approach, using global analysis of the mRNA expression pattern in skeletal muscle of γ3 R225Q mutant and γ3 knock-out mice. Here we describe distinct biomarker patterns comprising AMPK γ3-dependent transcriptional changes of genes involved in glucose and lipid metabolism and skeletal muscle ergogenics.
EXPERIMENTAL PROCEDURES
AMPK Knock-out (Prkag3−/−) and R225Q Transgenic (TgPrkag3 225Q ) Mice-The Prkag3−/− and TgPrkag3 225Q mice used in this study have been described previously (17). Prkag3−/− mice were created by conventional gene-targeting techniques. TgPrkag3 225Q mice express the mutant γ3 R225Q subunit under the control of mouse myosin light chain promoter and enhancer elements. Mice used in the study were bred into the C57BL/6 genetic background. Mice were maintained in a 12-h light-dark cycle and were cared for in accordance with regulations for the protection of laboratory animals. The study was performed after prior approval from the local ethical committee. Gene expression profiles were characterized in male mice fasted overnight (food was removed 16 h prior to study). The white portion of the gastrocnemius muscle was dissected from anesthetized mice, cleaned of fat and blood, and quickly frozen in liquid nitrogen as described (17).
Preparation of Total RNA-Total RNA was isolated from the white portion of the gastrocnemius muscle using the RNeasy Fibrous Mini Kit (Qiagen) with a Mixer Mill MM 301 (Retsch), followed by a DNase digestion step using the RNase-Free DNase Set (Qiagen) according to the manufacturer's instructions. The RNA yield was quantified by spectrophotometric analysis, and the RNA purity was determined from the A 260 /A 280 ratio. The quality of the RNA was confirmed by Agilent 2100 Bioanalyzer analysis (Agilent Technologies) using the RNA 6000 Nano Assay Kit (Agilent Technologies).
Preparation of cRNA, GeneChip Hybridization-10 µg of total RNA spiked with poly-A controls (pGIBS-TRP, -THR, and -LYS, American Type Culture Collection) was converted to cDNA using a T7 promoter-polyT primer (Affymetrix) and the reverse transcriptase Superscript II (Invitrogen), followed by second-strand cDNA synthesis (Invitrogen). Double-stranded cDNA was in vitro transcribed to biotinylated cRNA (Enzo) and then fragmented (Invitrogen). The fragmented cRNA was mixed with control oligonucleotide B2 (Affymetrix) and a hybridization control cRNA mixture (BioB, BioC, BioD, and Cre, Affymetrix). Aliquots of each sample were hybridized (16 h at 45 °C) to GeneChip Mouse Expression Set 430A arrays (Affymetrix). The arrays were subsequently washed, stained, and scanned according to the manufacturer's instructions (GeneChip Expression Analysis Technical Manual, Affymetrix).
Quantitative Real-time PCR-Quantification of mRNA levels for selected genes was performed by qRT-PCR as described (19), using acidic ribosomal phosphoprotein P0 (Arbp) as the endogenous control (see supplemental Table SI for primer information). qRT-PCR was performed on an extended set of samples comprising 7 Prkag3−/− mice with 8 wild-type littermates and 13 TgPrkag3 225Q mice with 10 wild-type littermates, whereas RNA from 6 animals in each group was used in the gene array analysis.
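The qRT-PCR quantification itself is only cited ("as described (19)"). A common relative-quantification scheme with an endogenous control such as Arbp is the comparative ΔΔCt method, sketched below under that assumption; the Ct values are invented for illustration and are not from the study.

```python
# Hypothetical sketch of relative quantification with an endogenous control (Arbp)
# using the comparative ddCt method.  The paper cites reference (19) for its
# quantification; ddCt is assumed here only for illustration.
def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Return the fold change of a target gene relative to a calibrator sample."""
    d_ct_sample = ct_target - ct_control        # normalise to the endogenous control
    d_ct_ref = ct_target_ref - ct_control_ref   # same for the calibrator (e.g. wild type)
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Example: a transgenic sample versus a wild-type calibrator (invented Ct values).
print(relative_expression(ct_target=24.1, ct_control=18.0,
                          ct_target_ref=25.0, ct_control_ref=18.2))
```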
Histochemistry-Enzyme activity staining for succinate dehydrogenase and cytochrome c oxidase was done on serial cross-sections (10-µm thickness) of frozen gastrocnemius muscle as described previously (20,21). For succinate dehydrogenase activity staining, sections were incubated for 4 min in a 0.1 M phosphate buffer (pH 7.6) containing 5 mM EDTA, 45 mM disodium succinate, 1.2 mM nitro blue tetrazolium, 1 mM potassium cyanide, and 1 mM phenazine methosulfate. Cytochrome c oxidase activity staining was performed by incubating sections for 1 h in a 50 mM phosphate buffer (pH 7.6) containing 0.22 M sucrose, 2.3 mM 3,3′-diaminobenzidine tetrahydrochloride, 1 mM cytochrome c, and 1300 units of catalase.
RESULTS
Microarray Analysis of the mRNA Expression in the Skeletal Muscle of AMPK γ3 Knock-out (Prkag3−/−) and R225Q Transgenic (TgPrkag3 225Q ) Mice-To determine the role of γ3-containing AMPK complexes in the regulation of gene expression in skeletal muscle, we utilized mouse models that either lack the AMPK γ3-protein (Prkag3−/−) or express an R225Q mutant form of this protein in skeletal muscle (TgPrkag3 225Q ) (17). In Prkag3−/− mice, AMPK γ3-protein expression is completely ablated and, importantly, no compensatory increase in the γ1- or γ2-isoform is detected (17). Equally important, AMPK expression in TgPrkag3 225Q mice resembles the expression pattern in wild-type mice, both with regard to tissue distribution and protein expression, with the mutant (R225Q) form replacing the endogenous AMPK γ3-protein (17). Global gene expression profiles in the white portion of the gastrocnemius muscle of Prkag3−/− and TgPrkag3 225Q mice were compared with those of the corresponding wild-type littermates using oligonucleotide microarrays. The expression of 167 genes was significantly (p ≤ 0.05) changed by a factor of 20% or more in TgPrkag3 225Q and/or Prkag3−/− mice relative to the wild-type controls (Table 1). Applying the same filtering criteria to randomly created groups within the Prkag3−/− and TgPrkag3 225Q datasets resulted in only six genes determined as differentially expressed, indicating that the rate of false positives is low. Consequently, the vast majority of the genes appearing differentially expressed in Prkag3−/− and/or TgPrkag3 225Q mice can be considered truly regulated.
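The filtering and the randomized-group check described above can be sketched as follows. The thresholds (|fold change| > 1.2, p ≤ 0.05, mean intensity > 75 in the higher-expressing group) follow the text, but the data layout and the use of a Welch t-test are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of the differential-expression filter and the randomized-group
# false-positive check; illustrative only, not the authors' analysis code.
import numpy as np
import pandas as pd
from scipy import stats

def de_filter(expr: pd.DataFrame, group_a, group_b):
    """Return probes passing the fold-change / p-value / intensity criteria.

    expr: probes x samples intensity matrix; group_a/group_b: lists of columns.
    """
    mean_a, mean_b = expr[group_a].mean(axis=1), expr[group_b].mean(axis=1)
    fold = mean_b / mean_a
    pvals = stats.ttest_ind(expr[group_a], expr[group_b], axis=1, equal_var=False).pvalue
    keep = (
        (np.abs(np.log2(fold)) > np.log2(1.2))      # |fold change| > 1.2 either direction
        & (pvals <= 0.05)                            # nominal significance
        & (np.maximum(mean_a, mean_b) > 75)          # mean intensity > 75 in higher group
    )
    return expr.loc[keep]

# False-positive estimate: shuffle the sample labels within one genotype, re-apply
# de_filter with the shuffled groups, and count how many probes still pass; a low
# count (six genes in the text) supports the reliability of the real hits.
```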
Of the 167 differentially expressed transcripts, the identity of 148 genes is known, representing proteins of different functional classes, whereas 19 transcripts show homology only to sequences in the EST or genomic databases. Interestingly, the expression level of 16 genes was significantly changed in both AMPK γ3 R225Q transgenic and AMPK γ3 knock-out mice, compared with their respective wild-type littermates (Table 1). For these 16 transcripts, the direction of the observed change was opposite in knock-out versus mutant transgenic mice. Furthermore, most of the genes that were significantly changed exclusively in Prkag3−/− mice tended to be regulated in an opposite manner in TgPrkag3 225Q mice, even though this difference did not reach statistical significance and/or meet the fold-change criteria. Correspondingly, the vast majority of transcripts that were differentially regulated exclusively in R225Q transgenic mice exhibited an opposite trend in knock-out mice. The striking pattern of opposing transcriptional changes in AMPK γ3 R225Q transgenic versus knock-out mice, as compared with their wild-type littermates, is illustrated in Fig. 1.
Many of the genes found to be differentially expressed in Prkag3−/− and/or TgPrkag3 225Q mice have not previously been described as regulated by AMPK. The full functional significance of these changes in the global transcriptional profile remains to be addressed in further experiments. To determine possible mechanistic explanations for the previously described physiological differences between Prkag3−/− and TgPrkag3 225Q mice (17,18), we performed a more detailed analysis of gene expression changes for targets known to be involved in lipid and carbohydrate metabolism and muscle ergogenics. The expression of several genes involved in these functions depends on the skeletal muscle fiber type, and, correspondingly, any alteration in skeletal muscle fiber type composition might contribute to expression differences. However, the relative expression of slow and fast myosin and troponin isoforms remained unchanged in TgPrkag3 225Q or Prkag3−/− mice versus their respective wild-type littermates (data not shown). Moreover, enzyme activity staining for succinate dehydrogenase and cytochrome c oxidase (markers for oxidative energy metabolism that stain red muscle fibers containing many mitochondria more intensively than white fibers with fewer mitochondria) did not show any clear alteration in fiber type composition between the different genotypes (supplementary Fig. S1). Taken together, these data indicate that the differences in the transcriptional profile we describe are independent of skeletal muscle fiber type changes.

[Table 1 legend: Global mRNA expression pattern was characterized in the white portion of the gastrocnemius skeletal muscle in male mice of C57BL/6 genetic background. The filtering criteria were a mean absolute fold change > 1.2 and a p value ≤ 0.05; in addition, the mean intensity in the group showing the highest expression had to be > 75. Table 1 lists the Public ID, Gene symbol, and Gene title for each transcript.]

qRT-PCR Validation of Differentially Expressed Genes-To minimize erroneous conclusions due to the technical variability of the microarray technology, qRT-PCR analysis was applied to validate the expression profiles of 13 genes selected on the basis of biological relevance (Fig. 2). For all transcripts examined, the qRT-PCR data verified the significant differences in gene expression (fold change > 1.2; p ≤ 0.05) detected by the gene array analysis. Furthermore, individual animal-to-animal comparison of the expression profiles for these genes showed a close-to-perfect correlation between the two techniques (data not shown). The high level of correlation between the expression profiles generated by the microarray and qRT-PCR approaches illustrates the reliability of the gene array results. However, for 11 of the 13 transcripts examined, the microarray data tended to underestimate the expression change compared with the qRT-PCR results.
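A minimal sketch of the cross-platform comparison described above is shown below: correlate the fold changes measured on the microarray with those from qRT-PCR for the validated genes. The numerical values are invented; the paper reports a close-to-perfect correlation but gives no numbers in this passage.

```python
# Illustrative comparison of microarray vs. qRT-PCR fold changes (invented values).
import numpy as np
from scipy import stats

genes = ["Ugp2", "Gbe1", "Map2k6", "Dusp10", "Nos1"]
log2_fc_array = np.array([0.4, 0.5, 0.3, -0.4, 1.0])   # hypothetical microarray log2 fold changes
log2_fc_qpcr = np.array([0.6, 0.7, 0.5, -0.6, 1.3])    # hypothetical qRT-PCR log2 fold changes

for g, a, q in zip(genes, log2_fc_array, log2_fc_qpcr):
    print(f"{g}: array {a:+.2f}, qPCR {q:+.2f}")

r, p = stats.pearsonr(log2_fc_array, log2_fc_qpcr)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A regression slope of qPCR on array above 1 would reflect the tendency of the
# array to underestimate the change, as noted for 11 of the 13 genes.
```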
Altered Expression of Components of the Glycogen Synthesis Pathway in AMPK γ3 Knock-out and R225Q Transgenic Mice-AMPK function is closely connected to glycogen storage. In human and rat skeletal muscle, high glycogen content represses AMPK activity (22,23). Concomitantly, there is also genetic evidence that AMPK regulates glycogen levels, because mutations of the γ3- or γ2-subunit affect glycogen storage in skeletal muscle of RN pigs or in human cardiac muscle, respectively (12,16,24). Furthermore, AMPK γ3 R225Q transgenic and γ3 knock-out mice show a respective increase or decrease in the rate of glycogenesis in the recovery phase following exercise (17). Notably, the expression of two transcripts encoding proteins involved in glycogen synthesis, Ugp2 and Gbe1, was significantly up-regulated in TgPrkag3 225Q mice and significantly down-regulated in Prkag3−/− mice, as determined by the microarray analysis (Table 1). The observed change in the expression pattern was further confirmed by qRT-PCR (Fig. 2). Ugp2 encodes UDP-glucose pyrophosphorylase 2 (EC 2.7.7.9), an enzyme catalyzing the synthesis of UDP-glucose, the common substrate for glycogenin and glycogen synthase. Glycogenin catalyzes the first step in glycogen synthesis, a self-glycosylation reaction that forms an oligosaccharide chain of around eight residues in length (25). Glycogen synthase (EC 2.4.1.11), with the participation of the glycogen branching enzyme Gbe1 (EC 2.4.1.18), then elongates the oligosaccharide chain to form a mature glycogen molecule. Thus, coordinated changes in Ugp2 and Gbe1 expression are likely to contribute to the differences in glycogen synthesis rate observed between Prkag3−/− and TgPrkag3 225Q muscle.
Coordinated and Reciprocal Changes in the Expression of Map2k6 and Dusp10 Genes in Prkag3−/− and TgPrkag3 225Q Muscle-Based on correlative evidence, stimulation of glucose uptake has been reported to be partly regulated by Mapk14 (also known as p38 MAPK), a downstream target of AMPK (26-30). However, the exact mechanism by which activation of AMPK would lead to an increase in Mapk14 phosphorylation remains obscure. Interestingly, two transcripts encoding proteins involved in the regulation of Mapk14 activity, Map2k6 and Dusp10, were coordinately regulated in γ3 R225Q transgenic and knock-out mice, as shown by microarray analysis as well as qRT-PCR (Fig. 2). The protein encoded by Map2k6 (mitogen-activated protein kinase kinase 6) activates Mapk14 by dual phosphorylation of specific threonine and tyrosine residues (31). Dusp10 (dual specificity phosphatase 10), on the other hand, down-regulates the enzymatic activity of MAPKs by dephosphorylating these threonine and tyrosine residues, with selectivity toward Mapk14 (32). Thus, the finding of an up-regulation of Map2k6 combined with suppression of the Dusp10 transcript in TgPrkag3 225Q mice, and the reversed pattern of changes in Prkag3−/− mice, suggests that Mapk14 is a target of AMPK γ3-containing trimers in skeletal muscle. The up-regulation of Map2k6 mRNA in TgPrkag3 225Q muscle was accompanied by an increase in protein level, as seen by Western blot analysis (data not shown).
γ3-Containing AMPK Heterotrimers Regulate Lipid Metabolism Gene Expression in Skeletal Muscle-The AMPK γ3-subunit has previously been shown to be involved in the regulation of fat oxidation. Pigs and mice carrying the R225Q mutation in the AMPK γ3-gene are characterized by increased lipid oxidation in skeletal muscle (14,15,17). In the microarray and qRT-PCR analyses, several genes involved in fat metabolism were differentially expressed in TgPrkag3 225Q and Prkag3−/− mice, suggesting that γ3-containing AMPK complexes regulate lipid oxidation in skeletal muscle at the transcriptional level. mRNA for Srebf1 (sterol regulatory element binding factor 1), implicated in lipogenic gene expression (33), was down-regulated in γ3 R225Q mutant mice, whereas mRNA encoding Ppargc1a (peroxisome proliferator-activated receptor γ coactivator 1α), known to increase the expression of both nuclear- and mitochondrial-encoded genes of oxidative metabolism (34), was up-regulated. Additionally, a key gene for free fatty acid uptake (Cd36 (35)), as well as genes involved in the use of fat-derived energy (3-oxoacid-CoA transferase 1, Oxct1, EC 2.8.3.5, and carboxylesterase 3, Ces3, EC 3.1.1.1), were up-regulated in TgPrkag3 225Q mice. The opposite pattern of changes was observed in γ3 knock-out mice, with the differential expression of mRNA for Cd36, Oxct1, and Ppargc1a reaching statistical significance (Fig. 2).

Possible Role for the AMPK γ3-Subunit in the Regulation of Cellular Iron Homeostasis-The expression of several genes involved in iron metabolism was coordinately changed in AMPK γ3 R225Q transgenic mice compared with their wild-type littermates. The transferrin receptor, ferritin, and aminolevulinic acid synthase (a key enzyme of heme synthesis) regulate the uptake, storage, and use of iron in cells, respectively. Expression of these markers is known to be coordinately controlled by the cellular iron supply, such that under conditions of iron starvation the transferrin receptor is up-regulated, whereas ferritin and aminolevulinic acid synthase are down-regulated (36). In TgPrkag3 225Q mice, transcription of the transferrin receptor gene, Tfrc, was decreased, while mRNAs for ferritin light chain 1 and aminolevulinic acid synthase 1 were increased, suggesting improved iron status compared with wild-type mice (Fig. 2). Improved iron status may increase the skeletal muscle capacity for aerobic metabolism (37,38). In line with this, the level of skeletal muscle myoglobin, a heme-carrying protein that transports oxygen to the mitochondria, was non-significantly increased in TgPrkag3 225Q mice, as determined by Western blot analysis (data not shown).
AMPK γ3-Containing Complexes Regulate Transcription of the Nos1 Gene-Nitric oxide synthase 1 (Nos1, also known as nNos) was significantly up- and down-regulated by approximately 2-fold in γ3 R225Q transgenic and knock-out mice, respectively (Fig. 2). The family of Nos enzymes catalyzes the formation of endogenous nitric oxide, a mediator implicated in the regulation of skeletal muscle contractility, mitochondrial function, and glucose uptake (39,40). Thus, the direction of the differential expression of this gene was in complete agreement with the observed phenotypes of the mouse models. Although Nos1 is considered the predominant isoform expressed in skeletal muscle fibers, endothelial Nos3 (eNos) and inducible Nos2 (iNos) are also expressed in this tissue (41-43). Interestingly, Nos2 and Nos3 were represented on the oligonucleotide microarray and were detected in the skeletal muscle samples; however, the mRNA levels of these two genes were unaltered.
DISCUSSION
The present study provides the first systematic characterization of the role of γ3-containing AMPK heterotrimers in transcriptional regulation in skeletal muscle. Oligonucleotide microarray technology was used to compare global transcriptional profiles in white skeletal muscle from AMPK γ3 R225Q transgenic (TgPrkag3 225Q ) and knock-out (Prkag3−/−) mice and their respective wild-type littermates. Collectively, the evidence demonstrates an important role of the AMPK γ3-subunit in the transcriptional regulation of divergent groups of genes in glycolytic skeletal muscle. The expression of 167 genes was significantly changed (fold change > 1.2 and p ≤ 0.05) in skeletal muscle from γ3 R225Q transgenic and/or knock-out mice. A number of genes from the same biological pathways were coordinately regulated. Changes in mRNA levels of particular interest, including genes involved in glycogen and lipid metabolism, as well as iron homeostasis and Mapk14 signaling, were further verified by qRT-PCR (Fig. 2). For all 13 selected genes, the qRT-PCR results were consistent with the gene array data, which emphasizes the quality of the microarray results. Of note, most of the genes differentially regulated in TgPrkag3 225Q mice were changed in an opposing manner in AMPK γ3 knock-out mice (Fig. 1). The reciprocal and coordinated expression pattern increases confidence in the gene array data and demonstrates that the majority of differentially expressed genes are true positives.
This reversed transcriptional pattern in TgPrkag3 225Q versus Prkag3−/− mice is intriguing, considering that the mechanism of action of the R225Q substitution in the AMPK γ3-subunit is still unresolved. A previous report provided evidence that the kinase activity of AMPK was reduced by approximately 3-fold in skeletal muscle of γ3 R225Q mutant pigs when measured in the presence and absence of the allosteric activator AMP (12). Nevertheless, in resting muscle from TgPrkag3 225Q mice, AMPK activity was unaltered (18). The excessive glycogen content that characterizes skeletal muscle of mice and pigs carrying the γ3 R225Q mutation may inhibit AMPK activation by a feedback mechanism. Consistent with this hypothesis, AMPK activity and phosphorylation of the Thr-172 residue in the α-subunit were elevated in Cos cells transiently transfected with plasmids encoding α2β2γ3 R225Q, compared with cells transfected with plasmids encoding wild-type trimers (17). Interestingly, introduction of the R225Q mutation at the corresponding site of AMPK γ1 (R70Q) or γ2 (R302Q) leads to increased or decreased kinase activity, respectively, when measured in the presence of AMP (44,45). Our data support a role for γ3 R225Q as an activating mutation, as judged by its biological effects, because this substitution produced changes in the gene expression profile opposite to those resulting from genetic ablation of the AMPK γ3-subunit (Fig. 1).
The wide array of differentially expressed genes reported in this study has not previously been identified as regulated by AMPK. Moreover, the biological function of a number of the regulated transcripts is poorly described in skeletal muscle, and 19 of the differentially expressed genes remain unknown. Therefore, additional molecular and functional studies will be required to understand the full biological significance of the transcriptional changes described here. Importantly, the expression of several genes with known functions in fat and carbohydrate metabolism and skeletal muscle ergogenics was coordinately and reciprocally changed in AMPK γ3 R225Q transgenic and knock-out mice. Thus, transcriptional regulation by AMPK γ3-containing complexes could underlie at least some of the physiological differences observed in skeletal muscle from TgPrkag3 225Q and Prkag3−/− mice (Fig. 3).

[Fig. 3 legend, partial: ... and Prkag3−/− mice (indicated by blue arrows), compared with wild-type littermates, and the predicted physiological response (B). The role of AMPK γ3 in the regulation of functions such as glycogen deposition, glucose uptake, oxidative metabolism, and muscle fatigue has previously been described (17,18). The current study provides a transcriptional mechanism for the physiological differences between the genotypes.]
One of the most distinct physiological differences between skeletal muscle from AMPK γ3 R225Q transgenic and knock-out mice is the change in glycogen metabolism (17). Glycogen content in skeletal muscle from TgPrkag3 225Q mice is 2-fold higher under both fed and fasted conditions. Furthermore, skeletal muscle glycogen re-synthesis after exercise is markedly enhanced in TgPrkag3 225Q mice, whereas glycogenesis after exercise was impaired in Prkag3−/− mice (17). Glycogen synthase catalyzes a rate-determining step in the glycogen biosynthesis pathway. However, at least in situations where glycogen synthase is activated, the reactions mediated by other enzymes of the synthesis pathway can become rate-limiting (46). Interestingly, the microarray data and qRT-PCR analysis revealed a significant up- and down-regulation of Ugp2 and Gbe1 mRNAs (encoding two key enzymes in glycogen synthesis) in skeletal muscle from AMPK γ3 R225Q transgenic and knock-out mice, respectively. In agreement with these results, the protein level and enzyme activity of Ugp2 and Gbe1 are increased in RN pigs carrying the R225Q mutant form of the AMPK γ3-subunit (14,47). Based on these observations, we hypothesize that the differences in the skeletal muscle glycogen re-synthesis rate displayed by γ3 R225Q transgenic and knock-out mice may be at least partly explained by coordinated differential transcription of Ugp2 and Gbe1.
Activation of AMPK, either by physiological stimulation such as muscle contraction or by the pharmacological activator 5-amino-4-imidazolecarboxamide riboside (AICAR), leads to an increase in skeletal muscle glucose transport by promoting Glut4 translocation to the cell surface (48), as well as by increasing Glut4 gene transcription (49). Importantly, the AICAR effect on glucose uptake is completely abolished in AMPK γ3 knock-out mice, suggesting that the γ3-subunit is essential for AICAR-induced glucose transport in skeletal muscle (17). Nonetheless, our microarray data did not detect any difference in the expression of the Glut4 gene when comparing skeletal muscle from TgPrkag3 225Q and Prkag3−/− mice. Mapk14 (also known as p38 MAPK) has been implicated as a downstream mediator of AMPK signaling to glucose transport in response to AICAR. In this context, it is interesting to note that mRNAs encoding two proteins involved in regulating the phosphorylation state of Mapk14, Map2k6 and Dusp10, were coordinately and reciprocally regulated in AMPK γ3 R225Q transgenic and knock-out mice. Another mediator implicated in AMPK-regulated glucose transport is nitric-oxide synthase, Nos (50). The microarray and qRT-PCR analyses revealed that the mRNA encoding Nos1, the predominant nitric-oxide synthase isoform in skeletal muscle, was significantly up- and down-regulated by approximately 2-fold in TgPrkag3 225Q and Prkag3−/− muscle, respectively. Thus, the present study provides the first evidence of Nos1 being a transcriptional target of AMPK signaling.
Previously, the AMPK γ3-subunit has been shown to influence muscle ergogenics (18). In response to electrically stimulated muscle contraction, isolated EDL muscle from TgPrkag3 225Q mice is resistant to fatigue, whereas skeletal muscle from Prkag3−/− mice is fatigue-prone (18). The increase in glycogen content in skeletal muscle of TgPrkag3 225Q mice may have a direct positive effect on muscle performance. Additionally, skeletal muscle is known to adapt to endurance exercise by increasing its oxygen-carrying capacity. Our microarray data demonstrate a coordinated up-regulation of genes involved in the storage (Ftl1) and use (Alas1) of cellular iron in TgPrkag3 225Q mice, which may indicate improved iron status and, correspondingly, increased oxidative capacity in skeletal muscle. Accordingly, the level of the skeletal muscle oxygen-carrying protein myoglobin was increased in TgPrkag3 225Q mice compared with wild-type mice.
Endurance exercise depends on the oxidation of fatty acids as a major source of ATP. Two transcription factors implicated in the regulation of fatty acid homeostasis, Ppargc1a and Srebf1, as well as several genes necessary for the transport (Cd36) and utilization (Ces3 and Oxct1) of fatty acids, were differentially expressed in skeletal muscle from TgPrkag3 225Q mice, suggesting increased fatty acid availability and oxidation. In agreement with these observations, an elevated reliance on fat-derived energy has been described in γ3 R225Q transgenic mice during exercise, as well as after challenge with a high-fat diet (17,18).
We have applied global transcriptome analysis by oligonucleotide microarrays to systematically characterize the role of the AMPK γ3-subunit in the regulation of gene expression. Recently, a number of studies have used qRT-PCR analysis to characterize the role of AMPK complexes in transcriptional regulation (51,52). However, gene array approaches allow concurrent analysis of global mRNA profiles and offer the advantage of reducing bias in data collection compared with candidate gene-based approaches. Taken together, our data indicate that a number of genes involved in carbohydrate and fat metabolism, as well as skeletal muscle ergogenics, are coordinately and reciprocally regulated in skeletal muscle from AMPK γ3 R225Q transgenic and knock-out mice, and they provide a molecular mechanism for the previously described physiological differences between the models. Furthermore, the current study identifies AMPK γ3-containing complexes as important regulators of novel downstream targets and pathways in skeletal muscle.
AMPK has been identified as an attractive therapeutic target for the treatment of obesity and type II diabetes (53). However, an ideal therapeutic small molecule should modify the activity of AMPK heterotrimers in metabolic tissues such as skeletal muscle, to increase fatty acid oxidation and glucose uptake, without exerting any effect on organs including the central nervous system, heart, pancreas, or lung, where AMPK activation could potentially have harmful consequences (54-57). Therefore, therapeutic agents that selectively modulate the activity of γ3-containing AMPK heterotrimers could provide a way to specifically target skeletal muscle and thereby prevent adverse effects in other organs. The current study contributes to the understanding of the role of the AMPK γ3-subunit in the regulation of gene transcription and defines several sets of potential biomarkers for characterizing the molecular effects of lead substances that ideally mimic the TgPrkag3 225Q phenotypes in mice and pigs. | 6,664 | 2006-03-17T00:00:00.000 | [
"Biology"
] |