Large-Scale Multiplexed Azopolymer Gratings with Engineered Diffraction Behavior

The diffraction of polychromatic light from periodic superficial structures is often responsible for the structural colors observed in Nature. Similarly, engineered microtextures fabricated on metallic or dielectric surfaces can be used to design diffracted optical patterns with desired shapes and colors. To this aim, advanced diffraction gratings with exceptional design and functionality are continuously proposed, and new fabrication methods follow to keep pace with the improving design capabilities. Multiplexed surface reliefs, acting as complex gratings with tunable diffraction behavior, can be readily produced on films of azobenzene-containing materials by exposing the surface to controlled sequences of holographic interference patterns. This work fully investigates, both theoretically and experimentally, the use of light-induced surface reliefs on azopolymers for the realization of large-scale multiplexed gratings with optimized diffraction performance. Reconfigurable diffraction gratings, able to diffract polychromatic light in the same direction with controllable relative color intensities by tuning the exposure parameters in a switchable two-beam interference setup, are designed and fabricated. The results can be generalized to more complex diffractive devices, usable in emerging display application areas. The fabrication of this class of advanced diffraction gratings requires highly accurate surface structuration techniques. Standard fabrication methods for diffraction gratings include direct machining, in which a ruling engine [9] or an ion beam [21] makes grooves on a grating plate, and holographic photolithography, in which the periodic structure of sinusoidal diffraction gratings is produced by interference lithography on a photoresist, [22] which is subsequently developed and transferred onto a dielectric or metallic support via chemical etching or vapor deposition. However, grating multiplexing with these standard methods is either impossible or requires multiple complex lithographic steps. [10] Recently, holographic inscription of sinusoidal gratings, directly achieved as surface reliefs on azobenzene-containing polymers (azopolymers), has been demonstrated to be a powerful approach for the realization of multiplexed gratings with engineered directional [23][24][25] and chromatic [19] diffraction behavior. The azopolymer structuration process is sensitive to the intensity and polarization distributions of the irradiated field, [26,27] so that both intensity and polarization interference patterns can be used to produce sinusoidal surface relief gratings (SRGs). [28][29][30] The dimension of the structured area depends only on the diameter of the interfering beams, and SRGs on areas as large as several cm 2 can be realized in single lithographic steps. The use of azopolymers as materials for diffraction components [26,31] is further incentivized by the possibility to make the grating reconfigurable and thus usable for multistep structuration processes. The surface structuration of azopolymers is a nondestructive process that, unlike the chemical etching of standard photolithography, involves a light-driven directional [32][33][34][35] polymer mass transport [36,37] that generates a surface relief with the same geometry as the illumination pattern (e.g., sinusoidal for the two-beam interferogram).
[38,39] The polymer can be transported back to restore the original state either optically or thermally, thereby erasing the surface pattern, or moved again by a new lithographic illumination step which combines the preexisting texture with a new one. [29,40,41] We recently explored the light-induced dynamical and reversible behavior of azopolymer surfaces by directly structuring small areas of a polymer film through structured digital holography illumination and optical erasing, achieving advanced state-of-the-art reconfigurable diffractive gratings and lenses, which we applied in operating devices such as motionless monochromators and reconfigurable imaging systems. [16] Here we extend the use of dynamical surface reliefs on the azopolymer to design and characterize large-scale multiplexed gratings realized via sequential two-beam interference lithography. To this aim, we first analyze the theoretical aspects of the diffraction from a general multiplexed grating made of a superposition of sinusoidal reliefs, providing a solid and practical framework for the design of these devices. We then specialize our analysis, by means of theory, simulations, and experiments, to a large-scale three-component 1D multiplexed grating that diffracts red (R), green (G), and blue (B) light in a fixed common diffraction order (RGB grating). A similar grating design has been demonstrated to be suitable for the realization of pixelated azopolymer surfaces with apparent structured colors. [19] We experimentally realize the designed SRGs with high structural control and optimized inscription performance by using a switchable interference setup able to inscribe, monitor in real time, and eventually erase gratings of different periodicity on the azopolymer surface. The results shown here further enlarge the panorama of applications for reconfigurable SRGs, paving the way toward the design and realization of large-scale, flat, and lightweight optical components with engineered diffraction capability, [42] usable in the emerging fields of display technologies for augmented [43] and virtual reality, [8] as well as wearable optical devices.

Diffraction from Sinusoidal Multiplexed Gratings

A typical multiplexed grating, realized as a surface relief on an azopolymer film, can be described as a general structured dielectric surface that modulates the wavefront of an incoming light field. In the scalar theory of light diffraction, [11] the morphological surface profile, e.g., sinusoidal as the one shown in Figure 1A, entirely determines the shape of the emerging wavefront. The resulting modulation is a consequence of the differences in optical path length for light travelling in the surface medium of local thickness h(x,y) and refractive index n, with respect to the same travel length in the medium surrounding the surface (typically air, whose refractive index is n 0 = 1). For a monochromatic plane wave of wavevector k 0 = 2π/λ 0 , propagating along the z axis and incident perpendicularly on the dielectric grating, the phase delay resulting from the modulation reads as:

φ(x, y) = k 0 (n − 1) h(x, y) (1)

Equation (1) is at the basis of the design of general diffractive optical components and constitutes the operative relation between the geometry h(x,y) of the dielectric surface and the optical functionality for any generic diffractive optical element (DOE), [10] including single or multiplexed sinusoidal surfaces.
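As a concrete illustration of Equation (1), the short Python sketch below converts a surface height map into the corresponding phase mask and complex transmission. It is only an illustrative aid, not code from the paper; the grid size and the example period, depth, and refractive index are values chosen here for demonstration.

```python
import numpy as np

def phase_mask(h, wavelength, n, n0=1.0):
    """Phase delay imparted by a dielectric surface relief h(x, y), Eq. (1)."""
    k0 = 2 * np.pi / wavelength
    phi = k0 * (n - n0) * h          # phase delay map
    return phi, np.exp(1j * phi)     # phase and complex transmission e^{i phi}

# Illustrative example: 1D sinusoidal relief with period 2.0 um and depth 500 nm
x = np.linspace(0, 20e-6, 2048)
h = 0.5 * 500e-9 * (1 + np.cos(2 * np.pi * x / 2.0e-6))
phi, t = phase_mask(h, wavelength=633e-9, n=1.7)
print("max phase modulation beta =", phi.max())   # beta = k0 (n - 1) h
```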
The case of a DOE made of a 1D single sinusoidal surface relief is of particular interest for the present analysis, also because some of the relevant properties in the diffraction behavior of generic DOEs are easily generalized from this simple case. Figure 1A shows the schematic representation of a dielectric sinusoidal grating of period Λ and amplitude modulation depth h, described by the one-dimensional surface relief function

h(x) = (h/2) [1 + cos(2πx/Λ)]

This grating diffracts the incoming light field into a finite number of propagating diffraction orders (plus an infinite set of evanescent orders), identified by an integer number m = 0, ±1, ±2, …, emerging from the surface at the angles θ m determined by the light wavelength λ 0 and the grating periodicity Λ ( Figure 1B) via Bragg's relation (see also Supporting Information):

sinθ m = m λ 0 /Λ (2)

For this grating, the diffraction efficiency (DE) η m , which is a measure of the amount of the incident light power converted into the m th propagating order, is related to the maximum phase modulation depth β = k 0 (n − 1)h induced by the sinusoidal surface relief according to the relation ( Figure 1C):

η m = J m 2 (β/2) (3)

Here J m are the Bessel functions of the first kind of order m. For a given material and light wavelength, the parameter β is directly proportional to the grating amplitude h, which is then the actual structural parameter to be tuned to achieve a specific distribution of light power among the diffraction orders in the pattern generated by a sinusoidal grating ( Figure 1B). For example, the behavior of η ±1 is approximately linear with respect to h for relatively small modulation depths ( Figure 1C). Bragg's relation in Equation (2) can be alternatively interpreted in terms of the direction of the wavevector k m = (k x,m , k z,m ) for the m th diffraction order emerging from the dielectric grating ( Figure 1B). Diffraction theory (see also Supporting Information) indeed requires that a propagating order m must have an in-plane wavevector component k x,m = k 0 sinθ m that is an integer multiple of the grating vector g = (g x , g z ) = (2π/Λ, 0) of the diffraction grating or, equivalently, borrowing well-known concepts of solid state physics, that k x,m must be a vector of the reciprocal lattice of the surface. [25] As the reciprocal lattice only depends on the in-plane surface morphology (for a 1D sinusoidal grating, the reciprocal lattice is entirely known by specifying only the grating periodicity Λ), the geometry of the entire diffraction pattern is completely defined by the grating vector g (which is a basis of the reciprocal space). Such considerations, apparently more complicated than Bragg's law for a single sinusoidal grating, are particularly useful in the analysis of the diffraction patterns produced by more complex gratings, like the gratings obtained as a superposition of N sinusoidal gratings (multiplexed grating), each having periodicity Λ l (grating vector g l = (2π/Λ l , 0)) and surface modulation amplitude h l :

h(x) = Σ l=1..N (h l /2) [1 + cos(2πx/Λ l )] (4)

While the results of the scalar diffraction theory could be extended to diffractive surfaces described by Equation (4) (see also Supporting Information), the relevant features of their diffraction patterns can be understood by generalizing the concepts developed for the single sinusoidal grating in the interpretation of Equations (2) and (3).
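The Python sketch below evaluates the grating relation (2) and the Bessel-function efficiencies (3) for a single sinusoidal relief; the numerical values (633 nm light, 2 μm period, n = 1.7, h = 500 nm) are taken from examples discussed in this work, while the script itself is only an illustrative companion, not code from the paper.

```python
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind

def diffraction_angles(wavelength, period, m_max=3):
    """Propagating-order angles from Eq. (2), sin(theta_m) = m*lambda/period."""
    angles = {}
    for m in range(-m_max, m_max + 1):
        s = m * wavelength / period
        if abs(s) <= 1:                      # otherwise the order is evanescent
            angles[m] = np.degrees(np.arcsin(s))
    return angles

def efficiency(m, h, wavelength, n):
    """Order-m diffraction efficiency from Eq. (3), eta_m = J_m(beta/2)^2."""
    beta = 2 * np.pi / wavelength * (n - 1) * h   # maximum phase modulation depth
    return jv(m, beta / 2) ** 2

print(diffraction_angles(633e-9, 2.0e-6))          # first order near 18.5 deg
print(efficiency(1, h=500e-9, wavelength=633e-9, n=1.7))
```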
Diffraction orders produced by a multiplexed grating propagate, following a generalized Bragg's law, at directions θ α (where α = (α 1 , α 2 , …, α N ) is an array of integer numbers) which make the in-plane component of the diffracted wavevector k x,α a vector of the reciprocal lattice of the total surface:

k x,α = k 0 sinθ α = Σ l α l (2π/Λ l ) (5)

By properly tuning the periodicities Λ l , which, analogously to the case of the single sinusoid, completely define the spatial distribution of the spots in the diffraction pattern, and the modulation amplitudes h l , which control the relative intensity of those spots, complex diffraction functionalities (and eventually engineered chromatic behaviors) can be directly encoded in the geometry of a multiplexed grating designed as the superposition of Equation (4). These concepts are used here for the design of the RGB multiplexed grating, which generates a different perceived light color in a specific direction as the weighted superposition of three diffraction orders of a three-component multiplexed grating simultaneously illuminated with light at red, green, and blue wavelengths.

Design of a RGB Multiplexed Grating

Figure 2 describes the design of a RGB multiplexed grating that diffracts light of wavelengths λ 1 = 633 nm (red), λ 2 = 532 nm (green), and λ 3 = 488 nm (blue) at a common angle, arbitrarily chosen as θ target = 18.5° (see also Figure S2, Supporting Information). The periodicities Λ 1 = 2.00 μm, Λ 2 = 1.68 μm, and Λ 3 = 1.54 μm of the three sinusoidal diffraction gratings composing the multiplexed grating are calculated by solving the equation sinθ target = λ l /Λ l , obtained from Equation (2) with m = 1, and graphically represented in Figure 2A. The effect of multiplexing these sinusoidal gratings is simulated in Figure 2B, in which the morphologies of the single components G l are rendered by grayscale topographic images, which overlay in a linear, even superposition (h 1 = h 2 = h 3 = h/3) to compose the multiplexed surface of the RGB grating according to Equation (4). For clearer visualization, the simulated surface profiles are also presented in Figure 2C. According to Equation (5), the diffraction pattern produced by the multiplexed grating is characterized, even under illumination with monochromatic light, by several diffraction spots spatially distributed along the directions of the reciprocal lattice. For a total grating amplitude comparable to the incident light wavelength (e.g., h = 500 nm) and refractive index n = 1.7, most of the diffracted light is contained in the three first orders (defined as the orders for which |α| = 1 in Equation (5)), one of which is the target RGB order of our design (see Figure S2, Supporting Information). To better visualize the spatial distribution of light in the diffraction pattern produced by the multiplexed RGB grating under simultaneous illumination with the three design wavelengths, Figure 2E shows the simulated diffraction pattern that the dielectric surface produces, on a screen placed at large distance (z s = 50.0 mm), in the configuration schematized in Figure 2D. Details about the simulation are given in the Experimental Section. The simulated diffraction pattern clearly shows the presence of a white diffraction spot, in which the three colors are spatially superimposed, confirming the design principle of the RGB grating and, more generally, of the multiplexing approach to engineer a complex diffraction pattern via the tailored superposition of sinusoidal surfaces.
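As a numerical companion to this design step, the short Python sketch below reproduces the period calculation sinθ target = λ l /Λ l and assembles the even-weight multiplexed profile of Equation (4); it is an illustrative sketch only, and the 1D grid used here is an arbitrary choice.

```python
import numpy as np

theta_target = np.radians(18.5)
wavelengths = {"R": 633e-9, "G": 532e-9, "B": 488e-9}

# Periods of the three components, from sin(theta_target) = lambda_l / Lambda_l
periods = {c: wl / np.sin(theta_target) for c, wl in wavelengths.items()}
print({c: round(p * 1e6, 2) for c, p in periods.items()})  # ~2.00, 1.68, 1.54 um

# Even-weight multiplexed relief, Eq. (4), with total amplitude h = 500 nm
h_total = 500e-9
x = np.linspace(0, 30e-6, 4096)
h = sum((h_total / 3) / 2 * (1 + np.cos(2 * np.pi * x / p)) for p in periods.values())
```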
The simulation of Figure 2E is also able to calculate the effects of having different amplitude weights h l . Additionally, the presence in the pattern of the other diffraction orders predicted by the reciprocal lattice should be noted. In principle, these orders could be spatially filtered out because they propagate at different angles with respect to the design RGB direction. However, their presence should not be neglected in the definition of the operating performance of real devices based on the multiplexed design because, even if they do not compromise the overall device functionality, they necessarily reduce the DE of the grating in the target diffraction order.

Large-Scale Dynamical and Multiplexed Azopolymer Gratings

To experimentally implement the designed multiplexed RGB grating over large scales, we realized a switchable two-beam interference setup able to sequentially irradiate an azopolymer film with p-polarized sinusoidal intensity interferograms having accurately controlled periodicities. This illumination configuration, largely used for efficient surface relief inscription in azopolymers, [26,31,[44][45][46][47] has been demonstrated to provide very accurate structural control and high inscription efficiency for our azopolymer. [25] The design principle of our setup is to have a stable interference system, in which the desired interferogram periodicity can be chosen by alternately selecting one pair of interfering beams among three possible configurations with minimal mechanical movement, while simultaneously monitoring the dynamical diffraction behavior of the developing surface grating by means of a diffracting probe beam. The schematic representation of the setup is presented in Figure 3A. A horizontally polarized beam from a solid-state laser (Cobolt Calypso) at 491 nm was divided a first time by a 70:30 (R:T) beamsplitter (BS1 in Figure 3A). The direction of the beam reflected by BS1 (Beam 0) defines the optical axis of the system. Before impinging orthogonally over an area of ≈2 mm in diameter on the azopolymer surface placed in the sample plane, Beam 0 was divided a second time by a 50:50 beamsplitter (BS2). To produce the three interferograms of different periodicity in the sample plane, Beam 0 was alternately recombined with one of the three other beams (namely, Beam 1, Beam 2, and Beam 3) represented in Figure 3A. The angle γ i between Beam 0 and each of the three beams was accurately controlled by means of micrometric rotating mirrors. A movable mirror (MM in Figure 3A) was used to switch between the configurations with Beam 1 and Beam 3. Tunable neutral density filters were finally used to equilibrate the incident intensity for each beam and improve the visibility of the p-polarized interference pattern in the sample plane. A constant total average intensity of 0.14 W cm −2 in the interferogram was used in the experiments. To monitor in real time the diffraction produced by the developing surface relief gratings during the inscription process, a horizontally polarized He-Ne laser beam at a wavelength of 633 nm was sent at normal incidence (transmitted through BS1 and BS2) onto the structured sample area. Three photodiodes (PD1, PD2, PD3), properly placed along the directions predicted by Bragg's law, were used to record the time evolution of the probe light diffracted into the +1 order for each of the three periodicities.
DE was calculated by dividing the time-dependent photodiode signal produced by the evolving grating by the signal transmitted through the flat sample surface before starting the writing process. A notch filter, placed right after the sample, discarded the light of the writing beam from the detection space. An additional circularly polarized collimated beam from a diode laser at the wavelength of 405 nm (referred to as the assisting beam), with intensity ≈0.4 W cm −2 and incident on the sample from the substrate side, was also included in the structuration process. This beam, highly absorbed by our azopolymer, [34,38] improves the grating inscription rate by redistributing the orientation of the azobenzene molecules, which otherwise tend to be realigned perpendicular to the polarization of the writing beams, with a resulting gradual reduction of the absorption probability and of the overall relief inscription efficiency. [26,[48][49][50] We extensively studied the influence of this beam on the writing dynamics of our azopolymer in our recent work. [16] Figure 3B-D shows the diffraction curves recorded in 240 s inscription experiments of the gratings G1, G2, and G3, realized to have the periodicities Λ l (l = 1, 2, 3) calculated above. The approximately linear behavior observed in the rise of the first-order diffraction efficiency can be used to control the height of the surface relief gratings by properly tuning the exposure time. [16,46] Additionally, the stable signal recorded even when the writing interferogram is switched off (Figure 3B-D) demonstrates a diffraction behavior dominated by the stable surface relief grating, with only small contributions from the possible birefringence grating typically observed in the interference-based photostructuration of azopolymers due to the photoalignment of the chromophores. [19,26,31] Figure 3E-J reports the atomic force microscope (AFM) images and the corresponding topographic profiles of the three azopolymer surfaces at the end of the grating inscription process. The periodicities of the sinusoidal gratings measured by AFM were in perfect agreement with the design (see also Figure S3, Supporting Information), confirming the highly accurate control over grating periodicities achievable in our switchable interference configuration. In the experimental configuration of Figure 3A, both reconfigurable and multiplexed large-scale surface relief gratings can be easily realized and tuned in real time by using the monitored diffraction of the probe beam as an indirect measure of the surface relief amplitude. Figure 4 presents the results of a dynamical experiment in which the grating G2 was inscribed, erased, and re-inscribed on the surface before a multiplexed grating was realized by adding to it the grating G1 in a successive exposure step. The dynamical evolution of the surface was characterized by simultaneously monitoring the signals of the two photodiodes PD2 and PD1, detecting the +1 diffraction order of G2 and G1, respectively. The diffraction curves are reported in Figure 4A, together with the AFM micrographs (in the insets) of the azopolymer surface after the first writing of G2 (i), after its erasure (ii), and after its rewriting (iii), which clearly show the connection between the surface topography and its dynamical diffraction behavior. The erasure of the surface from (i) to (ii) was realized by means of the 405 nm laser beam in the configuration of Figure 3A, but with higher intensity (≈0.9 W cm −2 ), as characterized in detail in our previous work.
[16] At the instant (iii), the interference pattern was switched to the configuration for G1, so that a multiplexed grating with the geometry G2 + G1 started to develop on the azopolymer surface. The exposure time in the final step was chosen to make the diffraction efficiencies for G2 and G1 approximately the same (difference less than 1%), which, according to the diffraction analysis in this surface modulation regime, would also correspond to approximately equal weights h 1 and h 2 for the sinusoidal components of the multiplexed grating in Equation (4). This is confirmed by the AFM analysis presented in Figure 4B, where a very good agreement between the experimental topographic profile and the theoretical profile, obtained by overlaying the two sinusoidal functions with the experimental periodicities and equal amplitudes, is observed. The possibility of empirically selecting the appropriate exposure time to realize a target balance in diffraction efficiencies (in our case, approximately equal DE) is a clear advantage of direct dynamic efficiency monitoring during grating inscription, which avoids a priori calibration of the surface relief heights with respect to the exposure time. This calibration has been demonstrated to be a difficult task for 1D topographies realized as sequences of exposures on azopolymer films, [19,51] because pre-existing grooves on the surface can affect the inscription efficiencies for gratings in the following steps of the multiplexing sequence. Additionally, we also observed a possible dependence of the inscription efficiency in subsequent exposures on the specific order of the sequence (see also Figure S5, Supporting Information), which further weakens the feasibility of an approach based on a priori grating depth calibration for the realization of reliable and repeatable multiplexed gratings.

Figure 4. Reconfigurable and multiplexed azopolymer surface relief gratings (SRGs). A) Dynamical diffraction curves recorded, by the photodiodes PD1 and PD2, during a surface reconfiguration experiment in which the grating G2 is first inscribed, erased, and re-inscribed (steps i to iii) and then combined with the grating G1 to realize a multiplexed G2 + G1 grating. Insets in (A) show the AFM micrographs and profiles of the surface at the corresponding reconfiguration steps (scale bars 5 μm). B) AFM micrograph of the multiplexed G2 + G1 grating. Exposure times for G2 and G1 were 300 and 270 s, respectively. C) Comparison between AFM and theoretical profiles of the multiplexed grating.

RGB Grating on Azopolymer Film

For the realization of the designed multiplexed RGB grating, three sequential exposures of the polymer film were used. In our experiment, we aimed not only at achieving the correct superposition of sinusoidal reliefs that produces the common (white) diffraction spot for the design wavelengths simulated in Figure 2E, but also at realizing multiplexed structures with controllable relative weights of the superimposed components. For the latter goal, real-time monitoring of the probe-beam DE for the three gratings and empirical tuning of the exposure time for each component were used. Figure 5A shows the time-evolving diffraction curves recorded during the inscription of a RGB grating, designed to have approximately equal final DE in the first orders of the superposed sinusoidal reliefs at λ = 633 nm, which again directly correlates with having approximately equal amplitude weights in the multiplexed structure.
In the inscription process described in Figure 5A, after the first structuration step (grating G2) lasting until the instant t 1 , the interference configuration was switched to the second one (grating G1) and the irradiation of the sample continued until the empirically selected instant t 2 , when the DE for the second grating reached approximately the same maximum level as in the first exposure step. At that instant, the last grating component (grating G3) was superimposed in the third illumination step, which continued until all three first-order diffraction efficiencies were similar at the time t 3 . The AFM micrograph of the final azopolymer surface is shown in Figure 5B, while the comparison of its experimental profile with the theoretical profile (calculated as an equal-weight superposition of the sinusoidal components) is presented in Figure 5C. Similar to the analysis of the two-component multiplexed grating in Figure 4B,C, the empirical tuning of the exposure times in the multiplexed superposition by real-time diffraction monitoring produced, also for the three-component RGB grating, an experimental profile in very good agreement with the target one. It should be noted that a similar exposure sequence, with a priori definition of the exposure times [19] (for example, equal exposure times for all G i ), provided a worse structural result in terms of the final component balance (see Figure S5, Supporting Information). Finally, the diffraction pattern produced by the azopolymer multiplexed RGB grating under simultaneous irradiation with three collinear laser beams at the design wavelengths (see Experimental Section) is shown in Figure 5D. The spatial distribution of the diffraction orders quantitatively matches the simulated pattern calculated for the ideal multiplexed surface in Figure 2 (see also Supporting Information), with the presence of a white diffraction spot in which the three diffraction orders at the three colors ( Figure 5E) are angularly superimposed. This confirms the validity of the diffraction analysis, design principle, and experimental implementation we used for the realization of the RGB azopolymer grating, which could be extended to other multiplexed large-scale diffraction devices, realized as superpositions of sinusoidal surface reliefs with controlled periodicities and relative amplitude weights.

Conclusions

In this work, we used sequential inscription of sinusoidal surface relief gratings on the surface of an azopolymer film to realize diffraction gratings with overlaid morphologies, having engineered structural and chromatic behavior. A multiplexed grating that diffracts polychromatic light in the same direction has been designed from the accurate analysis of the results that scalar diffraction theory provides for light modulation from dielectric surfaces. The periodicity of the superimposed gratings plays the crucial role in the definition of the diffraction pattern produced by the multiplexed grating: excellent agreement between theory and experiments has been obtained by taking advantage of a stable switchable interference setup realized to inscribe SRGs with accurate periodicity control. Real-time diffraction monitoring has been used to optimize the superposition of the single sinusoidal components in the experiment, providing an empirical but powerful approach for the tuning of the relative sinusoidal weights of the superposition.
Our results could be used to engineer the perceived color saturation in RGB diffractive devices designed as multiplexed diffraction gratings, offering new paths for future flat and lightweight optical components.

Experimental Section

Azopolymer Synthesis, Characterization, and Film Preparation: The material used in this work for the realization of dielectric surface relief gratings is an azobenzene-containing polymer (azopolymer) in the amorphous state. The details about the synthesis and the structural, thermal, and optical characterizations have been extensively reported in previous works. [25,34,38] The solution for film fabrication was prepared by dissolving 70 mg of the polymer in 0.50 mL of 1,1,2,2-tetrachloroethane and filtered through 0.2 μm PTFE membrane filters. Amorphous thin films were prepared by spin coating the solution on 24 × 60 mm cover slides at 300 rpm for 4 min, obtaining a typical film thickness of 1.0 ± 0.1 μm. Before the photostructuration experiments, the samples were kept under vacuum at room temperature for 24 h to remove solvent traces. The refractive index of the fabricated film was measured via ellipsometry. The measured values at the device operating wavelengths (633, 532, 488 nm) are: n 633 = 1.70; n 532 = 1.74; n 488 = 1.78. After synthesis and holographic structuring, the azopolymer samples were stored at room temperature. The topographic and optical analyses were repeated several months after fabrication, showing no degradation of either the material or the surface structure over time.

Morphological Surface Characterization: The topographic analysis of the structured surfaces was performed by AFM (WITEC Alpha RS300) operating in tapping mode with a cantilever of 75 kHz resonance frequency. Analysis and processing of the AFM data were accomplished by means of the open-source software "Gwyddion."

Simulations of Diffraction Patterns: To simulate the diffracted field from the structured azopolymer surfaces, incident plane waves of unitary amplitude were considered. In the calculations, the incident field is assumed to be phase modulated in the plane at z = 0, in which the structured dielectric surface is placed. The field transmitted just behind the phase mask is equal to U out (x,y) = e iφ(x,y) . The diffracted field U(x, y, z) in each transverse plane behind the surface, orthogonal to the optical axis z, is evaluated by solving the scalar Helmholtz equation (see also Supporting Information), considering the Rayleigh-Sommerfeld diffraction integral in the Fresnel approximation: [10,52]

U(x, y, z) = (e ikz /iλz) ∫∫ U out (x′, y′) exp{ (ik/2z) [ (x − x′) 2 + (y − y′) 2 ] } dx′ dy′

A discretized form of this integral was implemented in a MATLAB script. In the case of RGB diffraction, the Fresnel integral was calculated for each wavelength and then the results were summed according to the optical field superposition principle.

Experimental Imaging of RGB Diffraction Patterns: The experimental diffraction patterns produced by the azopolymer single (see Figure S4, Supporting Information) and multiplexed RGB gratings were obtained by illuminating the structured sample with three collimated and collinear laser beams, incident normally from the substrate side. The beams had the same wavelengths as in the theoretical analysis (λ 1 = 633 nm, λ 2 = 532 nm, and λ 3 = 488 nm) and were produced by three different laser sources (a He-Ne laser, a frequency-doubled Nd:YVO4 laser, and a Cobolt Obis diode laser, respectively), which were made to propagate along a common optical axis by a proper combination of mirrors and beam splitters.
The three beams were tuned to have approximately the same intensity before the sample (9.5 × 10 −3 W cm −2 ). The low intensity ensures that the green and blue light (which are absorbed by the azopolymer) do not significantly affect the surface pattern. The transmitted diffraction pattern was collected on an opaque screen placed at 205 mm from the sample and recorded by a color reflex CCD camera.

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
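To make the simulation procedure described in the Experimental Section more concrete, the following Python sketch propagates the phase mask U out = e^{iφ} with a discretized Fresnel propagator evaluated through FFTs. The authors report using a MATLAB script for this step, so this is only an illustrative re-implementation under stated assumptions; the grid size, sampling, and propagation distance are arbitrary choices, and the three colors are combined as an incoherent sum of intensities.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Fresnel (paraxial) propagation of a sampled field u0 over a distance z.

    Uses the Fresnel transfer function in the spatial-frequency domain,
    H(fx, fy) = exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2)).
    """
    k = 2 * np.pi / wavelength
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: multiplexed RGB phase mask (n = 1.7, h_total = 500 nm) propagated to z = 50 mm
dx = 0.2e-6
x = np.arange(2048) * dx
periods = [2.00e-6, 1.68e-6, 1.54e-6]
h = sum((500e-9 / 3) / 2 * (1 + np.cos(2 * np.pi * x / p)) for p in periods)
h2d = np.tile(h, (2048, 1))                      # 1D grating extended along y
pattern = np.zeros_like(h2d)
for wl in (633e-9, 532e-9, 488e-9):              # incoherent sum of the three colors
    phi = 2 * np.pi / wl * (1.7 - 1.0) * h2d
    u = fresnel_propagate(np.exp(1j * phi), wl, z=50e-3, dx=dx)
    pattern += np.abs(u) ** 2
```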
The hydromechanical behaviour of unsaturated loess in slopes, New Zealand

Unsaturated loess and loess-derived soils in the Akaroa harbour area of New Zealand are vulnerable to shallow landsliding during rainfall events. Laboratory testing and long-term field instrumentation have been conducted to characterise the water retention and unsaturated shear strength of these materials, and to better understand temporal changes in slope stability. Laboratory test results indicate that the same soil-water characteristic curve can be applied to both recompacted and intact loess when suction is normalised by the air entry value. Conversely, the stress-strain behaviours of the recompacted and intact loess were different due to the unique microstructure of the intact loess that contributes to its shear strength. Long-term field instrumentation data showed that, for the duration of the monitoring period, the hydraulic state of the loess remained on a scanning curve. These data, combined with the laboratory testing, confirm that temporal variation in slope stability can be attributed to seasonal variability in suction and its contribution to unsaturated shear strength. These hydromechanical variations, resulting from wetting and drying, are affected by the rainfall intensity and duration that occur at the site.

Introduction

Aeolian loess deposits cover 10 % of the South Island of New Zealand [1][2][3][4][5]. When unsaturated, these soils are stable in near-vertical cut slopes and tend to exhibit a strength typical of weak rock [6,7]. During rainfall, water infiltrates into the soil mass, increasing the water content and decreasing suction, leading to a reduction in shear strength and the initiation of shallow slope instabilities. Previous research has identified a relationship between a shear strength reduction and an increase in water content, and how that gives rise to slope instability [7][8][9]. However, these studies have not quantified the direct effect of suction on unsaturated shear strength and have predominantly focused on the behaviour of recompacted loess, overlooking the contribution of the intact microstructure to the strength and hydromechanical properties. Laboratory testing has been conducted to determine the soil-water characteristics of intact and recompacted unsaturated loess from Banks Peninsula, New Zealand. These data are presented alongside triaxial test results on saturated and unsaturated samples. Fractal properties of the loess are used to describe the soil-water characteristic curve [10][11][12][13]. In situ field instrumentation data show how the hydraulic state of a natural loess slope changes when subject to climatic wetting and drying periods. These data indicate that hydraulic wetting occurs when the rainfall intensity and duration are >1.5 mm/hr and >400 min, respectively.

Field instrumentation

The Akaroa Harbour region was selected for field instrumentation due to the frequent occurrence of rainfall-triggered slope failures and the increasing demand for rural and residential development. In this region the loess is ≤ 30 m thick and covers Miocene age volcanic rock. Field instrumentation was installed across a ~ 95 m 2 sloped monitoring site in the Akaroa harbour region and used to examine the hydraulic response of the loess, while in situ, to climatic wetting and drying processes. The volumetric water content (θ) of the loess was measured using twelve time domain reflectometry (TDR) probes which were calibrated for the loess prior to deployment.
Soil suction and temperature were measured using twelve dielectric water potential sensors. These sensors measure the dielectric permittivity of two ceramic disks when in hydraulic equilibrium with the soil, allowing estimation of the suction in the surrounding soil matrix. These sensors were selected for their field durability and measurable suction range. They are limited to measuring suctions larger than 9 kPa (the air entry suction of their ceramics) and have been calibrated by the manufacturer for drying conditions only. The TDR and dielectric water potential sensors were installed at the base of individual machine-augered holes that were backfilled with loess. Instruments were installed in pairs (one TDR sensor adjacent to one dielectric water potential sensor) in four horizontal arrays at depths ranging between 0.5 and 2.0 m below ground level (bgl). The horizontal and vertical distribution of the sensors allowed the variability of the hydraulic response across the site and the progression of the wetting fronts to be examined. Rainfall was measured using a tipping bucket rain gauge with a 0.2 mm resolution. The instrumentation system was powered by one deep cycle battery charged by a solar panel. Measurements from each sensor were recorded by the data logger at ten-minute intervals from November 2017 to April 2019. As expected, the field monitoring data showed that reductions in soil suction coincided with increases in volumetric water content when rainfall percolated into the soil (Figure 1, Figure 2). Periods of drying, where the soil water content decreased and suction increased, occurred as ground temperatures increased and rainfall was less frequent (summer). The greatest range in hydraulic state (θ, s) was observed < 0.5 m bgl, where the soil profile was increasingly exposed to evapotranspiration, precipitation and changes in air temperature and humidity. Conversely, less fluctuation was observed in the hydraulic state of loess > 1.0 m bgl across the monitored seasons. Typically, progression of the wetting front during rainfall events was limited to < 2.0 m bgl. Minimal (if any) wetting front change was observed > 2.0 m bgl for specific events. Little variation in hydraulic state was observed throughout the soil profile during the winter months (May to August) when θ remained relatively high (18-20 %). In summer (December to February) θ reduced to 11 %.

Soil water characteristic curve

The soil-water characteristic curve (SWCC) was determined using two methods: pressure plate testing (ASTM D6836-16) and WP4C dewpoint potentiometer testing. Due to the low total soluble salts (TSS) measured in the loess (TSS < 0.05%), osmotic suction is negligible and matric suction and total suction may be taken as being equal. Both intact loess and recompacted loess samples were tested to determine the SWCC. Compacted samples were prepared by moist tamping to a target density range of 1.5 to 1.8 g/cm 3 , which is representative of typical densities observed in Canterbury loess deposits [5]. Intact loess samples for laboratory testing were carved from larger block samples that had been hand-excavated from a loess cutting near the instrumentation site. Results from both pressure plate and dewpoint testing were combined to form the SWCC ( Figure 3). Good agreement was observed between the two test methods. The linearity observed in the double logarithmic θ − s plane confirms that the SWCC, and thus the pore size distribution, can be characterised using the mathematics of fractals [12,14].
Notably, the fractal nature allows the influence of the void ratio (e) on the SWCC to be removed by normalising suction using the air entry value [15]. The air entry value for Akaroa loess can be related to e by an empirical fit, and an analogous expression defines the air expulsion suction.

Fig. 3. Soil-water characteristic curve for intact and recompacted loess.

From Figure 3, the main drying curve of the SWCC, the main wetting curve bounding the wetting data, and the top scanning line are each defined by straight lines in the double logarithmic θ − s plane. In this study, the slope of the main wetting curve (−0.27) is less than that of the main drying curve (−0.33). This is unusual in that, for many soils, the main drying and wetting curves have the same slope in the double logarithmic θ − s plane. In practice this means that, as the loess reaches a drier hydraulic state, the scanning path required to move between the drying and wetting curves becomes shorter.

Shear strength testing

Triaxial tests were performed on both intact and recompacted samples, each approximately 50 mm in diameter and 100 mm in height. A Bishop-Wesley triaxial apparatus modified for testing saturated or unsaturated samples was used. The intact samples were hand carved out of the loess blocks. Six tests were performed on saturated samples, three intact and three recompacted, under drained conditions. Six tests were performed on unsaturated samples, three intact and three recompacted, while holding suction constant at either 200 kPa or 290 kPa. The suction was attained by subjecting the samples to drying paths from an initially saturated condition so that the hydraulic states were located on the top scanning curve. Suction was applied using the axis translation technique and controlled by applying a water and air pressure differential across the sample. Another three tests were performed on unsaturated intact samples while holding the water content constant and equal to the in situ field value, which was approximately 2.5%. The initial suctions in these samples were measured using the WP4C device and were found to be approximately 125 MPa. The total confining stresses σ 3 used varied from 20 kPa to 150 kPa. The variations of deviatoric stress q with axial strain are plotted in Figure 4. As expected, the stress-strain curves show an increase in maximum deviatoric stress as suction increases. Brittle responses were observed for the intact samples when suction was approximately 125 MPa. Also, when suction was 200 kPa or 290 kPa, or when saturated, the intact loess samples exhibited a tendency to strain-soften while most of the recompacted samples exhibited a tendency to strain-harden. Bishop's (1959) effective stress for unsaturated soils was used to account for the contribution of suction in the unsaturated triaxial testing:

σ′ = (σ − u a ) + χ(u a − u w ) (6)

where σ′ is the effective stress, σ is the total stress, u a is the pore air pressure, u w is the pore water pressure, s = u a − u w , and χ is the effective stress parameter (χ = 1 for saturated soils). The shear strength (τ) may then be approximated using:

τ = c′ + σ′ tan φ′ = c′ + (σ − u a ) tan φ′ + χ s tan φ′ (7)

in which φ′ is the friction angle and c′ is the cohesion. The quantity c′ + χ s tan φ′ can be treated as an equivalent cohesion, combining the true cohesion with a suction-dependent component that varies in magnitude. The location of the hydraulic state on the SWCC, and thus the drying or wetting history, must be considered in the determination of χ. The relationship developed by [16] is used here when the hydraulic state is located on the main drying curve.
For that case, χ is determined from the suction normalised by the air entry value, raised to the power Ω, for which Ω = −0.55 is assumed as it is a best-fit value for many soils. When the hydraulic state is located on the top scanning curve, χ is determined using an analogous expression (Equation (9)), and when on the main wetting curve, using a corresponding expression with its own exponent. To ensure compatibility between the expressions defining the SWCC and those defining χ, the ratios of the SWCC slopes to the corresponding values of Ω must be equal across the drying, scanning and wetting expressions [14], leading to Ω values of −0.45 and −0.2 for the remaining expressions. Using these relationships, the contribution of suction to the shear strength of the loess was found to be adequately defined using Bishop's theory of effective stress. These data informed the derivation of peak and critical state c′ and φ′ values for the saturated intact and remoulded specimens ( Table 1). The contribution of suction to the shear strength of the loess highlights the association between seasonal changes in slope stability and the wetting and drying of the in situ soil mass. Characterisation of the soil's mechanical response to wetting and drying can be used to inform preliminary analysis of the implications of wetting events on loess slope stability.

Hydromechanical state of in situ loess

Comparison between laboratory and field data indicates that the hydraulic states of the in situ loess remained on scanning curves throughout the monitoring period ( Figure 5). Figure 5 presents field data measured at 0.5 - 1.0 m below ground level during drying periods between 6 January and 20 February 2018, and between 20 January and 22 February 2019. These data are normalised by the air entry value and compared with the laboratory-determined SWCC. Soil density measurements of field samples were used to inform the void ratios at each probe depth along the four instrumentation arrays. Data from the sensors installed at 2.0 m have not been included because of the minimal variation in the hydraulic state observed in them. The field and laboratory data sets show that the slope of the field scanning line is the same as the slope of the top scanning curve defined from the laboratory data. Furthermore, during the monitoring period, the hydraulic state of the in situ loess did not extend beyond the main drying curve derived in the laboratory. Conversely, the field hydraulic data did travel to the left of the laboratory-determined main wetting curve. Several factors may contribute to the extension of the field data to the left of the main wetting curve. Firstly, the influence of the heterogeneity of the loess in situ (e.g., due to root holes and defects) may not be captured in the laboratory testing due to the small sample sizes used. Secondly, there is a possible presence of entrapped air in the in situ loess that is not removed by percolation of rainfall. This means that the way in which air occupies pore spaces in the field may differ from the laboratory-controlled samples. In addition, while the field data measurements have been interpreted as point data, the hydraulic states the sensors measure are influenced by a large soil mass undergoing non-uniform hydraulic changes. Finally, the accuracy of the sensor calibrations during the early stages of a drying process (immediately following a wetting process) is unknown. As such, if the ceramic calibrations are invalid for wetting, then they are likely to be invalid for the initial stages of subsequent drying, as the hydraulic state at the commencement of the drying has a combination of suction and volumetric water content that cannot be reliably determined.
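Referring back to Equations (6) and (7) and the main-drying relationship for χ, the following Python sketch shows how the pieces combine in a single strength estimate. The power-law form χ = (s/s_ae)^Ω with Ω = −0.55 follows the main-drying relationship referenced in the text, while the numerical inputs (air entry value, c′, φ′, stresses) are placeholder values chosen here for illustration, not measured properties of the Akaroa loess.

```python
import math

def chi(s, s_ae, omega=-0.55):
    """Effective stress parameter on the main drying curve: chi = (s / s_ae)^omega,
    capped at 1 for suctions at or below the air entry value (saturated behaviour)."""
    return min(1.0, (s / s_ae) ** omega)

def shear_strength(sigma, u_a, u_w, c_eff, phi_eff_deg, s_ae):
    """Unsaturated shear strength, Eq. (7): tau = c' + (sigma - u_a) tan(phi') + chi * s * tan(phi')."""
    s = u_a - u_w                              # matric suction
    tan_phi = math.tan(math.radians(phi_eff_deg))
    return c_eff + (sigma - u_a) * tan_phi + chi(s, s_ae) * s * tan_phi

# Placeholder example (kPa and degrees): net normal stress 100 kPa, suction 200 kPa,
# assumed s_ae = 9 kPa, c' = 5 kPa, phi' = 28 deg (illustrative values only)
print(shear_strength(sigma=100.0, u_a=0.0, u_w=-200.0, c_eff=5.0,
                     phi_eff_deg=28.0, s_ae=9.0))
```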
Climatic variables and soil hydraulic change

Field instrumentation data from periods of hydraulic change have informed the approximation of the climatic thresholds that give rise to wetting of this loess slope. This was achieved by temporally categorising the field monitoring data into periods of hydraulic drying, wetting or no change. This allowed the corresponding climatic variables during these periods, such as rainfall intensity, rainfall duration, total rainfall, and temperature, to be observed when changes to volumetric water content and matric suction occurred ( Figure 6). These data indicated that wetting was evident at the depth of the shallowest sensors (~ 0.5 m) when the average rainfall intensity was > 1.5 mm/hr or the rainfall duration was > 400 minutes. When the average intensity and duration were below these thresholds, the distinction between wetting and drying was less clear. It was observed that the average rainfall intensity provided a clearer threshold than the maximum rainfall intensity, due to the natural variability of rainfall intensity observed during a rainfall event. For example, during the soil wetting that occurred on 20 February 2018 (Figures 1 and 2), suction reduced from > 3000 kPa to < 20 kPa at 0.5 m bgl. Correspondingly, the volumetric water content (θ) increased from 12 % to 21 %. During this rainfall event the rainfall intensity peaked at 9.6 mm/hr and averaged 3.6 mm/hr, and the maximum duration of precipitation during the wetting event was 1340 min. Conversely, rainfall with the same maximum intensity was observed between 15 March 2018 and 27 April 2018, but negligible change to θ and suction was observed. This may be attributed to the shorter duration of rainfall during this period (420 min) and a lower average rainfall intensity (1.6 mm/hr). There was no clear threshold for total rainfall that could be correlated to the occurrence of wetting during the monitoring period. Wetting generally occurred when the maximum air temperature was < 15 °C. When the maximum air temperature was between 15 °C and 25 °C, overlap is observed between wetting and drying periods, and periods of no hydraulic change. Higher air temperatures coincided with a decrease in θ and an increase in suction due to evapotranspiration. During these times an upward percolation of water occurred which reduced θ and increased suction at shallower depths. The θ of the deeper loess did not reduce concurrently. During periods of drying some rainfall was recorded, however a wetting front did not propagate. This may be due to evapotranspiration of water from the surface vegetation and a low unsaturated hydraulic conductivity as suction increases.

Fig. 6. Changes in hydraulic state (drying, wetting) compared with climatic variables including average rainfall intensity, maximum rainfall intensity, maximum rainfall duration, total rainfall, and air temperature.

Conclusions

The unsaturated behaviour of loess from the Akaroa Harbour region, Banks Peninsula was investigated through laboratory testing and long-term monitoring of field-based instrumentation. The combination of these research methods enabled the soil-water characteristics of loess to be compared at the laboratory scale and in an in situ loess slope. Agreement between the laboratory test results for intact and recompacted loess and the in situ loess monitoring results was observed, considering the potential limitations of the sensors used to make the field suction measurements and the soil variability which may be present at the field site and sensor locations.
The combination of these data indicates that the in situ loess remained on a scanning curve for the duration of the two-year monitoring period. Changes in the hydraulic state of the in situ loess during the monitoring period highlighted the contribution of a range of climatic variables to wetting of the soil mass. It was observed that the average rainfall intensity had a stronger correlation with the occurrence of wetting than the maximum rainfall intensity. In general, wetting of the in situ loess occurred during rainfall events where the average rainfall intensity was > 1.5 mm/hr or the rainfall duration was > 400 minutes.
Search for a generic heavy Higgs at the LHC

A generic heavy Higgs has both dim-4 and effective dim-6 interactions with the Standard Model (SM) particles. The former has been the focus of LHC searches in all major Higgs production channels, just as for the SM one, but with negative results so far. If the heavy Higgs is connected with Beyond Standard Model (BSM) physics at a few TeV scale, its dim-6 operators will play a very important role - they significantly enhance the Higgs momentum and reduce the SM background in a special phase space corner to a level such that a heavy Higgs emerges, which is not possible with dim-4 operators only. We focus on the associated VH production channel, where the effect of the dim-6 operators is the largest and the SM background is the lowest. The main search regions for this type of signal are identified, and substructure variables of boosted jets are employed to enhance the signal over the backgrounds. The parameter space of these operators is scanned over, and the expected exclusion regions with 300 fb$^{-1}$ and 3 ab$^{-1}$ of LHC data are shown, if no BSM signal is present. The strategy given in this paper will shed light on a heavy Higgs which may otherwise be hiding in the present and future LHC data.

Effective couplings of a heavy Higgs

It is not very natural that the SM has only one fundamental scalar field - the Higgs field. If Nature really chooses this way, there must be something else unknown to us as yet. An alternative, and natural, way is that the 125 GeV Higgs boson discovered at the LHC [1]- [2] may be the lightest Higgs scalar field among many that have yet to be found. Heavy Higgs particles are predicted in many BSM theories, such as the two-Higgs-doublet models, the minimal supersymmetric extension of the SM, and the left-right symmetric models. In a multiple Higgs field theory, the original Higgs fields are Φ 1 , Φ 2 , … (in general, they can be in any allowed SU(2) L representation; for simplicity, we illustrate the case where all fields are doublets). The multi-Higgs potential will cause mixing among them to form the mass eigenstates. Let Φ h and Φ H be the two doublets containing the lightest (h) and next-to-lightest (H) neutral Higgs, respectively. The couplings to the SM gauge bosons will be scaled by the mixing compared with the SM gauge couplings. At leading order, the dim-4 operators can be written as

L (4) hWW = ρ h g m W h W µ W µ , L (4) hZZ = ρ h g m W /(2 cos 2 θ W ) h Z µ Z µ ,
L (4) HWW = ρ H g m W H W µ W µ , L (4) HZZ = ρ H g m W /(2 cos 2 θ W ) H Z µ Z µ ,

where θ W is the weak mixing angle, m W the W boson mass, and ρ h and ρ H are the scaling factors. In the simplest 2HDM example, we will have ρ h = cos(β − α), ρ H = sin(β − α). For a SM-like light Higgs, ρ h is not far away from 1. Generally, for a heavy Higgs H, there could also be dim-6 effective operators related to BSM physics at an even higher energy scale [3], suppressed by the scale Λ below which the effective Lagrangian holds. Λ is set to 5 TeV in this work, since BSM physics at this scale is in general hard to probe directly. Similar operators also exist for the SM Higgs h. As mentioned in [3], the dim-6 operators that are not constrained by precision electroweak (EW) data and relevant for the heavy Higgs are those built from the Higgs doublet, its covariant derivative, and the field strengths Ŵ µν and B̂ µν , with coefficients f W , f WW , f B and f BB , where B̂ µν = i (g′/2) B µν and Ŵ µν = i (g/2) σ a W a µν . After EW symmetry breaking, these operators generate the effective Lagrangian terms involving the heavy Higgs and the W/Z bosons (Eq. 4), where s = sin θ W and c = cos θ W .
Similar terms exist for the Hγγ and HZγ vertices, but they are relatively suppressed by factors of s and s 2 . In addition, to simplify the parameter space, we also neglect terms of O(s 2 ) and O(s 4 ) in Eq. 4, which involve the coefficients f B and f BB , as done in Ref. [3].

Main search channels

Heavy Higgs bosons have been intensively searched for at the LHC in the H → ZZ → 4ℓ decay [4]- [5] and in the diboson final state [6]- [7], with negative results so far. The main production channel is gluon-gluon fusion (ggF). It is reasonable to assume that the Yukawa coupling between the heavy Higgs and fermions is small, or the Higgs is even fermiophobic, so that it can escape direct detection in the ggF channel. The remaining production channels are associated VH (V = W/Z) and Vector Boson Fusion (VBF), which only involve the interactions between the heavy Higgs and the W/Z bosons. Different from [3,8], where final states with just one lepton and multiple jets are used, we start with at least two leptons. Specifically, the following channels are investigated in this work:

• V H → ℓ + ℓ − j j j j, where the two leptons (ℓ) are of opposite-sign (OS) charge and same flavor (e or µ), originating from a Z boson decay, and a number of jets is denoted by j. This is called the 2ℓ OS channel.
• V H → ℓν ℓ + ℓ − j j, where one pair of leptons originates from a Z boson decay. This is called the 3ℓ channel.
• V H → ℓ ± ν ℓ ± ν j j, where the two leptons are of same-sign (SS) charge. This final state originates from the WH → W ± W ± W ∓ decay mode. This is called the 2ℓ SS channel.

In the 2ℓ OS and 3ℓ channels, the heavy Higgs mass can be reconstructed. This is not possible in the 2ℓ SS channel, but the signal sensitivity is the highest in this channel due to the low background. In principle, the channel WH → W ± W ± W ∓ → 3ℓ 3ν can also be used, and we can additionally require no pair of leptons with OS charge and same flavor to suppress the Z+X background. However, the signal yield of this channel is only about 10% of that in the 2ℓ SS channel. Therefore, we do not consider this channel here. We also checked the V H → ℓν j j j j channel as used in [3,8]. Although the signal yield is about a factor of 10 larger than in the 2ℓ OS channel, the W+jets background is also about ten times larger than Z+jets, and tt → W(ℓν)b + jets can be another major background even after the b-jet veto. Therefore, the sensitivity of V H → ℓν j j j j is not expected to be much higher than that of the 2ℓ OS channel, which is the least sensitive of the three channels considered in this work. In general, the cross section of VBF is about an order of magnitude higher than that of VH in the high mass region, so it seems that VBF is the best channel to look for a heavy Higgs, and to suppress backgrounds through the presence of leptons in the final state, the decay modes H → ZZ → ℓℓ j j and H → ZZ → 4ℓ can be used. However, the former is accompanied by large SM backgrounds, and the yield of the latter is too small to be detected in the high momentum region. Therefore, we focus on the VH production mode, with the heavy Higgs decaying into two W/Z bosons, and final states with at least two leptons from the three bosons' decays, as listed above. Figure 1 shows the leading order (LO) cross section of the signal with different parameters as a function of the heavy Higgs mass. It is evident that when dim-6 operators are present, both the VH and VBF production cross sections increase significantly, and VH increases much more than the VBF process. In addition, some traditional VBF variables such as ∆η j j may stop working for dim-6 operators.
A comparison of two benchmark signals in the VBF H → ZZ → 4ℓ channel is made in Fig. 2, one with and another without the dim-6 operators. The presence of these operators enhances the Higgs p T , but also makes ∆η j j background-like. Both signals have a yield of no more than 0.5 events at 300 fb −1 after the object selection cuts, since their cross sections are already at the O(10 −3 − 10 −2 ) fb level before any detector-level cuts, as indicated in the caption of Fig. 1(b). As a result, the 4ℓ channel significances are much lower than those of the 2ℓ and 3ℓ channels, and we do not include it in the final result. The extra derivatives in Eq. 4 will not only increase the heavy Higgs production cross section substantially, but also give the heavy Higgs and the associated boson high momenta. Combined with the large Higgs mass, this means that all three bosons present in the process are boosted, which leads to boosted boson jets in the final state. We can use both the high p T and the substructure features of these jets to suppress the backgrounds. For large ρ H , the contribution from off-shell V H * production can also be sizable. The dilepton and leading-jet (with 70 GeV < m j < 150 GeV) p T distributions for VH production in the 2ℓ channel are shown in Fig. 3. Indeed, the bosons in signals with dim-6 operators have higher p T 's.

Simulation of signal and background events

The effective interactions in Eq. 4 are modeled with FeynRules [9] and passed to MadGraph5 [10] for the heavy Higgs production and decay, and the partons are showered and hadronized by Pythia8 [11]. The free parameters of this model are m H , ρ H , f W and f WW . In the 2ℓ channels, the single-boson background of ℓℓ plus up to four QCD partons and the diboson process ℓℓ + j j (where the two j's come from EW vertices) plus up to two QCD partons are generated at the matrix element level with MadGraph5 and matched to the parton showers with the MLM method [12]. The triboson process ℓℓ + 4 j (where the four j's come from EW vertices), tt with t → ℓνb, and ttV with two leptons from the decays of the top quarks or V (W/Z), are also generated with MadGraph5 (it is worthwhile to note that ATLAS sees evidence of the SM triboson process with partial 13 TeV data [13]). In the 3ℓ channel, the diboson process 3ℓ + ν plus up to two QCD partons is generated with MadGraph5 and matched to the parton showers with the MLM method. The triboson process 3ℓ + ν + j j (where the two j's come from EW vertices), and ttV with three leptons in the final state, are also generated with MadGraph5. Our triboson events include the off-shell effects of the bosons; thus the SM VBF process is also included. All background samples are showered and hadronized by Pythia8 too. Both signals and backgrounds are generated at LO in QCD and EW at √s = 13 TeV, and the PDF set NNPDF23LO [14] is used for all the samples. The events are afterwards passed through DELPHES [15], simulating the detector response of the ATLAS detector [16]. The tracking range is defined to be within |η| < 2.5, where η is the pseudorapidity. The electron tracking and identification are modeled within DELPHES. The minimum p T for an electron (muon) is 15 GeV (10 GeV). The normal jets are clustered with the anti-k t algorithm [17] with a cone parameter of 0.4. To account for the boosted bosons and the Higgs, anti-k t fat jets with a cone parameter of 1.0 are also used. If any jet (fat jet) overlaps with a lepton within ∆R < 0.4 (∆R < 1.0), where ∆R = √(∆η 2 + ∆φ 2 ), this jet (fat jet) is removed from consideration in the event. A jet and a fat jet should also have ∆R > 1.4 to be considered as non-overlapping.
The normal (fat) jets are required to have p_T > 30 GeV (p_T > 50 GeV) and |η| < 4.0.

Search in the 2ℓ OS and SS channels

To search for a heavy Higgs with boosted bosons in the 2ℓ OS channel, four signal regions are defined as shown in Tab. 1. The event topology is characterized by a high-momentum boson recoiling against two other bosons coming from the heavy Higgs decay, as schematically displayed in Fig. 5. In region (1), the associated Z → ℓℓ recoils against a high-momentum Higgs decaying into four jets (ℓℓ denotes the combined 4-vector of the two leptons). The momentum is so high that the four jets form a single fat jet (denoted by J). A parameterless k_t algorithm is run on the fat jet to exclusively cluster up to two subjets [18]. Exactly two such subjets are required, and the mass of each must be consistent with a vector boson. To further suppress backgrounds, the N-subjettiness variables τ_1,2 are used [18]. They are jet substructure variables calculated using exclusive k_t axes and are indicative of the subjet multiplicity in a parent jet. Region (2) has a topology similar to (1), except that one boson from the heavy Higgs forms a boosted boson jet (a single normal jet denoted by j_1, leading in p_T), and the other boson, with a lower p_T, splits into two normal jets (j_2 and j_3, the 2nd and 3rd leading in p_T; j_23 denotes the combined 4-vector of these two jets). In regions (3) and (4), the leading jet is the associated boson. One boson from the Higgs decay forms two jets (region 3) or a single jet (region 4), and the other decays into the dilepton pair. ∆R cuts are applied to impose the correct topology in each region. The distributions of τ_2/τ_1 for the boson jets j_1,2 in regions (2)–(4) are shown in Fig. 4. For signals with m_H = 300 GeV, the region definitions are similar to those for m_H = 600 GeV, but due to the lower Higgs mass, events contain fewer boosted boson jets; a slightly tighter mass-window cut is applied to the bosons, and region (4) is removed because of its poor signal significance. Conversely, for the m_H = 900 GeV signal, events contain a much larger number of boosted boson jets.

Search in the 3ℓ channel

In the 3ℓ channel, six signal regions are defined as shown in Tab. 3 and schematically displayed in Fig. 7. Regions (1)–(3) are characterized by a W(ℓν) boson recoiling against the heavy Higgs (ℓν denotes the combined 4-vector of the lepton and neutrino from the W), where one boson from the Higgs decays into the dilepton pair and the other forms a normal jet, a fat jet, or two normal jets (j_12 denotes the combined 4-vector of j_1 and j_2). Regions (4)–(6) are similar to (1)–(3), except that the roles of W → ℓν and Z → ℓℓ are swapped. The three leptons are required to have a net charge of ±1. For the 3e and 3µ final states, the opposite-charge lepton pair with the smaller ∆R is regarded as coming from Z → ℓℓ, while for the eeµ and µµe final states the correct combination is obvious. To suppress fake leptons from jets (not modeled in this work), which generally have low p_T, the lepton not coming from Z → ℓℓ is required to have p_T > 50 GeV. The momentum of the neutrino from W → ℓν is calculated from the E_T^miss vector and the W mass constraint. When the transverse mass satisfies m_T ≥ m_W, which happens only because of the E_T^miss resolution and the W boson width, p_z^ν is calculated as E_T^miss · p_z^ℓ/p_T^ℓ. When m_T < m_W, there are two solutions for p_z^ν, and the one with the smaller absolute value |p_z^ν| is chosen.
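As an illustration of the W-mass-constrained neutrino reconstruction just described, the following is a minimal NumPy sketch. The function name and input conventions are assumptions; when the discriminant of the quadratic is negative (the m_T ≥ m_W case), it falls back to the collinear scaling E_T^miss · p_z^ℓ/p_T^ℓ used in the text.

```python
import numpy as np

M_W = 80.4  # GeV, W boson mass used in the constraint

def neutrino_pz(lep_px, lep_py, lep_pz, met_px, met_py):
    """Solve m_W^2 = (p_lep + p_nu)^2 for the neutrino p_z (massless lepton and neutrino).
    Returns the solution with the smaller |p_z|; if the discriminant is negative
    (m_T >= m_W), fall back to p_z = E_T^miss * p_z^lep / p_T^lep."""
    lep_pt2 = lep_px**2 + lep_py**2
    lep_e = np.sqrt(lep_pt2 + lep_pz**2)
    met_pt = np.hypot(met_px, met_py)
    a = 0.5 * M_W**2 + lep_px * met_px + lep_py * met_py
    disc = a**2 * lep_pz**2 - lep_pt2 * (lep_e**2 * met_pt**2 - a**2)
    if disc < 0:
        return met_pt * lep_pz / np.sqrt(lep_pt2)
    sols = [(a * lep_pz + s * np.sqrt(disc)) / lep_pt2 for s in (+1.0, -1.0)]
    return min(sols, key=abs)
```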
For signals with m_H = 300 GeV or m_H = 900 GeV, the region definitions are unchanged, except that the boson p_T and mass-window cuts are adjusted, and the fractions of signal events in the different regions differ. In all channels, events are selected sequentially following the order of the region numbers (only events not selected in a previous region are considered for the next), so there is no overlap between regions. The candidate mass distributions in each channel are shown in Fig. 8 with the benchmark signal B as in Figs. 2–3. A good heavy Higgs candidate mass can be reconstructed in the 2ℓ OS and 3ℓ channels, while in the 2ℓ SS channel only the hadronic V boson mass can be shown. The right tail of the signal in the bottom plot of Fig. 8 is due to the mis-matched jets illustrated in Fig. 6(3), and a small W boson mass peak is also visible in the "other" component (dominated by tt̄V) of the background. As evident from Tab. 4, the 2ℓ SS channel provides the best sensitivity among all channels.

Sensitivity in the model parameter space

To extract the signal sensitivity, mass-window cuts are applied to the distributions shown in Fig. 8, and number counting is performed. The sensitivity is based on ratios of Poisson likelihoods, and toy distributions are generated for the background-only and signal+background hypotheses. In this work, three mass points are investigated: m_H = 300, 600, 900 GeV. Since ρ_h ≈ 1 from current Higgs measurements, ρ_H is expected to be small; hence ρ_H = 0.05 is taken as a benchmark value, and the two-dimensional parameter space of f_W and f_WW is scanned. Since the Higgs width is proportional to ρ_H² m_H³, the small ρ_H also makes the Higgs width small: with ρ_H = 0.05 and f_W = f_WW = 50, a Higgs of mass 900 GeV has a width of only 0.571 GeV. Therefore, the interference between the signal and the SM triboson background (which has the same final state) can be safely neglected. Assuming there is no heavy Higgs signal with large dim-6 operator coefficients, the 95% Confidence Level (CL) exclusion regions for the three Higgs masses are shown in Figs. 9–11, for two integrated luminosities, 300 fb⁻¹ and 3 ab⁻¹, combining the 2ℓ and 3ℓ channels. The bounds derived from the unitarity of gauge boson scattering amplitudes [3] are also shown in these figures. It is evident that a large part of the parameter space allowed by unitarity can be excluded with just 300 fb⁻¹ of data. It is worthwhile to note that, with ρ_h = 1, large values of ρ_H shift the area enclosed by the unitarity bounds away from the origin, making such signals much easier to exclude.

Conclusion

In summary, a search strategy for a heavy Higgs with generic dim-6 couplings to the SM gauge bosons has been presented. We go beyond the final state studied in Ref. [3] and focus on the two- and three-lepton final states, where the SM background can be substantially suppressed by means of boosted boson jets and jet substructure variables. The signal we are looking for can be sparse in the ggF and VBF production modes (and has thus escaped detection so far), but can be found in VH production with a proper set of cuts. This is a corner of phase space not yet probed at the LHC, and searching for such a generic heavy Higgs may shed light on physics beyond the SM.
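The following is a rough, hedged sketch of how such a Poisson likelihood-ratio toy study could be set up for a single counting bin; it is a simplified stand-in for the full procedure (which applies mass-window cuts per region and combines channels), and all names and conventions are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def q_stat(n, s, b):
    """-2 ln[L(n | s+b) / L(n | b)]: larger values are more background-like."""
    return -2.0 * (poisson.logpmf(n, s + b) - poisson.logpmf(n, b))

def expected_cl_splusb(s, b, n_toys=200_000, seed=0):
    """Median-expected p-value of the s+b hypothesis under background-only data:
    the fraction of s+b toys at least as background-like as the median b-only toy.
    A parameter point would be excluded at 95% CL when this drops below 0.05."""
    rng = np.random.default_rng(seed)
    q_spb = q_stat(rng.poisson(s + b, n_toys), s, b)   # toys under s+b
    q_b = q_stat(rng.poisson(b, n_toys), s, b)         # toys under b-only
    return float(np.mean(q_spb >= np.median(q_b)))
```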
A Dictionary Learning Approach with Overlap for Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless, the delivered dose reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, which includes the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. In addition to the usual sparsity-inducing and fidelity terms, this functional contains a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis function coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a priori known. We then apply the method to experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components, one per gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well adapted to strongly reduce the required dose and the number of projections in medical tomography.

Introduction

During the past two decades X-ray Phase Contrast Imaging (PCI) has shown a remarkable enhancement of image contrast and sensitivity for soft tissue. Reducing the deposited dose during PCI-CT is a crucial step towards an eventual clinical implementation of the technique. A solution to this problem consists in applying iterative CT reconstruction schemes with a priori knowledge of the solution. Signals occurring in Nature, once cleaned from noise, most of the time present an intrinsic sparsity when expressed in the proper basis. An image is intrinsically sparse when it can be approximated as a linear combination of a small number n of basis functions, with n ≪ N, where N is the image dimensionality. Piece-wise constant images, when expressed through their gradient, are an example of a sparse signal: they have nonzero values only at the borders of flat regions. For piece-wise constant images one can apply very efficient methods based on the minimization of a convex functional, also called a convex objective function, which contains a total variation penalty term. For other classes of images, such as medical images, one has to choose different solutions which are adapted to the intrinsic sparsity of the case under study (depending on the specific organ and imaging modality).
There are mainly two ways: either the sparsity structure is a priori known and an appropriate basis of functions can be built from the beginning, or it must be automatically learned from a training set with the dictionary learning technique [1]. This method consists in building an over-complete basis of functions, over an m × m domain, such that, given an m × m patch from the studied image, the patch can be approximated with good precision as a linear combination of a small number N ≪ m² of basis functions. The rationale for using an over-complete basis is that by increasing the basis dimension one increases the number of different patterns that can be fitted using just one or a few basis functions. Consider, as an example, images containing isolated and weakly curved lines: in this case we could use a basis where each function represents a line with a given intercept and slope, but other functions could be further introduced to fit other shapes. When we fit, patch by patch, a noisy image using the appropriate basis, the features of the original image will be accurately fitted with a small number of components. The noise, instead, has in general no intrinsic sparsity, and if it happens to have one, it is with high probability very different from the sparsity structure of the original image. Therefore the noise will be reproduced only if we allow a large number of components (the patch basis is over-complete, so it can represent the noise), while it will be effectively filtered out if we approximate the noisy image with a small number of components. The dictionary learning technique has recently been applied to tomography reconstruction using the Orthogonal Matching Pursuit (OMP) denoising procedure [2]. This procedure consists in first obtaining an over-complete basis of functions and then least-squares fitting every patch of the image using at most N_omp components selected from this basis. The components are selected heuristically, choosing each time the one having the maximum overlap with the remaining error. This optimization cannot be cast as a convex objective function optimization problem, because the linear combination of two candidate solutions can have more than N_omp components; in other words, the optimization domain is not convex. In this paper we present an advanced formalism which incorporates overlapping patches into a new convex functional, described in the Materials and Methods section. For the solution of our functional minimization problem we have applied recently developed tools from the field of convex optimization [3]. Results on both synthetic and experimental data are compared to state-of-the-art reconstruction methods: we compare the obtained images to Equally Sloped Tomography (EST) [4] and TV minimization [5]. Moreover, we also apply convex optimization to another formulation of the dictionary learning objective function, which had been used, with the nonconvex OMP procedure, by Xu et al. [1].

Dictionary Learning

In this section we first introduce the decomposition of an image into nonoverlapping patches and the related objective function for denoising. Then we introduce our original formalism which ensures, using overlapping patches, a smooth transition at the patch borders, and finally we apply this formalism to CT reconstruction.
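For illustration, a minimal sketch of patch-based dictionary learning and OMP sparse coding with post-process averaging of overlapping patches (the standard approach discussed above, not the convex functional introduced in this paper) could look as follows; it uses scikit-learn and hypothetical parameter values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def learn_dictionary(training_image, m=7, n_atoms=100):
    """Learn an over-complete basis of n_atoms m*m patch functions from a training image."""
    patches = extract_patches_2d(training_image, (m, m), max_patches=20000, random_state=0)
    patches = patches.reshape(len(patches), -1)
    patches -= patches.mean(axis=1, keepdims=True)          # remove the DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    dico.fit(patches)
    return dico

def denoise(noisy_image, dico, m=7, n_nonzero=5):
    """Sparse-code every patch with OMP (at most n_nonzero atoms) and average the
    overlapping reconstructions, i.e. the post-process averaging mentioned in the text."""
    patches = extract_patches_2d(noisy_image, (m, m)).reshape(-1, m * m)
    means = patches.mean(axis=1, keepdims=True)
    dico.set_params(transform_algorithm="omp", transform_n_nonzero_coefs=n_nonzero)
    codes = dico.transform(patches - means)
    recon = (codes @ dico.components_ + means).reshape(-1, m, m)
    return reconstruct_from_patches_2d(recon, noisy_image.shape)
```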
In this approach an iterative loop between the sinogram space and the real space is used: a fidelity term is imposed in the sinogram space, while a sparsity-inducing term is introduced in the real space. We denote by 1_p the indicator function of patch p, which is equal to 1 over the patch support (typically an m × m square) and zero elsewhere. For nonoverlapping patches covering the whole domain we have

Σ_p 1_p(i) = 1,

where i denotes the pixel position and can be thought of as a two-dimensional vector. We are looking for the ideal solution x, which we express through the vector w of its coefficients in the basis of patch functions:

x_i(w) = Σ_p Σ_k w_{kp} Q_k(i − r_p) 1_p(i),    (2)

where the set {Q_k} is an over-complete basis of functions over the patch support, r_p is the corner of patch p closest to the origin, and w_{kp} is the component (k, p) of the vector w which multiplies the basis function Q_k in patch p. The denoising problem, given an image y, consists in finding the minimum of a functional F(w) = f(w) + g(w), which is the sum of two terms. The term f(w) = ||y − x(w)||²₂ links the solution to the data y. The other term, g(w), contains the a priori knowledge about the solution. This way of splitting the functional into two terms has its roots in the Bayes theorem. From a probabilistic point of view, the denoising problem consists in finding, given a noisy image y of an object, the most probable object x that can generate that image. We represent the object x through the patch coefficients w. The Bayes theorem, applied to denoising, states that the conditional probability of w being the exact object given a measurement y is the product of the probability of y being the measurement given the exact solution w, times the a priori probability of w. Assuming Gaussian noise, the conditional probability of y being the measurement given the exact solution w is exp(−||y − x||²₂/(2σ²)), where x is expressed through the patch coefficients w by Equation (2). The exact a priori probability of w is unknown, but we approximate it as exp(−g(w)/(2σ²)). This function expresses our a priori knowledge that a non-sparse solution, having a high value of the L₁ penalization term g(w) = β||w||₁ (a sparsity-inducing term [6]), has low probability. The most probable solution w* is obtained by finding the minimum of F(w):

w* = argmin_w ( f(w) + g(w) ).

The solution can be obtained with the iterative shrinkage-thresholding algorithm (ISTA) [3],

w^{(n+1)} = T_{cβ} ( w^{(n)} − c ∇f(w^{(n)}) ),

where T_a is the shrinkage (soft-thresholding) operator defined as

T_a(u) = sign(u) max(|u| − a, 0),

and c is a positive number smaller than the inverse of the Lipschitz constant L of ∇f, which satisfies

||∇f(w₁) − ∇f(w₂)||₂ ≤ L ||w₁ − w₂||₂   for all w₁, w₂.

The ISTA algorithm can be accelerated by the Fast Iterative Shrinkage Thresholding (FISTA) method [7]. In its non-overlapping version, image denoising with patches is able to detect features that lie within the field of the patch: if a line crosses the central region of a patch, it will be detected provided the basis of functions has been trained to detect such lines. But when a line intersects only one point in a corner of the patch square, the signal of this point is indistinguishable from that of a noisy point, no matter how the dictionary was trained. For this reason the patch denoising technique is often used with overlapping patches and post-process averaging [8]. In this case the minimization problem is first solved for each patch separately, and then averaging is performed in the overlapping regions.
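A minimal NumPy sketch of the ISTA iteration for this kind of functional is given below; the operator A mapping patch coefficients to an image (as in Equation (2)) and its adjoint At are assumed to be provided by the caller, and the step size must respect the Lipschitz bound mentioned above.

```python
import numpy as np

def soft_threshold(u, a):
    """Shrinkage operator T_a(u) = sign(u) * max(|u| - a, 0)."""
    return np.sign(u) * np.maximum(np.abs(u) - a, 0.0)

def ista(y, A, At, w0, beta, step, n_iter=200):
    """ISTA for min_w ||y - A(w)||_2^2 + beta * ||w||_1.
    A maps patch coefficients to an image; At is its adjoint.
    'step' must be smaller than 1/L, with L the Lipschitz constant of the gradient."""
    w = w0.copy()
    for _ in range(n_iter):
        grad = 2.0 * At(A(w) - y)          # gradient of the fidelity term
        w = soft_threshold(w - step * grad, step * beta)
    return w
```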
In this study we do not follow this procedure; instead we add an overlap term to the objective function. We choose a system of patches which covers the whole domain, and we allow for overlap. In our implementation the set of all patches is formed by a set of non-overlapping m × m square patches covering the image, plus the translated copies obtained by the translation vectors (i_x s, i_y s), where i_x, i_y, s are positive integers, s is a constant step size selected by the user and i_x, i_y < m/s. In the case of overlapping patches the sum of all indicator functions is greater than or equal to one:

Σ_p 1_p(i) ≥ 1.

We define the core indicator functions 1^c_p, which indicate the core of the patches and form a non-overlapping covering:

Σ_p 1^c_p(i) = 1.

In our implementation, when the translational step size s is equal to one, the core region is the single pixel at the center of the patch; the core region gets larger when the step size increases. For a given point i, 1^c_p(i) indicates which patch p has its center C_p closest to i:

1^c_p(i) = 1 if p = argmin_q |i − C_q|, and 0 otherwise.

The solution x is composed as a function of w using the central part of the patches, as indicated by the functions 1^c_p:

x_i(w) = Σ_p Σ_k w_{kp} Q_k(i − r_p) 1^c_p(i).

We now introduce the operator P, which is the projection operator for tomography reconstruction and the identity for image denoising. The functional F(w) whose minimum gives the optimal solution is written, for both applications, as

F(w) = ||y − P(x(w))||²₂ + β ||w||₁ + r Σ_i Σ_p 1_p(i) [ Σ_k w_{kp} Q_k(i − r_p) − x_i(w) ]²,

where the factor r weights a similarity-inducing term which pushes all the overlapping patches touching a point i towards the value x_i(w) of the global solution x(w) at that point. For future reference we call ||y − P(x(w))||²₂ the fidelity term. The factor r also plays the role of a regularisation parameter: consider, as an example, the case where the set of overlapping patches is generated with a translational step s = 1. In this case the core indicator function has a 1 × 1 pixel domain and, without the r term, we could obtain a perfect fit of an arbitrary image by using, for each patch, an arbitrary component chosen randomly among those which are non-zero at the core pixel. The solution is found with the FISTA method, using the gradient of the smooth part of the functional; for the fidelity term it reads

∂f/∂w_{kp} = −2 Σ_i Q_k(i − r_p) 1^c_p(i) [ P^T ( y − P(x(w)) ) ]_i ,

where P^T is the adjoint operator of P, called the back-projection operator in the case of tomography, and again the identity for image denoising.

Floating Solution Functional

Xu et al. [1] have recently used an objective function which differs from ours in that their global solution x is a free variable, while ours is a function of w, x(w). Their objective function contains the same fidelity and sparsity-inducing terms, plus a quadratic term coupling the free image x to its patch-wise sparse approximation. Xu et al. used the non-convex OMP procedure for the minimisation of their functional. In this paper we also compare their functional to ours, using the FISTA optimization for both.

Total Variation penalisation

In the total variation (TV) method [5] one minimises a convex functional given by the sum of the fidelity term ||y − P(x)||²₂ and of a gradient-sparsity-inducing term β_TV TV(x), where β_TV is a regularisation parameter and TV(x) is the isotropic total variation of the image x:

TV(x) = Σ_i sqrt( (∇_h x)_i² + (∇_v x)_i² ),

with ∇_h and ∇_v the horizontal and vertical discrete gradients.

Equally Sloped Tomography

Equally Sloped Tomography (EST) is a Fourier-based iterative reconstruction method that iterates back and forth between real and Fourier space, utilizing an algebraically exact Pseudo-Polar fast Fourier transform [4,9]. Measured projections (in this case obtained by the Radon transform) are inserted into Fourier space thanks to the fractional Fourier transform (FrFFT).
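For completeness, the following is a hedged sketch of the FISTA acceleration on top of the gradient/soft-thresholding step, with the projector P and back-projector Pt passed in as functions (identities for plain denoising); it illustrates the scheme, not the PyHST implementation.

```python
import numpy as np

def fista(y, A, At, P, Pt, w0, beta, step, n_iter=200):
    """FISTA for min_w ||y - P(A(w))||_2^2 + beta * ||w||_1.
    A/At map patch coefficients to an image and back; P/Pt are the projector
    and back-projector (identities for plain denoising)."""
    w = w0.copy()
    v = w0.copy()          # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        grad = 2.0 * At(Pt(P(A(v)) - y))
        w_new = np.sign(v - step * grad) * np.maximum(np.abs(v - step * grad) - step * beta, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = w_new + ((t - 1.0) / t_new) * (w_new - w)   # Beck-Teboulle momentum
        w, t = w_new, t_new
    return w
```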
Then the Pseudo-Polar Fast Fourier transform (PPFFT) and its adjoint are used to transform the images back and forth between Fourier and object space. During each iteration, physical constraints, including the sample boundary and the positivity of the coefficients, are enforced in real space, while the measured data are applied in Fourier space. The algorithm, monitored by an error metric, is guided towards the minimum that is consistent with the experimental data. To prevent any human intervention, the algorithm is automatically terminated when no further improvement can be made. In this case 50 iterations were needed for a good convergence of the algorithm using 80 projections. The EST algorithm has been shown to allow a drastic reduction of the number of projections for conventional CT [9] and phase contrast imaging [4]. Fahimian et al. [9] demonstrated that the image quality and contrast obtained with EST are comparable with those of other iterative reconstruction schemes, such as TV minimization or expectation-maximization statistical reconstruction, with a faster convergence.

Numerical Experiment

According to the Shannon-Nyquist criterion, to achieve a proper reconstruction in conventional CT the number of angular projections required is N ≈ (π/2) D/P, where D is the thickness of the sample and P the detector pixel size. One scenario for reducing the deposited dose during a CT scan is to reduce the number of projections. To investigate the potential of the dictionary learning method on synthetic data, we use the standard 512 × 512 pixel Lena image as phantom. The dictionary is learnt from a different image than the one to be reconstructed; in this case we used the boy image shown in Fig. 1a, and the resulting dictionary is shown in Fig. 1b. Note that we intentionally did not use the standard Shepp-Logan phantom in this study, since it is piece-wise constant and therefore does not reflect the complexity of a phase contrast medical image. The sinogram is obtained by projecting the image at 80 angles between 0 and 180 degrees using the Radon transform. We have optimized the regularisation parameters by maximizing an improvement factor which quantifies the improvement obtained with the TV or patches methods with respect to a simple FBP reconstruction; it is built from the Structural SIMilarity index S [10] (equal to 1 when the images are identical) computed between the exact solution x̂ (the noise-free Lena) and the reconstruction x_fbp, x_TV or x_patches obtained with one of the three methods. In Fig. 2 we show the dependency of this quality factor on the regularisation parameter β for our overlapping patches method (squares), and on β_TV (dots) for the total variation method. For our method we used a similarity-inducing term weight r fixed to 1000, the patch basis shown in Fig. 1 and a step size of 3. We also performed this reconstruction with the floating solution functional, using the same optimisation method, FISTA, used for our functional. We optimized the β and r values by scanning over a 2D grid and comparing to the ground truth. If the ground truth is not available, statistical methods such as the discrepancy principle [11] or generalized cross-validation [12] can be used to select the optimal regularization parameter in future applications of the method. We obtained no significant difference between the results obtained with our functional and the floating solution: the SSIM is the same up to the third significant digit, and no significant difference can be detected in the final images.
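The exact analytic form of the improvement factor is not reproduced here, but the parameter scan it drives can be sketched as follows: for each trial value of the regularisation parameter, reconstruct and score the result against the known phantom with the SSIM. The function names and the use of scikit-image are assumptions for illustration only.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def scan_regularization(ground_truth, reconstruct, betas):
    """Scan a regularization parameter and score each reconstruction with SSIM
    against the known phantom; 'reconstruct' is any callable beta -> image."""
    scores = {}
    for beta in betas:
        rec = reconstruct(beta)
        scores[beta] = ssim(ground_truth, rec,
                            data_range=ground_truth.max() - ground_truth.min())
    return max(scores, key=scores.get), scores

# usage sketch (hypothetical solver):
# best_beta, curve = scan_regularization(lena, lambda b: my_solver(sino, b),
#                                        np.logspace(-7, -3, 9))
```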
The convergence rate of FISTA, shown for both functionals (our method in blue and that of [1] in red) in Fig. 3, is instead much faster with our functional form. The comparison of the results obtained by the different reconstruction algorithms is shown in Fig. 4. We present the reconstruction results for four methods: filtered backprojection (a), EST [9] (b), total variation penalisation [5] (c) and our overlapping patches method (d). These results are to be compared to the original cropped image shown in subfigure (e). In this figure it is clear that the FBP image suffers from strong streaking artefacts, because the number of projections is well below the one required by the Shannon-Nyquist sampling theorem. The EST reconstruction removes some of these artefacts at the price of a blurred image with weak spatial resolution. Despite the fact that our technique gives a Q factor only slightly better than that of the TV method, the obtained result looks much better upon visual inspection. The TV result shows a strong and irregular tessellation of the skin in those regions which have an illumination gradient. The hat feathers region (second row) is better resolved with the DL method: the feathers look natural in the patches result, while the TV result produces strong grey-level distortions which vary irregularly along the feathers. The EST result shows a really good preservation of the tiny structure of the feathers but is noisier. The hat itself looks well preserved in the patches result while, in the TV image, the hat borders have irregular shapes. The SSIM values with respect to the original image are reported for each subfigure and confirm these observations. Note that the visual difference between TV and DL seems greater than their SSIM values suggest. 1000 iterations were used for the DL method. The computation time is 27 s on a Tesla K20m GPU card using the open source PyHST code [13]. On the same GPU card the computation time is less than 1 s for the FBP and 27 s for the TV method with the same number of iterations. The EST method has not yet been implemented on GPU and is therefore much slower (5 min). Another strategy to further reduce the dose in tomography is to acquire with a smaller number of photons on the detector. To simulate this lack of photons we added Poisson noise to the sinogram data, with a standard deviation √λ equal to 0.3% of the sinogram value. We show in Fig. 5 the reconstruction results for the different algorithms. The same conclusions as for the noise-free test can be drawn for the Poisson-noise case: the dictionary learning method gives the best results.

Phase Contrast Tomography

In this section we apply our method to medical tomography of a human sample imaged using X-ray Phase Contrast Imaging (PCI). PCI has shown an enhancement of soft tissue visualization in comparison to conventional imaging modalities [14]. It employs the dual property of X-rays of being simultaneously absorbed and refracted while passing through tissue. Among all the phase contrast techniques, we chose to test our method on analyzer-based PCI [15,16] because of the high sensitivity of this modality. Moreover, to the best of our knowledge, it is the only modality that has shown results for investigating large and highly absorbing biological tissues (i.e. full human breasts) at a clinically compatible dose [4]. In the analyzer-based PCI technique, the projection data contain a signal which is proportional to the gradient of the X-ray phase in one direction (i.e.
the direction perpendicular to the plane formed by the incoming and diffracted X-rays on a perfect Bragg crystal, which is used for analyzing the radiation passing through the sample). More details on the principles and technical aspects of PCI are available in [14]. Briefly, the analyzer-based imaging approach produces a mixed signal which originates from both X-ray absorption and refraction (i.e. the phase derivative) [17]. The signal recorded by the detector is therefore very close to those recorded with other PCI techniques such as Grating Interferometry (GI) [18] or Edge Illumination (EI) [19]. All these methods are differential PCI methods and produce similar signals; therefore the proposed approach can in principle be generalized. When the object is rotated around an axis (the Z-axis, for instance), this signal contains contributions from the X and Y gradient components, where the X and Y axes co-rotate with the sample. The two components are de-phased by a rotation angle of 90 degrees and can be reconstructed separately by multiplying the sinogram beforehand by the cosine and sine of the rotation angle. We apply our formalism considering that the reconstructed and learning images are vectorial objects: the value associated with a pixel is not a scalar but a two-component vector. The studied sample is a 7 cm human breast imaged with a pixel size of 100 µm. The experiment was conducted at the biomedical beamline of the European Synchrotron Radiation Facility (ESRF). The sample was a human breast mastectomy specimen, and the study was performed in accordance with the Declaration of Helsinki. A monochromatic X-ray beam with an energy of 60 keV was used. The training set is obtained from another breast sample imaged with the same technique but with a high quality reconstruction. We consider a slice image for which the phase retrieval has been performed [20]. Then we apply a Sobel filter to extract the two derivative components and use the KSVD algorithm [21]. In this experiment 5 iterations were used to obtain the 100 atoms. Fig. 6 shows the patch basis functions that we use to fit both components at the same time. The patch size is 7 × 7 pixels and each basis function is displayed as a 14 × 7 rectangle whose upper 7 × 7 part is the X component and whose lower part is the Y component. In a previous work [20] it was demonstrated that the CT reconstruction of the refractive index obtained by first reconstructing the CT gradient-field images and then applying a phase retrieval procedure yields a better image quality than performing phase retrieval first and then the reconstruction. The method is more robust with respect to noise, which may be a critical aspect in low dose tomography. In this case, the noise level may be such that it covers the information in the regions where one gradient component of the refractive index has values close to zero. On the contrary, in those same regions the other component of the gradient of the refractive index has high values and is thus less sensitive to noise. As a result, the information which is lost in one direction may be somehow retrieved by using the signal contained in the gradient image corresponding to the perpendicular direction. Additionally, when we use the vectorial approach, the information of the two reconstructed gradient components is intrinsically correlated by the dictionary, which increases the robustness of the method. The result of the reconstruction obtained with the filtered backprojection algorithm and 1000 projections is shown in Fig. 7a.
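A minimal sketch of the cosine/sine separation of the differential sinogram into the two gradient-component sinograms, as described above, is given below; the array layout (angles along the first axis) is an assumption made for illustration.

```python
import numpy as np

def split_gradient_sinograms(diff_sino, angles_deg):
    """The measured differential sinogram mixes the co-rotating X and Y phase-gradient
    components, de-phased by 90 degrees in the rotation angle. Multiplying row-wise by
    cos(theta) and sin(theta) yields the two sinograms that are reconstructed separately.
    diff_sino: array of shape (n_angles, n_detector_pixels)."""
    theta = np.deg2rad(np.asarray(angles_deg))[:, None]
    sino_x = diff_sino * np.cos(theta)
    sino_y = diff_sino * np.sin(theta)
    return sino_x, sino_y
```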
In this image, radiologists could easily identify the skin, fat and glandular tissue. Fig. 7 is the reconstruction of a 765 × 765 pixel slice using only 200 of the 1000 available projections. The upper left square is a zoom of the region marked in the subfigure. The used projections cover, with constant spacing, a 180 degree range. The right column is the reconstruction with our method for the X and Y components, while the left column (subfigures 7c and d) is reconstructed with the standard FBP using all 1000 available projections. Using our method, we can still generate a high quality image with only one fifth of the projections which would otherwise be necessary to obtain a high quality reconstruction with the standard FBP method. Visually, the difference between the FBP results obtained with the full data set and our method with a five-fold reduction of the data is barely noticeable. The borders of structures like skin layers, fatty tissues and collagen strands are easily identified. The obtained results are very promising and a systematic evaluation for clinical application is underway. The radiation dose absorbed by the sample during 200 projections is comparable to that of a standard clinical dual-view (2D) mammography (3.5 mGy). For the sake of comparison we report in Fig. 8 the reconstructions obtained using 200 projections with FBP (subfigures b, g), EST (subfigures d, i) and our method (subfigures f and k), in comparison with the full dose image. We also report the results obtained by penalizing the L₁ norm of the reconstructed result (subfigures c and h): since our signal is a derivative, penalizing the derivative modulus is similar to applying the TV method to the non-derived object. The top inset is in a zone close to the skin with a blood vessel; the bottom insets are zooms of a zone with micro-calcifications. Note that micro-calcifications are of high interest for medical diagnosis because they may help identify malignant masses. We report the SSIM values obtained by comparing the images with the FBP reconstruction using the full set of projections. In this figure it is clear that the overall image quality of the TV minimization is poor, as is that of the FBP with 200 projections. The image quality of the EST 200 reconstruction is lower than that of DL, especially in terms of spatial resolution and sharpness: the EST image is more blurred, and the DL image looks more similar to the original full dose image. The DL image is indeed sharper, with a clear delineation of the small micro-calcifications and of the blood vessel. The EST reconstructed image does not show the little micro-calcification in the middle of the image; moreover, in the top inset small round structures disappear with the EST reconstruction whilst they are preserved in the DL image. The SSIM values confirm the visual inspection.

Conclusion

For a decade, iterative CT reconstruction algorithms have demonstrated a possible dose reduction in conventional CT. To the best of our knowledge, few works have dealt with applying those algorithms to phase contrast tomography [4,22,23]. We have presented a new convex functional which implements, in a mathematically pure form, the concept of overlapping-patches averaging, which had been used so far within a non-convex formalism. The resulting algorithm is efficient and well adapted to strongly reducing the noise in a natural image.
A comparison with other iterative algorithms has been carried out on the Lena image, showing that our method outperforms TV minimization and Equally Sloped Tomography. The method gives the best results with few projections and is less sensitive to additional noise. Compared to the state-of-the-art dictionary learning method [1], our proposed approach converges faster to an equivalent image quality. The method was applied to a medical diagnostic case by considering tomographic data of a whole cancer-bearing human breast acquired with phase contrast imaging. A vectorial approach, consisting of reconstructing the gradients of the index of refraction, was adopted. We demonstrated that, thanks to this approach, it is possible to reduce the deposited dose in breast CT by a factor of 5 compared to the standard filtered backprojection while keeping a comparable image quality. Although we used this specific example as a proof of principle in this study, the method we developed and described can easily be applied to other tomography fields where a limited dose or a rapid acquisition time is a requirement. The numerical results have been generated with PyHST [13,24], the ESRF tomography reconstruction code, which uses the GPU implementation of the presented methods.

Fig. 7. Reconstruction of a computed tomographic slice of the breast. The images on the first and second rows are the X and Y phase gradients, respectively. In the left column the results of the reconstruction obtained with the FBP method using the full set of data are reported. In the right column the results of our method using one projection out of five are shown. For these reconstructions we set β = 3 × 10⁻⁶ and r = 10.

Ethics Statement

The study was performed in accordance with the Declaration of Helsinki. IRB approval was granted by the ethics committee of the Ludwig-Maximilians University. Written informed consent was gathered before enrollment in the study.
Research on the Property Income Inequality Effect of Fiscal Finance

"Creating conditions for more people to have property income" has become a national policy since the 17th National Congress of the Communist Party of China. Based on micro survey data from the Chinese Family Panel Studies (CFPS) in 2010, 2012, 2014 and 2016 and macro panel data at the provincial level, a log-linear equation was built to estimate the impact of micro and macro factors on property income. Furthermore, the contributions of fiscal expenditure and financial development to property income inequality are identified using the regression-based inequality decomposition method. This research reveals that fiscal expenditure improves residents' property income and slightly reduces the inequality of property income distribution. Financial development improves residents' property income but aggravates the inequality of property income distribution. However, there are significant differences between regions: in the eastern and central regions, the inequality of property income distribution greatly benefits from fiscal expenditure, while in the northwest regions fiscal expenditure makes property income inequality even worse. Therefore, the focus of sustainable financial development is to reduce property income inequality through the establishment of an effective government and the improvement of the rule of law.

Introduction

The concept of "creating conditions for more people to have property income" was first introduced at the 17th National Congress of the Communist Party of China in October 2007. Propositions related to enriching residents' property income were proposed at the 18th and 19th National Congresses of the Communist Party of China, respectively. Although residents' property income grew rapidly in China over the past decade, it has not become an important source of national income as expected, according to previous research. The purpose of this article is to analyze the impact of fiscal expenditure and financial development on property income and to measure their contributions to property income inequality. Therefore, both fiscal and financial factors are included in the framework of income distribution in this paper. With the regression-based inequality decomposition method, this paper quantitatively measures the degree of property income inequality and identifies the factors that affect residents' property income, using a large amount of micro survey data together with macro data. Finally, this article provides policy recommendations for the optimization and reform of the fiscal, financial and taxation systems based on the results of the empirical analysis. The results show that financial development can improve the level of residents' property income, though it can be harmful to the equality of property income distribution. Fiscal expenditure is beneficial to residents' property income, but the regional differences are significant; overall, fiscal expenditure slightly reduces the inequality of property income. As a result, the focus of sustainable financial development is to reduce property income inequality and to build an effective government and legal system. This paper provides a new perspective and research direction on the adjustment of the financial market system in the field of primary distribution and the reform of fiscal and tax systems in the field of redistribution.
More importantly, this article takes the lead in analyzing macroeconomic policies at the microeconomic level. By combining macro data with micro household survey data, the effect of fiscal expenditure and financial development on residents' property income inequality can be measured. In addition, this paper quantitatively measures the contribution of macro and micro factors to property income inequality based on microsimulation analysis. Finally, suggestions on the reform of the tax system and the optimization of the property income distribution are provided according to the quantitative results. The rest of the paper is organized as follows. Section 2 introduces the relevant literature on residents' property income. Section 3 provides a brief description of the data derived from the Chinese Family Panel Studies (CFPS) and explains the design of the regression-based inequality decomposition method. Section 4 presents the results of the property income determining equation. Section 5 concludes the study and Section 6 provides policy recommendations according to the research findings.

Literature Review

According to the experience of developed countries, property income becomes an important source of residents' income when per capita GDP exceeds 2000 US dollars, and China reached this level as early as 2006 (Tang and Lai 2013). Based on data released by the International Monetary Fund, per capita GDP in China reached $8643 in 2018, more than four times the threshold level. According to the National Bureau of Statistics (NBS), the proportion of property income in national income increased from 2.68% at the end of 2006 to 8.11% at the end of 2017. However, a large number of empirical studies based on household surveys show that the proportion is only about 3%, and property income has not become an important source of national income as expected. Although property income cannot be regarded as a primary source of national income, its growth rate is 5% higher than the growth rate of disposable income in both urban and rural areas, according to the NBS. As the studies conducted by the NBS mainly rely on samples of rural residents, data from urban areas and from the whole country need to be supplemented. Consequently, this paper employs microdata such as the China Household Income Project (CHIP) and the Chinese Family Panel Studies (CFPS) to expand the original research. Table 1 divides the property income of urban and rural residents in 2002 and 2012 into five income groups. It can be noticed that the bottom 20% of income groups hold only 0.3% of property income, while the top 20% hold 77.37%. In other words, a great amount of social wealth is concentrated in a small group of people, and residents' property income is extremely likely to become an important factor in income inequality. In recent years, the main relevant research has focused on the following aspects. The first is the impact of unequal property income distribution on social stability. Inequality of property income was first introduced by Lampman as a source of social problems in 1962 (Lampman 1962). Nobel Prize winner Stiglitz pointed out the inefficiency of governments in income redistribution: he stated that the existing system continuously transfers wealth from the bottom of society to the top, which leads to slow GDP growth and social instability (Stiglitz 2013).
The second is the impact of property income on social welfare. In 1953, Harsanyi proposed that there is a two-way effect between the level of personal welfare and the social income distribution: the level of personal welfare depends on the social income distribution system, and the level of personal welfare also affects the social income distribution system (Harsanyi 1953). Milanovic believes that unequal property income distribution results in greater redistribution, which distorts the tax burden and weakens economic growth (Milanovic 2000). Similarly, Bourguignon suggests that the suitability of economic policies leads to different levels of residents' property income (Bourguignon 2003). The third is the impact of property income on economic growth. There is no consistent conclusion on whether property income has a positive or negative effect on economic growth. For example, scholars such as Galor and Zeira (1993), Persson and Tabellini (1994) and Xu et al. (2003) concluded, based on empirical research, that income inequality reduces economic growth. However, Fields (2003) obtained the opposite conclusion with different measurement methods, namely that the impact of income inequality on economic growth is positive. Finance is the foundation of governance. Modern finance theories suggest that finance can adjust the primary distribution of residents' property income through indirect adjustment, so as to realize a fair distribution and ensure social stability. The government's influence on income distribution is mainly realized by adjusting the scale and structure of fiscal expenditure. In developed countries, the proportion of fiscal expenditure in GDP is normally over 30%, while in China the percentage is less than 25%. With respect to the structure of fiscal expenditure, Yang and Fang found that transfer expenditure and security spending are the main parts of fiscal expenditure in the United States, while fiscal expenditure in China mainly focuses on investment and government consumption (Yang and Fang 2010). In a market economy, fiscal expenditure is realized primarily through the financial system, especially for investment spending. Therefore, the development of a financing mechanism is included in the reform of the financial system in China. Liu and Fu divided this investment and financing reform process in China into four stages (Liu and Fu 2018). The first stage is the exploration of the investment and financial system (1979–1991), when loans were introduced to finance infrastructural projects. The second stage is the establishment of government investment and the reform of the financial system during the period of the socialist market economy (1992–2002), when the Policy Banks and financial asset management companies were founded and bond issuance became the main channel of infrastructure financing. The third stage is the reform of government investment and the financial system during the period of the improved socialist market economy (2003–2012), when infrastructure construction was financed through local financing platforms. The fourth stage is the period of comprehensively deepening reform (from 2013 to now), during which local governments mainly use debt replacement and the standardized PPP (Public-Private Partnership) model to allocate bank credit resources. Figure 1 shows the relationship intuitively.
At the early stage of Reform and Opening-up, the reform of using loans instead of allocation changed the previous accounting concept of fiscal expenditure for state-owned enterprises. With the development of Reform and Opening-up, national wealth has increased year by year. However, if the total amount of deposits and loans is replaced by the annual increment, the development path is similar to that of fiscal reform and fluctuates with changes in fiscal policy. It can be seen that finance in China, especially banking, was set up for fiscal purposes and has developed along with fiscal reform. Although the history of financial development theory is limited, existing studies show that financial development is related to economic growth and income distribution, and it may widen or narrow the gap in income distribution. The original research on the relationship between finance and the economy started in 1912: Schumpeter believed that finance can identify innovative entrepreneurs and provide credit support for their innovation, and therefore promote economic growth (Schumpeter 1912). McKinnon and Edward (1973) studied the relationship between financial development and economic growth from different perspectives and proposed the theories of "financial repression" and "financial deepening", which are the foundation of financial development theories in developing countries. They suggest that excessive government intervention negatively affects financial efficiency and economic development, and therefore financial liberalization should be advocated; the marketization of the interest rate can increase savings and investment and help to achieve the goal of financial and economic growth. However, Stiglitz and Weiss hold a different opinion on this proposition (Stiglitz and Weiss 1981): they suggested that information asymmetry in credit cooperation is the biggest problem in the financial market. Stiglitz put forward the financial restraint theory in 1993 and pointed out that governments should support financial institutions in guiding enterprises and residents through a series of financial restraint policies such as deposit regulation and market access restrictions. The financial restraint theory holds that governments can solve the problem of financial market failure and promote economic growth, and therefore financial supervision should be adopted and strengthened. On the other hand, the financial repression theory suggests that the government's control of the financial market distorts the allocation of resources and damages economic growth, and thus financial liberalization should be advocated. In reality, after the Second World War, the phenomenon of financial repression did not appear in Thailand, Indonesia, Malaysia or China, and the exercise of financial restraint theory also failed in the United States and Europe. As early as 1969, Goldsmith systematically elaborated the concept of financial structure (Goldsmith 1969). He suggested that financial development is about the change of financial structure, and that the evolution of the financial structure is the process of financial development. He also proposed eight indicators to measure financial structure, including the Financial Interrelations Ratio (FIR). This indicator is very complicated in its original design, and it is used as a measure of financial structure and of the scale of financial development.
To measure the financial level of a country or a region, McKinnon proposed a quantitative index of the financial level, M2/GDP, based on Goldsmith's theory (McKinnon and Edward 1973). This index reflects the function of payment intermediation and saving in the monetary and financial systems. Although the measures of Goldsmith and McKinnon reflect the scale of financial development in a country, they ignore the ability of finance to divert savings into investment.
In 1992, the Asian Development Bank optimized McKinnon's index by replacing M2 with the credit volume of the private sector, and this indicator can be used to represent the allocation efficiency of credit resources.

Data, Variables and Research Methods

A log-linear equation was constructed to estimate the impact of macro and micro factors on property income. On this basis, the regression-based inequality decomposition method was adopted to identify the contribution of fiscal instruments and financial development to property income inequality (Fields 2003). First, we construct the income determining equation:

ln Y = β₀ + β₁x₁ + β₂x₂ + . . . + β_K x_K + ε,    (1)

where Y is individual property income and x₁, x₂, . . . , x_K are the K factors affecting property income. In the empirical regression, the selected factors mainly include individual factors, village or neighborhood committee factors and regional macro factors. β₀ is a constant term, β₁–β_K are the other parameters to be estimated, and ε is the random disturbance term. Equation (1) can be written in matrix form as

ln Y = a Z,    (2)

where a = [β₀, β₁, . . . , β_K, 1] and Z = [1, x₁, x₂, . . . , x_K, ε]. Denoting the constant 1 as x₀ and ε as x_{K+1}, there are K + 2 variables in Z. If the variance of both sides of Equation (1) is calculated, the left side of Equation (1) becomes a simple inequality measure, namely the logarithmic variance. According to the covariance theorem of random variables (Mood et al. 1974), the following equation holds:

cov( Σ_{k=0}^{K+1} a_k x_k , ln Y ) = Σ_{k=0}^{K+1} a_k cov( x_k , ln Y ).    (3)

The left side of Equation (3) is the covariance of ln Y with itself, which is actually the variance of ln Y, so we get

Var( ln Y ) = Σ_{k=0}^{K+1} a_k cov( x_k , ln Y ).    (4)

Dividing both sides of Equation (4) by Var(ln Y) gives

s_k = a_k cov( x_k , ln Y ) / Var( ln Y ),   with  Σ_{k=0}^{K+1} s_k = 1,    (5)

where s_k is the relative contribution weight of the k-th factor to income inequality. If we ignore the influence of the random disturbance on inequality, we get

Σ_{k=1}^{K} s_k = R²,    (6)

where R² is the coefficient of determination of the log-linear regression model in Equation (1). At this point, the relative contribution weight of the k-th factor to income inequality can be expressed as

s_k* = s_k / R² = β_k cov( x_k , ln Y ) / ( R² Var( ln Y ) ).    (7)

The data used in this paper come from the CFPS. The CFPS is an interdisciplinary survey covering more than 16,000 families in 25 provinces, municipalities and autonomous regions in mainland China. The samples include all kinds of data on the sample families, such as changes and dynamic relationships within families, economic activities, education and health conditions. Household income data of the CFPS in 2010, 2012, 2014 and 2016 are used in this paper, and household property income is chosen as the explained variable. In order to make the measurement as accurate as possible, the first step is to clean the database. Firstly, as the property income in the CFPS dataset belongs to the whole family, the size of property income may differ depending on the number of family members; therefore, it is necessary to control for family size in the model. Secondly, based on existing studies, it is common to use the relevant characteristics of residents as explanatory variables when constructing income determining equations (Li and Zhao 1999; Luo and Wang 2012; Li and Liu 2013; Luo 2018). During the original data collection of the CFPS database, all family members were surveyed without a clear specification of the head of the household. To remedy this limitation, the member with the highest income in each family is considered the head of the household, and his or her characteristic variables are introduced into the model as explanatory variables.
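As an illustration, the regression-based decomposition of Equations (1)–(7) can be computed directly from a factor matrix and log property income, as in the following NumPy sketch (variable names are illustrative, and the actual estimation in this paper also includes province and period dummies).

```python
import numpy as np

def fields_decomposition(X, log_y):
    """Regression-based inequality decomposition (Fields 2003).
    X: (n, K) matrix of factors; log_y: (n,) vector of log property income.
    Returns OLS coefficients, the raw shares s_k and the shares rescaled by R^2."""
    Xc = np.column_stack([np.ones(len(log_y)), X])        # add the constant term
    beta, *_ = np.linalg.lstsq(Xc, log_y, rcond=None)
    var_y = np.var(log_y, ddof=1)
    # s_k = beta_k * cov(x_k, ln Y) / Var(ln Y) for each factor (constant excluded)
    s = np.array([beta[k + 1] * np.cov(X[:, k], log_y)[0, 1] / var_y
                  for k in range(X.shape[1])])
    r2 = s.sum()                                           # equals R^2 of the regression
    return beta, s, s / r2                                 # s / r2: relative contributions
```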
These variables include age, gender, residence ("1" for urban areas and "0" for rural areas), education level, occupation and health condition ("1" for "relatively healthy", "healthy" and "very healthy", and "0" for other conditions). In addition, two village- or community-level variables are added to the model to investigate the impact of local economic development on residents' property income: the economic condition of the community ("1" represents the poorest condition and "7" the richest) and the per capita income level of the community. As no community survey was conducted in 2012 and 2016, the values for those years were cautiously imputed from the 2010 and 2014 data, respectively; given the stability of communities' economic situations in recent years, this should not introduce obvious bias. Finally, this study focuses on the impact of fiscal expenditure and financial development on household property income. Therefore, the proportion of fiscal expenditure in GDP, the index of financial development scale and the index of financial development efficiency were included in the model, and provincial macro data were matched to each family according to its province code. For most families, investments related to real estate, property transactions and financial assets do not occur across provinces; thus, the degree of financial development at the provincial level can better reflect the market conditions for family property transactions and affect the level and distribution of residents' property income. As some values are missing for several variables, only families with complete information were retained. The sample sizes in 2010, 2012, 2014 and 2016 are therefore 4988, 4610, 4216 and 6104, respectively, giving 19,918 observations over the four years; descriptive statistics for all variables are shown in Table 2. Table 3 reports the estimated results of the property income determining equation. Firstly, the CFPS samples for 2010, 2012, 2014 and 2016 were used to estimate the income determining equation for each year separately. Then, the four years were pooled and estimated as a single sample. In the year-by-year regressions, province dummy variables were introduced in each model to control for heterogeneity; in the pooled regression, dummy variables for both provinces and periods were controlled. The estimated results show the following. Decomposition Results and Analysis (1) Factors related to the head of household and family. The householder's age is positively related to family property income. Although the result is not significant in 2012, the results for the other samples are significant at the 10% level. This can be explained by the fact that property income depends on property value: after a long period of accumulation, the property value of the elderly will be higher than that of the young, so the elderly obtain higher property income. The gender of the householder has no significant effect on household property income except in 2016. The influence of the householder's residence is significant only in 2012, but it is significant at the 10% level in the pooled sample, indicating that the property income of households living in urban areas is slightly higher than that of households living in rural areas.
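A minimal, hypothetical sketch of the household-level data preparation described above, assuming individual-level CFPS records in a DataFrame with columns such as `family_id`, `income` and the head-of-household characteristics; the column names are illustrative assumptions rather than CFPS codebook names. It designates the highest-income member as the household head and merges his or her characteristics onto the family-level file.

```python
import pandas as pd

def build_household_file(members: pd.DataFrame, families: pd.DataFrame) -> pd.DataFrame:
    """Treat the highest-income member of each family as the household head
    and attach the head's characteristics to the family-level record."""
    head_idx = members.groupby("family_id")["income"].idxmax()
    heads = members.loc[head_idx, ["family_id", "age", "gender", "urban",
                                   "education", "occupation", "healthy"]]

    merged = families.merge(heads, on="family_id", how="inner")
    # Keep only families with complete information, as in the paper's cleaning step.
    return merged.dropna()

# Hypothetical usage:
# panel_2010 = build_household_file(cfps_members_2010, cfps_families_2010)
```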
The estimated coefficient of householder's education level is positive in all samples, but it is statistically significant only in 2010 and for the whole sample. It means that this variable can be an important factor in household property income. Although the estimation coefficient of the householder's occupation is positive in all samples, it is not statistically significant except in 2014 and for the whole sample. It suggests that the influence of householder's occupation on household property income is uncertain. The health condition of householder and family size has no significant effect on household property income. In general, wages and operating income are relevant to health conditions and family size. However, property income is different from wage and operating income and it mainly depends on long-term accumulation of property value. Therefore, it is reasonable that the health condition of householder and family size are not relevant to property income. (2) Community factors. The economic condition of the community has no significant effect on household property income. However, per capita income level of the community has a positive relationship with household property income. It shows that residents in a well-developed community tend to own properties with higher value, and therefore the sales and rents of those properties will be higher. Moreover, the degree of economic development should be determined not only by the appearance of the community but also by the level of residents' income. (3) Provincial macro factors. The influence of the proportion of regional fiscal expenditure in GDP on household property income is relatively complex. In the sample of 2010, this variable is negatively related to the property income at a significant level of 1%. In the samples of 2012 and 2014, the variables are positively related to the property income at a significant level of both 1% and 5%. In the sample of 2016, the coefficient of variables is positive but not statistically significant. Due to the inter-annual heterogeneity, the effect of financial expenditure on household property income in the whole sample is not statistically significant. Overall, the government's allocation uses more economic resources, which may affect residents' savings, wealth level and household property income. The index of financial development scale can positively promote the growth of residents' property income in all samples at a significant level of 1%. And the index of financial development efficiency has a negative impact on residents' property income at a significant level of 1% except for 2014. In general, property transactions and financial investments develop better in regions with a higher degree of financial development, and residents from those areas tend to gain higher property income. Besides, the ability of the regional financial sector in converting savings into investment would be higher if the region's financial market is regulated and the greater efficiency of capital flows represents the higher efficiency of financial development. On the contrary, speculation will appear in the financial market of regions that lack effective supervision, and it will damage the profitability of financial assets for residents. Table 4 reports the breakdown results of property income inequality in the sample years. 
On the whole, the characteristics of the head of household (age, gender, residence, education level, occupation, health condition and family size) and community economic conditions (the community's economic condition and residents' per capita income) contribute little to property income inequality. This paper focuses primarily on the proportion of fiscal expenditure, the index of financial development scale and the index of financial development efficiency. Among these three core variables, the contribution of the proportion of regional fiscal expenditure to property income inequality is 9.34% in 2010, changing to −17.31%, −19.23% and −7.38% in 2012, 2014 and 2016, respectively. As the variable's contribution to property income inequality is negative, the proportion of fiscal expenditure can help reduce the inequality of property income distribution. In the pooled sample, a high share of fiscal expenditure also reduces the inequality of property income distribution, but the ratio is only −4.94%. Combined with the estimates from the determining equation, it can be concluded that governments can participate in resource allocation and intervene in the market and private sector through fiscal expenditure. This does little to help residents gain higher property income, but it can reduce the inequality of property income distribution by restraining the accumulation of the rich. The redistribution effect is nevertheless relatively weak and remains to be improved. (Note: "*", "**" and "***" denote significance at the 10%, 5% and 1% levels, respectively.) From the perspective of financial development, the two indicators that measure the level of regional financial development both contribute positively to property income inequality. The contributions of the two variables differ across years; however, when the pooled sample is considered, a larger scale of regional financial development leads to greater inequality in the distribution of residents' property income. The contribution rate of the financial development scale reaches 20.81% in the pooled sample, while the yearly contribution rates are 11.93%, 5.99%, 6.97% and 13.79%, respectively. The efficiency of financial development also damages the equality of property income distribution, with a contribution rate of 7.62%; specifically, its contributions to property income inequality in 2010, 2012, 2014 and 2016 are 7.9%, 15.56%, 11.69% and 3.8%, respectively. It can be concluded that financial development provides a well-regulated market for residents' property transactions and leasing and ensures that property is traded at a more reasonable price. However, families with higher property values benefit from property transactions and earn more property income, while low-income families with little property cannot benefit from the improvement and development of financial markets. As a consequence, financial development widens the property income gap between the rich and the poor. Table 5 reports the estimated results of the property income determining equation by region. In the eastern region, the proportion of fiscal expenditure is significantly and positively related to property income at the 1% level, with an estimated coefficient as high as 32.326. Government intervention has a positive effect in economically developed areas.
It effectively promotes the trade and leasing of property and becomes an important factor in increasing residents' property income. In the central and western regions, fiscal expenditure also contributes significantly to residents' property income, but the estimated coefficients are only 5.798 and 9.672, much weaker than the effect in the eastern region. (Excerpt from Table 5, the proportion of regional fiscal expenditure: eastern 32.236 (5.29) ***, central 5.798 (2.35) **, western 9.672 (9.14) ***. Note: "*", "**" and "***" denote significance at the 10%, 5% and 1% levels, respectively.) Similarly, the scale of financial development contributes to the improvement of residents' property income in the eastern region, but the estimated coefficient falls to 2.621, while the influence of financial development efficiency is not significant and shows a weakly negative relationship. This suggests that in developed regions the financial market is relatively standardized and stable, and market competition is relatively strong: property transactions, leasing and financial investment are routine activities for residents and do not greatly increase their income. By contrast, in the central region, the scale of financial development is statistically significant at the 1% level with an estimated coefficient of 3.146, and the efficiency of financial development is also significant at the 1% level with an estimated coefficient of −1.151. The results show that, in the central area, a high value of the financial development scale index can improve residents' property income, while the opposite holds for the financial development efficiency index. In the western area, both indicators measuring the level of financial development are significantly negatively correlated with property income. One reasonable explanation is that financial development cannot benefit residents' property income in an irregular financial market. Finally, Table 6 reports the decomposition results for property income inequality in the eastern, central and western regions. In the eastern region, the contribution of regional fiscal expenditure to property income inequality is −26.87%: governments can significantly reduce the inequality of property income distribution through fiscal intervention. In the central region, fiscal expenditure still plays a role in reducing the inequality of property income distribution, though its effect is only −4.66%. In the western region, fiscal expenditure deteriorates property income inequality, resulting in a substantial increase of 18.49%. In terms of the financial development indices, financial development worsens the distribution of property income in all three regions, but in the eastern region the index of financial development scale increases property income inequality by only 1.77% and the index of financial development efficiency leads to an increase of less than 8%. In the central region, the index of financial development scale significantly damages the equality of property income, with a contribution of up to 28.24%. Compared with the scale index, the index of financial development efficiency only slightly increases the inequality of property income, with a contribution of 3.88%.
In the western region, the contributions of financial development scale and financial development efficiency to property income inequality are relatively high, which leads to further deterioration of the property income distribution. Conclusions Using the log-linear income determining equation, this paper draws on CFPS data from 2010, 2012, 2014 and 2016 to estimate the coefficients of the property income determining equation, and measures the contributions of the influencing factors using the regression-based inequality decomposition method. The empirical analysis shows the following: (1) The scale of fiscal expenditure is a "double-edged sword" for residents' property income. Effective fiscal expenditure can promote the growth of residents' property income, but the redistribution effect of fiscal expenditure on property income is relatively weak. Overall, a crowding-out effect appears in government allocation, as it occupies economic resources from residents and thus affects their wealth and property income. The regional differences are pronounced. In the eastern region, government intervention has a notable positive incentive effect in the market: it effectively promotes the trade and leasing of property and becomes an important factor in increasing residents' property income. The scale of fiscal expenditure significantly promotes property income at the 1% level, with an estimated coefficient as high as 32.236. In the central and western regions, the effect of government fiscal expenditure is significantly weakened, with an average estimated coefficient of about 7.7 (5.798 in the central area and 9.672 in the western area, both significant at the 5% level). Regarding the redistribution effect, the research shows that an appropriate size of fiscal expenditure can reduce the inequality of residents' property income. The government's participation in resource allocation and market intervention through fiscal expenditure does not help residents gain higher property income, but it can reduce the savings and accumulation of rich groups and thereby improve the equality of property income distribution. As for the relative scale of fiscal expenditure, its contribution to property income inequality is as large as −26.87% in the eastern region, but this rate falls to −4.66% in the central region. In the western area, fiscal expenditure even worsens property income inequality, increasing inequality by 18.49%. Comparing the governance performance of the eastern, central and western regions, the importance of the government's fiscal management is obvious, and the government still has considerable room to increase residents' property income. (2) The development of finance and the economy is complementary. Financial development improves residents' property income, but at the same time it aggravates the inequality of residents' property income. On the one hand, except in the western region, the analysis of the financial development scale index suggests that financial development promotes residents' property income growth at the 1% significance level. The development of finance helps provide a standard market for residents' property transactions and leasing, encouraging more and more residents to engage in financial transactions. Meanwhile, financial development constantly enriches financial products and promotes greater property income for more and more households.
On the other hand, except in 2014, the financial development efficiency index also has a negative effect on property income at a statistically significant level of 1%. In general, a great financial development scale leads to inequality of property income distribution and the contribution is as high as 20.81%. This number indicates that residents with none or a small amount of property cannot benefit from financial development and the gap between families with property and without property will expand accordingly. High efficiency of financial development results in deterioration of property income distribution and its contribution ratio of unequal distribution is 7.62%. (3) Financial development is also related to the efficiency of local government. In financial markets with strong supervision, financial development can improve residents' property income and restrain the inequality of property income, and vice versa. In an economically developed area, financial development contributes to the growth of residents' property income, and the financial development scale index increases property income inequality by 1.76% and the financial development efficiency index leads to a nearly 8% increase in property income inequality. In economically developed areas, the financial market is well-organized with various kinds of financial products and strong market competition, and therefore the growth of residents' property income is relatively stable. In the central part of China where the economic development is relatively backward, although financial development can increase residents' property income, it leads to the deterioration of unequal property income and the contribution is up to 28.24%. In the western region, financial development negatively affects residents' property income and its contribution to unequal property income distribution is about 10%. Policy Recommendations Despite the current percentage of property income in total income is small, residents' property income has grown rapidly in recent years. Financial development contributes to the rapid growth of property income but damages the equality of property income distribution. In addition, the redistribution effect of fiscal expenditure on residents' property income is limited, and therefore fiscal and financial policy fails to achieve the original purpose of "create conditions to let more people own property income". According to the findings above, recommendations of policy are listed as follows. (1) It is advisable to allocate government expenditure rationally and spare no effort to build an effective government. The results of this paper show that an appropriate scale of fiscal expenditure has an inhibitory effect on the inequality of property income in economically developed regions while the situation is contrary to less developed regions. Under the condition of the socialist market economy with Chinese characteristics, the budget and final account system in China are different from western countries. With the maturity of the market economy, expanding fiscal expenditure is inevitable. In order to keep the balance of the budget system, both "increasing income" and "reducing spending" is necessary. In terms of "increasing income", the government primarily needs to eliminate the factors that hinder the development of productive forces through streamline administration and institute decentralization. Besides, innovation is recommended to improve the enterprises' efficiency and then promote the growth of GDP. 
Moreover, the government needs to accelerate the reform of the fiscal and taxation system. In the context of "Supply-side Structural Reforms", the situation in which the tax revenue growth rate exceeds the GDP growth rate is unlikely to continue. Thus, fiscal and tax reform should focus on the reform of the tax system, especially the tax structure. Specifically, the government should promote the implementation of inheritance tax and real estate tax, and reduce the proportion of indirect tax in order to increase the proportion of direct tax. "Reducing spending" means making the best use of every penny. That is to say, the government should strengthen the constraints on public fiscal expenditure and further improve the "Budget Law" and its supporting implementation rules to meet the requirements of "establishing a comprehensive, standardized and binding budget system and fully implementing performance management". (2) It is suggested to improve the rule of law, adjust financial support policies and promote market equity. To realize the redistribution of national income, the government uses different fiscal policies to ensure fund flows to financial institutions without changing the ownership of capital. Proactive fiscal policies can promote national economic development and contribute to the rapid growth of finance. However, this study shows that the current financial policy not only promotes residents' property income but also worsens inequality, and the deterioration is more obvious in developed regions than in less developed regions. Therefore, it is necessary to promote the legislation, standardization and routinization of market regulation to build a fair market.
The Impact of Corruption on Economic Growth and Cultural Values in Nigeria: A Need for Value Re-orientation Introduction Corruption is a global problem, and no country in the world is totally free of its menacing grip [1]. It has been seen as a structural problem of political, economic, cultural and individual malaise [2]. Although corruption remains a global challenge to the quest for development and welfare, it is a recurring theme in the African discourse. It has affected many countries all over the world, especially developing countries [3]. The Nigerian scenario and experience provide a useful global illustration of the nuances surrounding corruption and how it interfaces with the state and the struggle for development and value re-orientation. Nigeria is the most populous country in Africa and a very important oil producer. However, it has been struggling to decrease unemployment, income inequality and its dependence on oil [4]. Therefore, the axiom that Nigeria is richly endowed by providence with human and material resources critical for national development and advancement carries little weight, because it is widely accepted that the misappropriation of public funds and assets by corrupt elites has been a major cause of Nigeria's underdevelopment [5]. Nigeria scores poorly on Transparency International's Corruption Perception Index: it gained two points in 2014 compared with 2013, receiving a score of 27 on a scale from 0 (most corrupt) to 100 (least corrupt). The country was therefore ranked the 38th most corrupt country in the world (that is, 136th out of the 175 countries assessed) [6]. Nigeria is often classified as a neo-patrimonial, prebendal state [7,8], and these characteristics have serious implications for the social mechanisms enabling corruption in the country [9]. Patrimonialism is defined as a social and political order in which patrons secure the loyalty and support of clients by granting benefits from their own or state resources, while neo-patrimonialism gives rise to a 'hybrid' state that often fails to guarantee the universal and fair distribution of public resources [10]. Corruption in Nigeria manifests itself in different ways, at both the micro and macro level, and it occurs at all levels of society [11]. According to the report by Amundsen [12], the types of corruption in Nigeria include rent-seeking, embezzlement, conflict of interest, bribes and kickbacks, nepotism and cronyism, corruption in the provision of services, political patronage and electoral corruption, among others.
The 1999 constitution of the Federal Republic of Nigeria provides the motto of the country, which is Unity and Faith, Peace and Progress. This reflects the fact that every society needs to define its values and engage in activities that will sustain that set of values [13]. However, there has been a great deal of indiscipline in every facet of life in Nigeria, including lack of integrity, corruption, the get-rich-quick syndrome and the pursuit of easy money that has reduced the dignity of labour, religious intolerance, and lack of respect for the country's institutions and national symbols. This has necessitated a great need for value re-orientation. As quoted by Okoroafor and Njoku [14], value re-orientation is aimed at inculcating good values that can help Nigeria out of her numerous predicaments and refocus the nation toward greatness. The Nigerian government has therefore made several efforts to orientate Nigerians to imbibe and instill the culture of virtue and to shun immoral acts. The government has adopted various strategies to curb corruption in the country: for instance, the introduction of the War Against Indiscipline (WAI) by Buhari to change the immoral attitude of Nigerians for the better, the introduction of the Economic and Financial Crimes Commission (EFCC) to check corruption in the country, and other agencies such as the Independent Corrupt Practices and Other Related Offences Commission (ICPC) to ensure ethical and moral values by restoring the good moral values inherent in the traditional society. But Odey and Ashipun [15] noted that most of these policies made by the Nigerian government are still altered by the custodians of power and authority in the state. In the same vein, Ughorojeh [16] lamented that while all successive governments have taken time and care to identify and condemn the evil of corruption plaguing the Nigerian economy, not much effort has been made to combat it. Similarly, Onoge [17] noted that corruption has persisted in the country despite efforts to root it out, observing that its rate and scale increased enormously in the oil boom days. Recently, the Nigerian government has also set up a strategy to fight corruption under the leadership of Buhari, with stringent penalties put in place for offenders. According to Odey and Ashipu, ethics is intrinsically related to morality, and it is also related to religion, which is a product of people's culture. Thus, the intensity of government efforts to instill discipline and eradicate corruption in Nigeria, in order to transform and re-orientate the country's cultural values, has informed this study.
In the similar vein, several studies have also shown the negative effect of corruption on economic growth in Nigeria [18][19][20][21][22][23].Thus, the effect of corruption on economic growth in Nigeria cannot be overemphasized.Also, Guru and Abdul noted that corruption has a significant negative effect on economic growth and development.Adewale [24] posits that although corruption is a universal phenomenon, its magnitude and effects are more severe and deepseated in Nigeria.Tolu and Ogunro argued that the futile attempt by the government to fight the cankerworm stems from the fact that the government itself is greatly infected with the virus and an average Nigeria is seen as corrupt in most part of the world.It appears that corruption has become deep-rooted in Nigeria as a result of the fact that, people from other countries now see it as part of the tradition of the Nigerian society.Very little or no study has been done in the area of evaluating the impact of corruption on economic growth and cultural values in Nigeria.It is against this background that this study intends to fill the gap by addressing the relationship. Conceptual Clarification Corruption Corruption Perception Index (CPI) was adopted or used as a measure for the level of corruption at national level in 2000.It ranks countries/territories based on how corrupt a country's public sector is perceived to be.Scores range from 0 (highly corrupt) to 100 (very clean).This implies that low scores indicate high level of corruption and high scores mean low level of corruption.In Nigeria, Umar, Samsudin, Mohamed [25] argued that textual evidence reveals the apparent successes in the investigation, prosecution and conviction of corrupt offences in Nigeria but, the context in which the agency exists remains its major obstacle, especially the legal system, government commitment and management issues. Different arguments have been put forward to explain the pervasiveness of corruption in Africa; these include poverty, the personalization of public office, the political culture and the inability of leaders to overcome their colonial mentality in respect of their perception of public office [26].Rotimi, Obasaju, Lawal and Iseolorunkanmi presents kpakpin corruption model comprising trio (Pressure, Opportunity and Action).According to them, the nexus within the trio is the channel through which fraud or corruption practices manifests and that for any form of corruption or corrupt practice to manifest, the trio channel must come to being and be realized (Figure 1). Control of corruption Control of corruption reflects perceptions of the extent to which public power is exercised for private gain.This includes both petty and grand forms of corruption, as well as "capture" of the state by elites and private interests.It is one of the six dimensions of the Worldwide Governance Indicators [27]. 
Economic growth Ajayi [28] perceived economic growth as the increase overtime, of a country's real output of goods and services.Todaro and Smith [29] viewed economic growth as an expansion of the system in one or more dimensions without a change in its structure.According to Ijirshar [30], economic growth is an increase in the capacity of an economy to produce goods and services, compared from one period of time to another which can be measured in nominal terms (including inflation) or in real terms (adjusted for inflation).In other words, economic growth can be defined as the increase in the monetary or market value of goods and services produced by an economy over time. Value re-orientation Value re-orientation is the ability to bring back, the good values of old, back into existence.It can also be defined as the efforts made towards re-enacting the good values and the ability to inculcate these values on the individuals or members of a society.It is conscious development of human resources through ideological appeals, planning, training, productivity and efficiency in achievements through corporate culture [31].These can be done through formal and informal approaches.The formal approaches involve the use of school subjects in educating learners on civic matters in the state.This is also stipulated in the National Policy on Education (NPE) which is meant for training of citizens and the inculcation of civic values at the different levels of schools setting.The informal approaches involve folklores, ridicule, proverbs, praises and corrections, among others.Olisa [32] noted the important role of religious groups in rescuing the decaying standards in the country. Corruption and value re-orientation in Nigeria Corruption constitutes a cankerworm that has eaten deep into the very fabric of the Nigeria's social system.It has assumed a monumental height as the nation is ranked as one of the most corrupt nations in the world.These corrupt practices stem from the various callous, greedy, self-motivated and self-seeking attitudes of our leaders who are only interested in serving their pockets rather than serving the people [33].Thus, the negative perception of Nigeria persists in spite of the several emphases on anti-corruption and integrity promotion policies and strategies by successive governments.It has therefore deteriorated the cherished and acceptable standards and cultural values in the state.The policies and programmes such as, Federal Character, National Youth Service Corps (NYSC), Unity secondary schools, National sports, National symbols, Festivals, National ethical re-orientation, War Against Indiscipline (WAI), Directorate of Social Administration, Self Reliance, Economic Recovery and Social Justice (MAMSER), National Economic Empowerment and Development Strategy (NEEDS), Youth Enterprise with Innovation in Nigeria and N-Power Programme.The numerous efforts by the government towards value orientation and national integration however, failed partially or totally.This is partly as a result of corruption as noted by Njoku that the malady of corruption has polluted the character and personality of every Nigerian, doubt why, seemingly responsible Nigerians within the corridor of powers gather around themselves sycophants and praise singers.According to him, everybody has become a suspect of misplaced values. 
These necessitate the need for value re-orientation in Nigeria which can only be effectively attained when corruption is reduced to bearest minimum.As lamented by Soyinka [34] that some of the value orientation programmes are often characterized by political propaganda, victimization and coercion.Therefore, the role or influence of government through establishing and financing schools for moral education, and religious groups through their teachings, sanctions, and admonitions cannot be doubted.According to Darting and Steinberg [35], failed moral training of children gave birth to corruption in our society.Thus, prevailing high level of corruption in the country calls for stringent or war against it. Implications of corruption on economic growth According to Ibraheem, Umar and Ajoke, corruption has various implications for both the developed and developing economies.It hampers development and thus raises the level of poverty in any economy that finds itself entrenched in corrupt practices.It therefore contributes to uncertainty and risk in the growth process and development potentials of any economy.This is because; high level of morals and discipline is a sine-qua-non for the overall development of the country.Thus, a corrupt nation needs to employ strict anticorruption codes as stipulated in the legislations that created anticorruption agency without prejudice or double standards irrespective of the culprit's stature or position in the society. According to Rotimi, Obasaju, Lawal and Iseolorunkanmi, corruption and economic growth have been inversely relating with each other, causing undue arousal or doom among the people.It impedes growth and also erodes the already established economic value systems in Nigeria.It is therefore not an understatement as concluded by Achebe [36] that corruption has permeated the African society and anyone who can say that corruption in Africa has not yet become alarming is either a fool, a crook or else does not live in this continent.Adewale in examining the crowding-out effects of corruption in Nigeria noted that corruption retards economic growth in the country.Similarly, Fabayo, Posu, and Obisanya revealed that high level of corruption leads to low investment and economic growth in Nigeria.The folds of corruption such as; bribery, fraudulent acts, embezzlement of funds and property (public and/or private), ball of stiffing and election rigging, money laundering, examination malpractices in public and private schools are some of the corrupt practices perpetrated in Nigeria which have contributed to the decaying cultural values in the country.These have further caused social odds in the state among which are; lack of public infrastructures for easy economic and business activities, increased level of poverty in the state despite the enormous natural and human resources, less respect for fundamental human rights, and so on.Corruption has therefore retarded the efforts of both the private and the government to improve the well-being of the people and the whole economy.It is harmful and unhealthy to the whole economic system which leads to misallocation of resources, inefficiency and decay of cherished and acceptable standards of behaviors (values) and cultural values. 
Value orientation in Nigeria and its problems Value orientation is not a recent phenomenon in the development of Nigeria. It started before westernization, when traditional education in the country was concerned with teaching and training children for social responsibility and political participation. According to Falade and Falade [37], the main focus of socialization in traditional African society is character training, and all the agents and processes of socialization aim at producing individuals who are truthful, hospitable, respectful, honest, skillful, obedient and patriotic. Falade [38] noted that there are many indigenous values among the Yorubas that promote social integration and enhance the building of civil society (that is, the supreme importance of a man's character and the judgment of God). They adopt strategies such as ridicule, instruction, discipline, proverbs, clubs, folklore, praise and correction in order to inculcate good character and traits in the young. At the national level, the approaches adopted to inculcate values in citizens can be categorized into formal and informal approaches. The integration of civic education with other subjects such as social studies, religious education and citizenship helps to provide training and the inculcation of civic values, as specified by the Nigerian National Policy on Education (NPE). Section 2(f) of the NPE states that the purpose of pre-primary education should be to develop a sense of cooperation and team spirit, while section 3(c) states that the goal of primary education is to give citizenship education as a basis for effective participation in and contribution to the life of the society or nation [39]. Civic education and other related subjects are aimed at enabling learners to acquire the skills, knowledge, values and attitudes that would make them responsible citizens, and at creating adequate and fundamental political literacy among Nigerians. They noted some informal approaches to value orientation in the country, viz: the Jaji declaration by Major General Olusegun Obasanjo in 1977; National ethical re-orientation by the Alhaji Shehu Shagari administration in 1982; War Against Indiscipline by the Buhari/Idiagbon administration in 1984; the Directorate of Social Mobilization, Self-Reliance, Economic Recovery and Social Justice (MAMSER) introduced by the Babangida administration; the National Economic Empowerment and Development Strategy (NEEDS) introduced by Olusegun Obasanjo's administration in 2004; and the Fight Against Corruption introduced recently by Buhari's administration in 2016. According to Falade and Falade, the numerous efforts by the government towards value orientation have not yielded much result in Nigeria. Similarly, Ajere and Oyinloye [40] pointed out that Nigeria is heading for a state of anomie, considering all the forms of dysfunctional behavior patterns among youths and adults. The former noted some reasons why value orientation programmes in Nigeria have partially or totally failed. Worth mentioning are: i. The wrong value system in Nigerian society, as most Nigerians pursue wealth and material things without giving due attention to national values; Ugwuegbu [41] argued that value orientation programmes in Nigeria tend to emphasize negative rather than positive values. ii. The abandonment of the fundamental role of socialization and child training by families in pursuit of social, political and economic gains. iii. The undue emphasis on intellectual ability and certificates, as embedded in the examination structure (cognitive based), at the expense of skills and values. iv. Corrupt practices, ethnic and religious bias, intolerance and bribery evident among Nigerian leaders. Other problems such as terrorism, armed robbery and ethnic militias, which have emanated from unemployment, poverty and marginalization, also contribute to the decaying standard of values in the state.
Methodology The study used secondary data covering 1999 to 2015. This period was selected to cover only the democratic era. Data were sourced from several issues of the Central Bank of Nigeria (CBN) Statistical Bulletin, World Bank reports and Transparency International. The study used both descriptive tools (trend analyses) and econometric tools. The econometric techniques are as follows. The Augmented Dickey-Fuller (ADF) test was used to ascertain the stationarity properties of the time series; the ADF test regression was specified in its standard form as

Δy_t = α + βt + γy_{t−1} + Σ_{i=1}^{p} δ_i Δy_{t−i} + ε_t,

where the null hypothesis of a unit root is γ = 0. Due to the small sample, the study also used the Ng and Perron (2001) tests, which construct four test statistics (MZ_α, MZ_t, MSB and MP_T) based upon the GLS-detrended series y_t^d. An Auto-Regressive Distributed Lag (ARDL) model was used, given the stationarity of the variables incorporated in the model, to test for a long-run relationship among the variables and to determine the long-run coefficients. The speed of adjustment was also estimated in an Ordinary Least Squares framework. Model specification The model for the study is specified as

Log(RGDP) = f(CPI, RCR, CR, CC),

where Log(RGDP) is the logarithm of Real Gross Domestic Product as a proxy for economic growth, CPI is the Corruption Perception Index, RCR is the Relative Corruption Rank, CR is the Corruption Rank and CC is the Control of Corruption. The stochastic form is stated as

Log(RGDP)_t = β_0 + β_1 CPI_t + β_2 RCR_t + β_3 CR_t + β_4 CC_t + U_t,

where U_t is the random or stochastic error term. Data Analysis The trend of corruption in Nigeria It is no exaggeration, given the tragic events in the country since independence, to say that all efforts to establish a just and efficient administration have been frustrated by corruption [42]. The corruption perception index was 0.16 (16%) in 1990. The index increased to 0.19 (19%) in 2005, 0.24 (24%) in 2010 and 0.26 (26%) in 2015. In terms of relative corruption rank, Nigeria was ranked 98th in 1999, 152nd in 2005, 134th in 2010 and 136th out of 176 countries among the least corrupt countries, while in terms of the most corrupt economies, Nigeria was ranked 2nd in 1999, 6th in 2005, 37th in 2010 and the 33rd most corrupt country in 2015 [43][44][45][46]. This implies that Nigeria has been recording a relatively high level of corruption and drifting towards the most corrupt countries, as shown by the rank among most corrupt countries. The trends of the corruption perception index, the relative corruption rank and the corruption rank among most corrupt countries are presented in Figure 2, while the control of corruption is presented in Figure 3; Figure 3 shows that the control of corruption decreased during the study period. The impact of corruption on economic growth in Nigeria The result of the ARDL model reveals that the best lag for optimal performance of the model is lag one. The residual tests for the selected lag show the absence of serial correlation and heteroskedasticity in the model [46][47][48][49], implying that the residuals were multivariate normal and stable.
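A minimal sketch of the unit-root step described above, using statsmodels' `adfuller` with constant and trend; the series names and the data source are hypothetical placeholders, and lag selection here uses AIC, which may differ from the criterion the authors applied.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series: pd.Series, name: str) -> None:
    """Augmented Dickey-Fuller test with constant and linear trend ('ct')."""
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(
        series.dropna(), regression="ct", autolag="AIC"
    )
    print(f"{name}: ADF stat = {stat:.3f}, p-value = {pvalue:.3f}, lags = {usedlag}")
    print(f"  5% critical value = {crit['5%']:.3f}")

# Hypothetical usage on the study's variables (log real GDP and corruption indices):
# for col in ["log_rgdp", "cpi", "rcr", "cr", "cc"]:
#     adf_report(data[col], col)
```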
The result of the ARDL bounds test shows that there is a long-run relationship among the variables, since the F-statistic of 9.58891 is greater than the lower-bound (I0) critical value of 2.56 and the upper-bound (I1) critical value of 3.49 at the 5% level of significance. The long-run estimates are presented in the equation below: RGDP = 7.96 − 0.015 RCR + 0.09 CPI + 0.03 CR + 0.32 CC, with standard errors of (8.6), (0.004), (0.208), (0.055) and (1.151), respectively. The result shows that the Relative Corruption Rank (RCR) has a significant negative influence on economic growth in Nigeria. The other corruption indices, namely the corruption perception index and the corruption rank, which are presented in inverse form, had a positive impact on the growth of the Nigerian economy, as did the control of corruption. This indicates that a higher level of corruption in the country retards or impairs economic growth. The short-run estimates revealed a negative but insignificant speed of adjustment, implying that initial deviations in RGDP (induced by the corruption indicators) do not adjust significantly to the long-run equilibrium in Nigeria at the 5% level; it nevertheless converges to the long-run equilibrium by 0.05% yearly. Conclusion and Recommendations The study concludes that the long reign of corruption in the country has impacted negatively on economic growth in Nigeria. It has also decayed and deteriorated the country's cultural values. The negative impact of corruption on economic growth and the decaying standard of Nigerian cultural values have necessitated value re-orientation in order to bring redemption to the country's national character and image. The study therefore recommends the following: i. The Nigerian government should advance the use of anti- ii. There should be a re-orientation process in the education system in Nigeria that would lead to the redemption, retrieval and restoration of the country's national character and image. The schooling process instills standard and acceptable morals in the youth; therefore, re-orientation of the education process itself would ensure character development and transformation, skill acquisition and even entrepreneurship along with job creation. iii. Government should adequately fund education to maintain and rehabilitate physical facilities, instructional resources and living conditions in public schools, as well as libraries, classrooms and laboratories. This is also applicable to private and corporate organizations through the procurement of imported technical and scientific equipment, books, instructional materials and journals in the educational sector. iv. Religious groups should nurture the human soul and make the human person genuinely rich, since religion is a system of belief that exerts a strong influence on the daily lives, cultural values and attitudinal re-orientation of its members. This is because religious values have never changed and reflect the law of nature; among these values are integrity, truth, honesty, patience, trustworthiness, faithfulness, love and kindness, obedience, as well as humanity. v. Parents should endeavour to fulfill their parental roles and model the goals, values and manners that would positively influence their children's moral and social behaviour. This can be done through teaching and training of their children, adequate monitoring and guidance of their behavioural patterns at home, and developing in them self-control in the absence of external authority.
Note to Figures 2 and 3: CPI = Corruption Perception Index; RCR = Relative Corruption Rank; CR = Corruption Rank of Most Corrupt Countries.
Navigating the Multimodal Landscape: A Review on Integration of Text and Image Data in Machine Learning Architectures : Images and text have become essential parts of the multimodal machine learning (MMML) framework in today’s world because data are always available, and technological breakthroughs bring disparate forms together, and while text adds semantic richness and narrative to images, images capture visual subtleties and emotions. Together, these two media improve knowledge beyond what would be possible with just one revolutionary application. This paper investigates feature extraction and advancement from text and image data using pre-trained models in MMML. It offers a thorough analysis of fusion architectures, outlining text and image data integration and evaluating their overall advantages and effects. Furthermore, it draws attention to the shortcomings and difficulties that MMML currently faces and guides areas that need more research and development. We have gathered 341 research articles from five digital library databases to accomplish this. Following a thorough assessment procedure, we have 88 research papers that enable us to evaluate MMML in detail. Our findings demonstrate that pre-trained models, such as BERT for text and ResNet for images, are predominantly employed for feature extraction due to their robust performance in diverse applications. Fusion techniques, ranging from simple concatenation to advanced attention mechanisms, are extensively adopted to enhance the representation of multimodal data. Despite these advancements, MMML models face significant challenges, including handling noisy data, optimizing dataset size, and ensuring robustness against adversarial attacks. Our findings highlight the necessity for further research to address these challenges, particularly in developing methods to improve the robustness of MMML models. Introduction The rapid advancement in digital technologies has precipitated an unprecedented increase in data across a multitude of fields, heralding a significant transformation in our comprehension of intricate systems [1,2].This surge in data spans several modalities, encompassing visual elements in photographs, the semantic aspects of text, and auditory signals, thus offering a holistic view of the environment [3,4].This complex environment has paved the way for the emergence of multimodal machine learning (MMML), which seeks to forge computational models that can assimilate information from varied modalities, thereby enhancing prediction accuracy and the efficacy of decision-making processes [2,5]. The rationale behind integrating multiple modalities stems from the inherent shortcomings of relying solely on single-mode data.Despite their detailed visual content, images may miss the contextual richness achievable through text [6].Conversely, text, while semantically dense, often falls short of conveying the entirety of visual or auditory experiences [7].The amalgamation of these modalities fosters the creation of models that are both intricate and nuanced, mirroring the perceptual abilities of humans [8,9]. 
The introduction of deep learning frameworks has notably advanced MMML's potential, facilitating the intricate extraction and integration of features from diverse data streams [10,11].Nonetheless, the task of crafting effective multimodal frameworks is fraught with challenges, including reducing overfitting, managing data disparities, and filtering out data noise [12,13].Successful frameworks are those that deftly maintain the distinct characteristics of each modality while capitalizing on the synergies between them to enhance overall model performance [14,15]. In this era marked by the omnipresence of data and the melding of technologies, the modalities of text and imagery stand at the forefront of the MMML field.Images capture visual intricacies and convey emotional subtleties, whereas text offers semantic depth and narrative coherence [16,17].Integrating these modalities unveils insights that surpass their parts, transforming a variety of application areas [18,19].This study makes the following contributions: • Exploration of how MMML leverages pre-trained models to extract features from both textual and visual data, highlighting methods that enhance data representation.• A comprehensive review of fusion techniques, detailing approaches for integrating text and image data, along with an analysis of their benefits and impacts.• Discussion of the limitations and challenges encountered in MMML. • Examination of the resilience of MMML models against noisy and adversarial data to determine their adaptability and practicality in real-world scenarios. The structure of the remainder of this paper is as follows: Section 2 outlines the research methodology employed.Subsequent sections delve into the research questions more thoroughly. Methodology This Section 2 delineates the comprehensive approach adopted to scrutinize various facets of multimodal machine learning (MMML).The process initiates with the formulation of precise research questions, proceeds with detailed search strategies, and culminates in the systematic extraction and assimilation of data, incorporating a stringent quality evaluation.This scoping review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) , ensuring a rigorous and transparent approach; see Figure 1 for additional details. 
Research Questions This section introduces a structured approach to navigating the intricacies of MMML. It begins with carefully crafted research questions that guide our investigation into the nuanced aspects of MMML. These questions focus on key areas such as the application of pre-trained models for feature extraction, the diversity and effectiveness of fusion methodologies, the challenges inherent to these architectures, and the resilience of MMML models in the face of noisy or adversarial data. Through a detailed examination, we formulated the following research queries: Searching Methodology To address our research inquiries, we conducted a comprehensive search across multiple digital libraries to identify pertinent scholarly articles, assembling an extensive corpus of relevant literature. The digital libraries utilized for this search included the following: In our pursuit of relevant academic materials, we employed a wide array of keywords, including multimodality, deep learning, machine learning, neural network, image, and text. This selection of keywords was crafted to encompass all topics pertinent to our study, and these keywords served as the foundation for our search queries within the aforementioned databases. The search strategies we implemented are given below:
• Scopus
- Query executed: (ABS(machine AND learning) AND TITLE (multimodal) AND ABS(image) AND ABS(text) AND (TITLE-ABS(deep AND learning) OR TITLE-ABS(neural AND network))).
- Filter criteria: no filters were applied.
• SpringerLink
- Query executed: where the title contains multimodal; query: text AND image AND ("deep learning" OR "machine learning" OR "neural network"); sort by relevance.
- Filter criteria: the top 32 most pertinent entries were selected.
From SpringerLink, we chose 10% of the articles from the search result for initial screening, because the query returned nearly 300 articles, many of which seemed irrelevant to our study. Similarly, for Semantic Scholar, we took the first 1% of articles for initial screening, as we obtained thousands of papers from the search query. We did not use such filters for initial screening for the other databases because the articles returned by their search queries appeared relevant to our research goals. A time-frame filter was added to the ACM and Semantic Scholar databases to retrieve the most recent and relevant research articles from the past five years, because these databases were returning older publications that were less relevant to the current research objectives. We designed this selection strategy to capture a representative and high-quality sample of the current research landscape in our field of study. In our initial search, we obtained 341 research articles. After removing duplicates, 335 articles remained for screening. During the abstract and title screening phase, we excluded 30 articles, leaving 330 papers for full-text screening. However, due to library access limitations, we could not access the full texts of 15 papers, reducing the count to 290 papers for eligibility assessment. By applying our exclusion criteria to these 290 papers, we ultimately identified 88 relevant papers that addressed our research questions and were included in our review study.
Selection Criteria
Following the retrieval of research papers from the databases using our search queries, we established inclusion and exclusion criteria to refine our selection. The inclusion criteria were designed to incorporate research publications discussing multimodal machine learning (MMML) models applied across different settings, particularly those involving image and text data. Conversely, we excluded research papers that did not pertain to MMML or that dealt with modalities beyond image and text, ensuring our focus remained tightly aligned with the core objectives of our study. Following the execution of our search strategies as described above, we initially identified 341 research papers; by applying our predetermined inclusion and exclusion criteria to this collection, we refined the selection down to 88 papers that directly contributed to addressing the research questions at the heart of our study. During the finalization of our paper, we came across several studies from the latter part of 2023 that delve into the latest developments in multimodal models, including a survey by Guo et al. [20] covering recent innovations in MMML models. Finding these contributions highly pertinent, we incorporated ten more papers into our corpus. Table 1 illustrates the distribution of papers across each database, both before and after applying our selection criteria, providing a clear overview of our research process and the basis of our literature review.

Data Extraction and Synthesis
Following a methodical procedure, we extracted the information crucial for answering our research questions. We meticulously scanned every article to collect information relevant to RQ1, RQ2, RQ3, and RQ4, encoding details about pre-trained deep learning architectures, fusion techniques, their performance and limitations, and the datasets used in those applications. To obtain answers to the research questions, we looked into different sections of the articles; Table 2 lists the relevant sections for each research question.

RQ 1.1: Which Pre-Trained Models Are Predominantly Employed for Processing and Learning Image and Text Data?
In addressing this research question, our objective was to delve into the types of architectures utilized for multimodal machine learning (MMML) models, specifically focusing on training models for both text and image data. An exhaustive review of the finalized selection of papers showed that MMML models frequently employ well-established, pre-trained architectures to train on image and text data. This approach underscores the reliance on proven neural network architectures pre-trained on extensive datasets, facilitating the effective learning and integration of multimodal data. This research question is designed to guide researchers in identifying which architectures are most effective for developing MMML models that process text and image data. By determining the preferred pre-trained architectures within the field, this inquiry helps identify the foundational structures that have demonstrated success in integrating and analyzing multimodal data.
Text Feature Extractor
In our exploration of pre-trained architectures for text data within MMML models, we discovered that Bidirectional Encoder Representations from Transformers (BERT) is the predominant choice. As evidenced in Table 3, BERT stands out as the most frequently utilized model for training text data. It operates by randomly masking word tokens and representing each masked token with a vector, thereby capturing the semantic and contextual essence of the input text. This capability makes BERT highly effective in a variety of applications, such as detecting fake news, identifying rumors, recognizing sarcasm, locating trending places from social media posts, combating online antisemitism, predicting the helpfulness of reviews, and analyzing online tourism reviews, as referenced in various studies [21][22][23][24][25][26][27][28]. Although BERT is heavily used, other architectures such as RoBERTa, a modification of BERT developed by Facebook, have also been applied to detection tasks [26,29]. Following BERT, Long Short-Term Memory (LSTM) networks are another commonly used architecture for MMML models, particularly beneficial in applications such as sentiment analysis, creating visual logs, multimodal retrieval, and polarity detection [30][31][32][33].

While BERT and LSTM dominate the landscape for text data processing in MMML models, other architectures also contribute, though to a lesser extent. Although not as popular, these models play a significant role in the diverse applications of MMML. A summary of the neural network architectures deployed for extracting text features across various studies is presented in Table 3, highlighting the versatility and range of tools available to researchers in the field.

BERT has emerged as a fundamental framework for Natural Language Processing (NLP) tasks, particularly notable for its depth of text representation and interpretation within multimodal contexts. Its applications extend to review helpfulness prediction, where Xiao et al. [27] employed BERT to transform texts into sequential embeddings, with each row vector denoting a word, thereby enhancing the accuracy of review helpfulness predictions. Moreover, Gao et al. [23] utilized BERT's WordPiece subword tokenization algorithm to create a word dictionary, optimizing word segmentation by selecting the most likely merges. Agarwal [39] applied the WordPiece tokenizer for processing clinical data with BERT, demonstrating its versatility across various datasets. Li [28] introduced a novel attention mechanism through BERT to better connect review comments, thereby improving the relevance and interpretability of the textual analysis. Sahoo et al. [43] highlighted BERT's ability to handle long sentences without fixed input-size constraints, making it an ideal choice for extensive text feature extraction. Furthermore, Xu et al. [45] utilized BERT's multi-head attention mechanism to explore deep semantic relationships within sentences, showcasing the model's advanced analytical capabilities. The adoption of BERT for text embedding by Lucas et al. [25], Yu et al. [44], Ban et al. [41], and Liang et al. [42] further validates its effectiveness in extracting meaningful text features; a minimal usage sketch is given below.
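The sketch below illustrates how a pre-trained BERT encoder is commonly used to turn raw text into the kind of feature vectors discussed above. It assumes the Hugging Face transformers library and the public bert-base-uncased checkpoint, and it is a generic illustration rather than the exact pipeline of any cited study, which differ in their pooling and fine-tuning choices.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pre-trained BERT encoder and its WordPiece tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

texts = ["a short review text", "another post paired with an image"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**batch)

# Per-token contextual embeddings of shape (batch, seq_len, 768) ...
token_features = outputs.last_hidden_state
# ... reduced to one sentence-level vector per text, here by mean-pooling
# over the non-padding tokens (one of several common pooling choices).
mask = batch["attention_mask"].unsqueeze(-1)
sentence_features = (token_features * mask).sum(1) / mask.sum(1)
print(sentence_features.shape)   # torch.Size([2, 768])
```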
On the other hand, Long Short-Term Memory (LSTM) networks, designed to overcome the vanishing gradient problem of traditional Recurrent Neural Networks (RNNs), have also been widely used for text feature extraction. The applications of LSTM range from extracting text features from visual logs by Chen et al. [31] and optimizing pre-trained word-embedding matrices for advanced feature generation by Yadav and Vishwakarma [30], to encoding texts into feature vectors by Alsan et al. [32]. Ange et al. [33] employed LSTM to account for various emotional states, sentiments, and prior opinions in polarity detection tasks, illustrating its capacity to process complex sequential data and its importance in sentiment analysis. These instances underscore the critical role that BERT and LSTM play in enhancing MMML models through sophisticated mechanisms for deep semantic analysis and feature extraction from text data, thereby boosting model performance across various applications.

Bi-LSTM is an extended version of LSTM that processes long texts in both forward and backward directions. To extract text information from CVs, Peña et al. [51] used a Bi-LSTM consisting of 32 units with a hyperbolic tangent activation function. Hossain et al. [54] applied Bi-LSTM to produce contextual text representations of the input data from both forward and backward directions. Ghosal et al. [52] fed documents to a Bi-LSTM and then to a Multi-Layer Perceptron (MLP-1) for text feature extraction. For emotion recognition from the F1 dataset, Miao et al. [53] first used GloVe to obtain embeddings for the tokenized texts and then passed the word embeddings to a Bi-LSTM.

Text-CNN is another architecture used for text representation. For sentiment analysis, Xu and Mao [61] used a Text-CNN with a 1D convolutional layer of 128 kernels, each of size five, and a 1D max-pooling layer of size three. Xu et al. [45] and Wang et al. [60] also used Text-CNN to extract text features for false/fake news detection (a compact sketch of such an encoder is given below). Babu et al. [58] used a type of RNN, the Gated Recurrent Unit (GRU), to generate image descriptions: image parameters are passed to the GRU, which processes them and generates a sequence of words describing the image. For text representation and to understand the characteristics of hashtags, Ha et al. [56] applied TF-IDF, as it can capture the importance of hashtags based on their occurrences. Yu et al. [65] used Doc2Vec, which extends Word2Vec, for text feature extraction. In contrast to Word2Vec, Doc2Vec turns the complete document into a fixed-length vector while also considering the document's word order; in their paper, Doc2Vec created 300-D features for each document.
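As a complementary illustration of the lighter-weight encoders mentioned above, the following is a minimal PyTorch sketch of a Text-CNN along the lines described for Xu and Mao [61]: a 1D convolution with 128 kernels of width five followed by 1D max-pooling of size three. The vocabulary size, embedding dimension, and output dimension are hypothetical, and this is not a reproduction of any cited implementation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Minimal Text-CNN text encoder (hyperparameters partly hypothetical)."""

    def __init__(self, vocab_size=30_000, embed_dim=300, num_kernels=128,
                 kernel_size=5, pool_size=3, out_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # 128 one-dimensional kernels, each spanning five consecutive tokens.
        self.conv = nn.Conv1d(embed_dim, num_kernels, kernel_size)
        self.pool = nn.MaxPool1d(pool_size)
        self.proj = nn.Linear(num_kernels, out_dim)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))                     # (batch, 128, seq_len - 4)
        x = self.pool(x)                                 # local max-pooling of size 3
        x = x.max(dim=-1).values                         # global max over positions
        return self.proj(x)                              # (batch, out_dim) text feature

features = TextCNN()(torch.randint(0, 30_000, (4, 50)))
print(features.shape)   # torch.Size([4, 128])
```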
Lu et al. [70] introduced the VilBERT model, or Vision-and-Language BERT, intended to develop task-agnostic combined representations of natural language and image content. VilBERT uses the BERT architecture for text, which consists of several layers of transformer encoders; these encoders are used for tokenization and embedding. Learning Cross-Modality Encoder Representations from Transformers (LXMERT) was designed by Tan and Bansal [71] for tasks like image captioning and visual question answering. LXMERT employs a transformer model for the text modality, similar to BERT; it uses feed-forward neural networks and multiple layers of self-attention to process the input text, and as a result, LXMERT can capture the complex contextual relationships present in the text. Huang et al. [72] introduced a multimodal transformer called PixelBERT; the authors used BERT for text encoding, splitting the sentences into words and using WordPiece to tokenize them. In Flamingo, Alayrac et al. [73] used another transformer-based model, the Generative Pre-training Transformer (GPT). Multimodal Embeddings for Text and Image Representations (METER) is a multimodal model developed by Meta AI [46]; it is used for multimodal classification tasks and image-text matching, and the authors used BERT, RoBERTa, and ALBERT to obtain text encodings in this model.

Image Feature Extractor
Just as with text, there are specific neural network architectures designed for extracting features from and training on images. Convolutional Neural Networks (CNNs) play a pivotal role in computer vision and image analysis tasks. In Table 4, we provide an overview of the neural network architectures employed in MMML models for image feature extraction, as referenced in various studies. According to Table 4, VGG-16 emerges as the most utilized architecture for image-related tasks. Architectures such as VGG, ResNet, AlexNet, InceptionV3, DenseNet, and SqueezeNet represent the suite of CNN models employed for deep learning tasks in imaging. VGG-16, specifically, is characterized by its 13 convolutional layers and three fully connected layers, with dropout layers following each fully connected layer except the last to mitigate overfitting [65]. This configuration yields 4096-D features from each image. For image-based sentiment analysis, Shirzad et al. [64] utilized a pre-trained VGG-16 model, initially trained on the ImageNet dataset, then fine-tuned and retrained on a Twitter dataset. Huang et al. [40] engaged VGG-16 for training on the MINT dataset containing microscopic images. Kim et al. [68] adapted a pre-trained VGG-16 model, modifying the last layer with a sigmoid activation function. Babu et al. [58] integrated two pre-trained models, VGG-16 and Xception (both originally trained on the ImageNet dataset), for image feature extraction, where VGG-16 comprises 16 weight layers and Xception comprises 71 layers.

ResNet-50 is another widely adopted CNN architecture. For instance, Hossain et al. [54] employed a pre-trained ResNet-50 with modifications for disaster identification, removing the top two layers and retraining the last ten layers with new weights while freezing the first 40 layers. Rivas et al. [57] utilized a ResNet version with 152 layers, extracting 2048-D features from each image. ResNet-18 has also been used in multimodal applications; Hangloo and Arora [22] utilized ResNet-18 to extract visual information capable of identifying 1000 different object categories.

Beyond CNNs, Faster R-CNN has been employed for image feature extraction, with Guo et al. [36] using it to identify and extract features from objects within images. Additionally, transformers, typically known for sequence processing, have been adapted for image encoding. Paraskevopoulos et al. [59] divided images into 16x16 pixel patches for processing with a visual transformer, and Huang et al. [72] used ResNet within a multimodal transformer for image encoding.
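A common recipe for obtaining the 2048-D image descriptors mentioned above is to take a pre-trained ResNet-50 and drop its classification head. The sketch below assumes torchvision (0.13 or later for the weights argument) and illustrates the general approach rather than the exact pipeline of any surveyed paper.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pre-trained ResNet-50; dropping the final fully connected layer leaves a
# global-average-pooled 2048-D feature vector per image.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

# In practice, PIL images would be passed through the standard ImageNet
# preprocessing; a random tensor stands in for a preprocessed batch here.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

images = torch.rand(4, 3, 224, 224)
with torch.no_grad():
    image_features = feature_extractor(images).flatten(1)   # (4, 2048)
print(image_features.shape)
```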
VilBERT uses a modified Faster R-CNN model, a deep neural network designed for object detection, for the image modality [70]. The transformer-based architecture, similar to that used for the text, is fed with the visual attributes this network collects from the images; this enables the model to process the visual elements using self-attention, similar to how it processes textual data. Tan and Bansal [71] proposed a vision-language model, LXMERT, in which the authors did not use any CNN architecture for feature extraction; instead, they applied an object detection method and considered the features of the detected objects, which are represented by their bounding box positions and 2048-dimensional Region of Interest (RoI) features. Microsoft researchers developed Vision and Language (VinVL) and used an object detection model to obtain visual features; the authors extract region-based features from images using R-CNN [77]. Jia et al. [78] introduced Large-Scale Image and Noisy-Text (ALIGN), in which they used EfficientNet, a variant of the CNN architecture, for image encoding. Contrastive Language-Image Pre-training (CLIP) was first introduced by Radford et al. [79] to relate a wide range of visual and textual concepts; for image encoding, they used a visual transformer. Similarly, Alayrac et al. [73] applied a visual transformer to obtain image features in their model Flamingo. A visual transformer is also used in METER [46].

Description of Language and Image Architectures
Based on the previous discussion, we found that the most commonly used architecture for extracting text features is BERT. Earlier language models used for natural language processing tasks were unidirectional, so predictions only considered the previous tokens they had seen, which poses a problem for tasks that need bidirectional context understanding. BERT is a pre-trained deep bidirectional model that uses a masked language model and a "next sentence prediction" task to jointly pre-train representations for text pairs [80]. BERT's model architecture is almost identical to the transformer described by Vaswani et al.
[1], a multilayer bidirectional transformer encoder. In the multilayer encoder, BERT uses multi-head self-attention. An attention function maps a query and a set of key-value pairs to an output computed as a weighted sum of the values. With multi-head attention, the model can concurrently process data from various representation subspaces at multiple positions, as follows:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O,$$

where $\mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V)$ and

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,$$

and the projections are the following parameter matrices: $W_i^Q \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{\mathrm{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\mathrm{model}} \times d_v}$, and $W^O \in \mathbb{R}^{h d_v \times d_{\mathrm{model}}}$. Here, Q is the query matrix, and K and V are the matrices for keys and values [1].

The pre-training of BERT combines two tasks: masked language modeling (MLM) and next sentence prediction (NSP). In the MLM task, 15% of the input tokens are masked at random, and these masked tokens are then predicted using a cross-entropy loss. To reduce the mismatch between pre-training and fine-tuning, a replacement strategy is used in which the selected tokens are sometimes kept unchanged or replaced with random tokens rather than always being replaced with the [MASK] token. In pre-training, a binarized next-sentence prediction task is included to improve the model's comprehension of the relationship between sentences: for two sentences A and B, there is a 50% chance that B is the sentence that actually follows A (labeled "IsNext") and a 50% chance that B is a random sentence from the corpus (labeled "NotNext"). NSP benefits tasks like Question Answering (QA) and Natural Language Inference (NLI). In the fine-tuning stage, BERT is adapted to a particular task by training on a smaller, task-specific dataset and updating the parameters of the pre-trained model. The self-attention mechanism of BERT's architecture, in particular, makes it adaptable to various tasks, from text classification to question answering, which makes this process efficient. In this stage, BERT is fed task-specific input data and produces the corresponding outputs.

We also discussed various techniques for extracting image features, among which variants of the Residual Network (ResNet) architecture are the most widely used. ResNet architectures are preferable to others because their performance does not degrade as the number of layers increases, and they are computationally efficient. This is achieved because, when more layers are added to the network, the added layers can act as an identity mapping while the remaining layers duplicate the shallower model; in this way, training accuracy does not decrease when more layers are added. He et al. [81] first introduced residual learning. In their paper, they defined the residual block as

$$y = F(x, \{W_i\}) + x,$$

where x is the input to the layers, y is the output, and F is the residual mapping to be learned. He et al. [81] originally considered H(x) as an underlying mapping to be fit by a few stacked layers, where x denotes the input to the first of these layers. Instead of letting the stacked layers approximate H(x) directly, the authors let them approximate the residual function F(x) := H(x) - x, so that the original mapping becomes F(x) + x. It is possible to realize F(x) + x with feedforward neural networks using what are known as "shortcut connections", by which one or more layers are skipped; their outputs are added to the outputs of the stacked layers, effectively carrying the original input (identity mapping) through the shortcut. Notably, these identity shortcut connections increase neither the number of parameters nor the computational complexity.
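For readers less familiar with these two building blocks, the following sketch shows the scaled dot-product attention behind BERT's multi-head mechanism and the residual mapping y = F(x) + x behind ResNet. It is a from-scratch illustration of the published equations [1,81], not code from any surveyed implementation, and the dimensions are illustrative.

```python
import math
import torch
import torch.nn as nn

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=768, num_heads=12):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_k = num_heads, d_model // num_heads
        # Learned projections W^Q, W^K, W^V and the output projection W^O.
        self.w_q, self.w_k, self.w_v, self.w_o = (
            nn.Linear(d_model, d_model) for _ in range(4))

    def forward(self, x):                                    # (batch, seq, d_model)
        b, n, _ = x.shape
        split = lambda t: t.view(b, n, self.h, self.d_k).transpose(1, 2)
        heads = scaled_dot_product_attention(
            split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x)))
        concat = heads.transpose(1, 2).reshape(b, n, self.h * self.d_k)
        return self.w_o(concat)                              # MultiHead(Q, K, V)

class ResidualBlock(nn.Module):
    """y = F(x) + x, with F a small stack of layers and an identity shortcut."""
    def __init__(self, dim=768):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.F(x) + x    # the shortcut adds no extra parameters

x = torch.rand(2, 16, 768)
print(MultiHeadAttention()(x).shape, ResidualBlock()(x).shape)
```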
RQ 1.2: Which Datasets Are Commonly Utilized for Benchmarking These Models?
To address this research question, we meticulously reviewed the selected articles to identify the datasets employed in multimodal applications. Through this review, we uncovered several common data sources that researchers frequently utilize to compile study datasets. These include social media platforms such as Twitter and Flickr, which offer rich textual and visual data. Additionally, we identified widely recognized datasets such as IMDB, known for its extensive collection of movie reviews and metadata, and COCO, a benchmark dataset in the field of computer vision for object detection, segmentation, and captioning tasks. This exploration highlights the diverse range of datasets that underpin research in multimodal machine learning, reflecting the broad applicability of MMML models across various domains and data types.

In our comprehensive review of the datasets encountered within the selected articles, we compiled the findings into Figure 2, showcasing the diversity and frequency of dataset usage in multimodal machine learning research. Notably, Twitter datasets, comprising tweets and images, were utilized by several researchers, including [26,29,57,60,64]; each study selected a distinct Twitter dataset tailored to its specific research tasks.

Figure 2 highlights that the Flickr30k dataset is the most frequently used among the datasets we reviewed. An extension of this, the Flickr30k Entities dataset, was employed by Yu et al. [44], encompassing 31,783 images with 44,518 object categories and 158k captions, providing a rich resource for training and testing multimodal machine learning models. Another pivotal dataset in the field is MSCOCO, utilized by Alsan et al. [32] for multimodal data retrieval. The MSCOCO dataset, renowned for its comprehensive pairing of images and text, includes 80 object categories across 330k images, each accompanied by five descriptions, offering an extensive basis for training dual-encoder deep neural networks [58]. This assortment of datasets underscores the vast potential and applicability of MMML models across various contexts and data types, highlighting the significance of dataset selection in developing and evaluating these models.

After summarizing the datasets used in the articles, we analyzed their performance in different applications; see Table 5. For the performance analysis, we gathered articles that reported the F1 score as the evaluation metric, because the F1 score is one of the best measures for datasets with imbalanced class distributions. From Table 5, we see that the work by Liang et al. [42] on the MM-IMDB dataset gave the highest F1 score for multimodal image-text classification.

RQ2: Which Fusion Techniques Are Prevalently Adopted in MMML?
Reviewing the literature, we identified various fusion techniques employed in multimodal machine learning (MMML) models. These techniques, pivotal for integrating textual and visual data, are classified based on their structural and methodological approaches into several categories:
• Concatenation Technique: This method involves the straightforward combination of textual and visual vectors to create a unified representation, facilitating the simultaneous processing of both data types. For instance, Palani et al.
[21] concatenated text and image feature vectors to generate multimodal feature vectors, thereby harnessing the strengths of both textual and visual information; the authors performed the concatenation by averaging the vector values at each vector position. Similarly, Paraskevopoulos et al. [59] applied the concatenation technique to merge the text and visual encoders, assembling them into a classifier model to enhance the model's interpretative power.
• Attention Technique: This approach utilizes the attention mechanism to focus on specific parts of the text and image features, enhancing the model's ability to discern relevant information from both modalities for improved decision-making. Ghosal et al. [52] utilized an attention mechanism as a fusion technique for detecting the appropriateness of scholarly submissions, acknowledging that not all modalities are equally important: by introducing an attention layer and computing attention scores, the model could prioritize the modalities with higher relevance. Similarly, Zhang et al. [35] employed a multi-head attention mechanism for the joint representation of image and text features, calculating attention scores to weight the importance of images for source words. Xu et al. [45] further explored this technique by using the attention mechanism to discern relationships between words in a sentence and corresponding image regions, thereby ensuring a meaningful association between text and image features.
• Weight-based Technique: This category includes Early Fusion, Late Fusion, and Intermediate Fusion, each applying a different weighting strategy to the integration process and allowing for a nuanced amalgamation of modalities at various stages of the model's architecture. Hossain et al. [54] utilized Early Fusion for disaster identification by merging image and text features, ensuring equal representation from each modality by taking the same number of nodes from the last hidden layer of each modality. This technique was also applied by Hangloo and Arora [22] for detecting fake news in social media posts. Late Fusion, on the other hand, is applied after feature computation, as seen in the work of Thuseethan et al. [69] for sentiment analysis, where it directly integrates features computed for attention-heavy words and salient image regions, showcasing the versatility of weight-based fusion in constructing multimodal frameworks.
• Deep Learning Architectures: The development and application of diverse deep learning models have significantly advanced multimodal feature representation. These architectures facilitate enhanced fusion and interpretation of information across different data modalities. A notable example is the use of Bi-LSTM by Asgari-Chenaghlu et al. [34] for integrating image and text features, showcasing the model's ability to handle sequential data effectively. Additionally, Yue et al. [24] employed a knowledge-based network, ConceptNet, to fuse data; this approach computes pointwise mutual information for matrix entries, further refined by smoothing with the contextual distribution, illustrating an innovative way of integrating multimodal data.
A summary of these techniques is given in Table 6, and a minimal sketch of the concatenation and attention fusion strategies is given below.
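The sketch below contrasts the two most common strategies above: simple concatenation of the text and image vectors versus an attention layer that learns how much weight to give each modality. It is a schematic illustration under assumed feature dimensions, not the implementation of any specific surveyed model.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Concatenation fusion: stack the two modality vectors and classify."""
    def __init__(self, text_dim=768, image_dim=2048, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(text_dim + image_dim, num_classes)

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)

class AttentionFusion(nn.Module):
    """Attention fusion: learn per-sample weights over the two modalities."""
    def __init__(self, text_dim=768, image_dim=2048, hidden=512, num_classes=2):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.score = nn.Linear(hidden, 1)          # relevance score per modality
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, text_feat, image_feat):
        modalities = torch.stack([self.text_proj(text_feat),
                                  self.image_proj(image_feat)], dim=1)  # (batch, 2, hidden)
        weights = torch.softmax(self.score(torch.tanh(modalities)), dim=1)
        fused = (weights * modalities).sum(dim=1)   # attention-weighted sum
        return self.classifier(fused)

text_feat, image_feat = torch.rand(4, 768), torch.rand(4, 2048)
print(ConcatFusion()(text_feat, image_feat).shape,
      AttentionFusion()(text_feat, image_feat).shape)
```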
RQ3: What Limitations or Obstacles Are Encountered When Using These Architectures?
The exploration of MMML has unveiled significant advancements in efficient architectures and fusion methods. However, challenges and limitations have emerged alongside these developments, highlighting the complexities of integrating various data modalities. After thoroughly reviewing the research papers, we observed that most did not extensively discuss the limitations or obstacles encountered when working with multimodal machine learning models. Nevertheless, through a rigorous examination of the articles, we identified and categorized the limitations that other researchers have encountered. In this section, we investigate the limitations and challenges encountered when utilizing MMML architectures, categorizing the common issues observed in MMML models:
• Dataset Size: One of the primary challenges in MMML models is determining the optimal size of datasets, as these models require large datasets due to the integration of data from multiple modalities. Data preprocessing for such vast amounts of data is costly and computationally intensive [9]. Furthermore, the disparity in size and complexity between image and text datasets complicates their simultaneous training [82].
• Data Annotation: Most publicly available datasets for text and images are tailored to specific tasks, necessitating the creation of custom datasets for new applications. This process involves data annotation, which, on a large scale, is often not readily accessible [83].
• Noisy Data: The presence of noisy data within multimodal contexts can lead to misclassification [26]. The accuracy of outcomes diminishes if one of the modalities contains noisy data, underscoring the importance of data quality in MMML models.
• Task-Specific Image Feature Extractor: The effectiveness of MMML models can be limited by task-specific image feature extractors. Challenges in extracting relevant features due to the inappropriateness of the method for specific tasks highlight the need for task-aligned model selection [28,84].

RQ4: In What Ways Can MMML Models Be Made Robust against Noise and Adversarial Data?
Label noise and data sample noise are two types of noise that can degrade data quality: label noise refers to faults or undesirable variations in the data labels, while data sample noise refers to errors or changes in the actual data samples. Deep learning methods, particularly those based on adversarial and generative networks, have shown promise in enhancing data quality for machine learning tasks by effectively managing both label noise and data sample noise. Label noise in datasets arises from various factors, including human mistakes, annotator inexperience, difficult annotation tasks, low-quality data, subjective classifications, reliance on metadata, and cost-cutting strategies in annotation processes. Label noise is a prevalent problem in real-world applications: in contrast to the ideal circumstances frequently assumed when building models, label noise is common. It can result in unfavorable effects, including degraded performance of machine learning applications, an increased demand for training data, and possible class imbalance. Domain knowledge can be a powerful tool to reduce label noise; for instance, ontology-based methods enhance classification tasks by using hierarchical relationships between data classes.
To address this research question, we examined the articles' methodology and discussion sections, seeking information about adversarial attacks, noisy data, and adversarial robustness. We aimed to identify any discussions or analyses of these topics that could impact the performance and reliability of multimodal machine learning models. From our review, we first found that, by encoding relationships between labels using a graph network, the Multi-Task Graph Convolution Network (MT-GCN) model can use both well-labeled and noisily labeled data. The Auxiliary Classifier GAN (AC-GAN), Conditional GAN (cGAN), Label-Noise-Robust GAN (rGAN), and other extensions of Generative Adversarial Networks (GANs) offer additional techniques for handling label noise [83].

Pre-trained Vision and Language (VL) models have proven more resilient than task-specific models. By introducing noise into the embedding space of VL models, the Multimodal Adversarial Noise GeneratOr (MANGO) technique has been proposed to further improve this robustness [85]. The purpose of MANGO is to evaluate and enhance VL models with respect to four kinds of robustness challenges: alterations in the distribution of answers across nine distinct datasets, logical reasoning, linguistic variations, and visual content manipulation. In contrast to techniques that apply predictable local perturbations, MANGO uses a neural network to produce noise to which the model cannot readily adapt. This method is supplemented by masking portions of images and removing text tokens to further diversify the input and influence the data distribution. Training models with MANGO has been found to enhance performance on benchmarks.

Discussion
Based on a thorough literature review, we have concluded that BERT, LSTM, and their variants are the most popular language models among researchers in multimodal machine learning, while architectures such as ResNet and VGG, which are variants of CNNs, are commonly used for image-processing tasks. Through our investigation of the fusion techniques commonly employed in MMML, we have examined various methods designed to merge data from multiple modalities. Our exploration covers a spectrum from weight-based methods like Early Fusion, through concatenation and attention mechanisms, to cutting-edge multimodal deep learning architectures. These techniques aim to generate insightful conclusions and representations from the complex interplay between text and visual data. By showcasing numerous methods that address the complex requirements of multimodal data analysis, this review emphasizes the dynamic character of MMML. The choice of fusion technique is dictated by the specific needs of the task, with each method offering distinct benefits and applications.

Investigating the limitations and challenges within MMML architectures offers valuable insights into the complexities of employing multiple data modalities. It becomes evident that addressing these critical issues is paramount for overcoming the obstacles inherent in MMML designs. Enhancing data annotation resources, adapting models to specific tasks, devising strategies for noise reduction, and improving data preprocessing techniques are crucial steps for the further development of MMML.
From our search queries and subsequent snowballing, we found very few papers that discussed noise and adversarial attacks in multimodal machine learning models. In MMML, the study of robustness and adversarial attacks is still in its early phase, with little research on these complex problems. The potential impact of adversarial robustness may be particularly substantial yet understudied, given the inherent intricacy of MMML models, which integrate and correlate information from a variety of input types, including text, images, and audio. Research on adversarial attacks against MMML systems is critically needed, as evidenced by the scarcity of work in this area. This gap offers a chance to conduct new research that creates novel protection mechanisms while looking further into the subtleties of adversarial threats in multimodal settings. Expanding research efforts to strengthen MMML models against adversarial attacks is essential as these models become more complex, to ensure their dependability and credibility in practical applications. Developments in this area may result in multimodal systems that are more resilient and can withstand a broader range of adversarial strategies.

Conclusions
Our scoping literature review has systematically identified prevalent methods for leveraging data from image and text modalities. From our investigation of RQ1, we found that BERT and LSTM stand out as the leading pre-trained architectures for text embedding, whereas various VGG and ResNet architectures are predominantly utilized for image embedding. Our study further reveals that MMML practitioners frequently employ benchmark datasets such as Twitter, Flickr, and COCO to train and evaluate their models. These datasets offer rich, diverse, and multimodal data sources, enhancing and expanding the capabilities of MMML models.

As we delve into fusion methods, it is evident that the MMML community employs a broad spectrum of techniques, ranging from concatenation to attention mechanisms and advanced neural networks. Each method brings distinct advantages, reflecting the dynamic nature of multimodal fusion. However, our exploration of MMML's limitations and challenges uncovered several critical issues, including computational complexity, data constraints, real-time processing challenges, noise resilience, and the need for larger datasets. Awareness of these limitations is crucial for researchers and practitioners engaged in MMML.

This literature review sheds light on the architectural preferences, dataset selections, and flexible fusion strategies embraced by the MMML community. By addressing the inherent limitations and challenges of MMML, this study serves as a valuable guide, steering scholars and practitioners toward informed decisions and innovative solutions as MMML continues to evolve and expand its reach into various domains. As the exploration of multimodal data deepens, there is a profound opportunity to enhance our understanding of the world through integrated data modalities. This endeavor holds the potential to revolutionize industries, improve decision-making processes, and enrich our perspective on the world. In our future work, we aim to investigate the behavior of MMML models under adversarial conditions. Analyzing how these models respond to adversarial attacks will offer vital insights into their security and robustness, uncovering strategies to shield them from malicious interference.
Table 1. Papers from each database before and after selection criteria.
Table 2. Data extraction for research questions from different sections.
Table 3. Architectures used to train text features in MMML.
Table 4. Architectures used to train image features in MMML.
Table 5. Performance metrics across different datasets.
Table 6. Fusion technique categories used in articles.
8,757
2024-07-09T00:00:00.000
[ "Computer Science" ]
Nanoscale zero-field electron spin resonance spectroscopy
Electron spin resonance (ESR) spectroscopy has broad applications in physics, chemistry, and biology. As a complementary tool, zero-field ESR (ZF-ESR) spectroscopy has been proposed for decades and has shown its own benefits for investigating the electron fine and hyperfine interaction. However, the ZF-ESR method has been rarely used due to the low sensitivity and the requirement of much larger samples than conventional ESR. In this work, we present a method for deploying ZF-ESR spectroscopy at the nanoscale by using a highly sensitive quantum sensor, the nitrogen vacancy center in diamond. We also measure the nanoscale ZF-ESR spectrum of a few P1 centers in diamond, and show that the hyperfine coupling constant can be directly extracted from the spectrum. This method opens the door to practical applications of ZF-ESR spectroscopy, such as investigation of the structure and polarity information in spin-modified organic and biological systems.

[Reviewer #1, continued:] ...you mention a few lines below.
8. It would be helpful to quote the calculation results below Eq. (S7) that you claim are consistent with experiment. They are given in the caption of Fig. S3, but not in the text.
9. Why are you testing for dependence of the 14-N transition frequencies on asymmetry of the quadrupole coupling? Is point symmetry at the N atom known for the P1 center? If there exists a rotation axis with multiplicity of at least three, the hyperfine and quadrupole tensor must both have axial symmetry. If not, I would be surprised if the hyperfine tensor would have exact axial symmetry.
10. Adding page numbers to the SI would be helpful.
11. The following typos and grammar mistakes should be corrected, as they may pose problems for a language editor:
- eigenvalues of a parameter tensor are 'principal values', not 'principle values'
- p. 3: 'phase difference before continuous driving', not 'phase difference before the continues driving'
- p. 3: 'consistent with the hyperfine coupling', not 'consisting with the hyperfine coupling'
- first paragraph of SI: 'induced by the surface dangling bonds as discussed in the main text', not 'induced by the surface dangling as discuss in the main text'

Reviewer #2 (Remarks to the Author):
The authors report on a somewhat overlooked territory in the range of nanoscale ESR techniques. They present in a very simple form how one can vary Rabi driving power to excite ESR transitions in target spins at zero magnetic field. They show that this form of zero-field ESR allows for an almost unhindered determination of the hyperfine tensor components of P1 centers in diamond as compared to the double-electron-electron-resonance form of nanoscale ESR, a field to which they also contributed greatly in sensing target spins with the nitrogen-vacancy center in diamond. While the spectrum shown and the associated analysis are convincing, I find the manuscript lacking in several points, one of which is in fact whether this is indeed nanoscale ESR at all. Since the authors used an ensemble of NVs, detecting an ensemble of P1 centers respectively, and since no other information is given, I can only surmise that the signal arising from all NVs in the confocal spot (one micron cubed) was taken into consideration. From the low contrast of the resulting data, I also believe that this kind of measurement would not be easy to reproduce on the single spin level (NV and target spin). Therefore, the word 'nanoscale' here is somewhat problematic.
I am positive that with different improvements to the experimental setup, the microwave power stability could be mitigated, but as long as that is the case, I fail to see the impact such a measurement could yield compared to the traditional ESR (induction-based) methods. If this were nanoscale ESR, then the fourth figure captures the essence of this work in a striking way, specifically figure 4c and 4d. In general, I think more focus should be given to the impact of this method, namely page 4 and figure 4.

Regarding the pulse sequence: the authors' description of it is confusing, to such extent that I am not sure someone else in the community would be able to straight-forwardly reproduce it (not impossible, but just not trivial). I believe more effort should be made in clarifying the experimental details with emphasis on the pulse scheme.

To summarize: While the manuscript definitely shows great potential, I am not completely convinced that it would have a great impact in the field. If the authors address all the points mentioned in this review and if the problematic issues (nanoscale volume, universality, analytical model) are dealt with, it would indeed be possible to recommend it for publication in Nature Communications.

Specifically, below please find my comments for the manuscript:
=============================================================
01. The following was not stated clearly in the manuscript: Does one necessarily need hyperfine interaction to a nuclear spin (and a strong one at that) for this method to work? If so, then this ZF-ESR is not universal for all paramagnetic spins.
02. If all spins give the same signal, is there a spatial feature? What is the sensing volume? What is the no. of NVs in said sensing volume? What is the no. of P1 centers in said sensing volume? How does it compare with standard (traditional) ESR?
03. "Methods/State evolution" is not clear. Too simplistic in my humble opinion. Not only that, but I could not really understand from that section what exactly the pulse sequence used to probe the hyperfine interaction is (I could from the main text, but the "state evolution" section only confused me).
04. This method is limited to this 400 MHz in a sense. What happens when it is a spin 7/2 with 1 GHz splitting?
05. Have the authors really shown that these are only a few P1 centers? Can they simulate or give an analytical expression showing the difference between this so-called nanoscale sensing of few P1 centers and the sensing of many such defects?
06. The authors write that the diamond is adhered on a CPW. From the figure (1a,1b) it looks like the opposite. What is the source of this diamond? What is its thickness? Actually, figure 1b is very misleading, since from it the reader might think that the laser is focused on a single NV center, and also that the laser focal volume does not include the P1 centers. If the authors are exciting the NVs and reading them from the backside, I can only guess it is a rather thin (100 um or less) diamond. This should be clearly stated in the text (methods).
07. There is no mention in the text nor in the caption of figure 1 what exactly this light-blue layer on top of the diamond is. The authors do mention that different colors of arrows pertain to different spins, but do not offer a legend for this or any detail.
08. Fig. 3a's pulse sequence is misleading or just confusing. I would give one pulse sequence and show that its amplitude is the parameter being scanned.
09. Fig. 3c - looks like there are other very clear dips/peaks in the spectrum. What are they? Here, again, a numerical calculation of the P1 center's spectrum as probed by the NV using the ZF-ESR would have helped.
10. All figures with measurements - what is the error? At least mention in the text if it is too small to plot, although in fig. 3c it may be significant as the contrast is very poor.
11. The contrast of the actual measurement (figure 3c) is very poor. What is the reason for this? How long did it take to acquire such a spectrum? If the measurement time is so long, did the authors take measures to account for drifts?
12. This is, I admit, very basic, but I think there should be a derivation or analysis of equation (1) in the supp. Specifically, the difference between it and a pure (x)-only pulse sequence.
13. Again, if ensembles were used, is this really nanoscale sensing? A typical confocal spot has a volume of 1 micron cubed (or more). If there are many NVs there and many P1 centers there, what is nanoscale? How would the signal look for real nanoscale sensing using one NV center? Would it be observable at all?
14. Typically Omega is much larger than the hyperfine interaction. Here the comment of 400 MHz being larger than A is true for 400 MHz, but what happens when Omega is of the same order and sometimes smaller than P1's hyperfine? Is this factored in the model? Is there a model?
15. Actually I do not see any model but just Gaussian peaks (dips) fitting. A model is kind of a minimal requirement here if the authors wish this method to mean anything.
16. Does a radical always mean a nuclear spin next to the free electron? This relates to my first comment.
17. The authors cite Broadway et al. as a good reference for a comparison between nanoscale ESR and ZF-ESR. I could not find any such comparison there. Perhaps this is a typo. If not, this reference should be more elaborate since, as I wrote, there is no such comparison there to the best of my knowledge.
18. The equations need proofing, e.g. Eq.(8) where the z after I should be in subscript...
19. Hx is continuously driven. Hy is just a pi/2 pulse, correct? Where is this treated in the state evolution section?
20. 400 MHz pulses again: Fuchs et al. actually pointed out that the RWA is not valid anymore at those strong driving conditions. How did this affect the authors' measurements? Did they use Gaussian enveloped pulses to circumvent some of the associated issues? This is not specified anywhere in the manuscript or in the supp. and so I guess not. Then I am even more puzzled that their experiment was working so beautifully.
21. There are some multiple peaks (dips) in the simulations shown in figure 4. I believe these are artefacts of the square pulses used in the simulation. If so, this should be stated. Perhaps a section in the supplementary explaining how this simulation was calculated would be of benefit to the inexperienced reader.

Reviewer #1 (Remarks to the Author):
This manuscript demonstrates zero-field EPR spectroscopy on P1 centers implanted into diamond by using nearby NV centers for sensitive detection. The experiment is based on dipole-induced polarization transfer between the NV observer and the observed P1 centers. The authors rightly argue that nanoscale EPR at zero field (or earth field) has advantages compared to such experiments in the presence of a strong magnetic field, which breaks symmetry and thus introduces orientation dependence of the transition frequencies.
This is a highly significant advance in EPR spectroscopy on the nanoscale, which will probably have great influence on further development of this exciting field. It is also a very elegant use of the magnetic sensor properties of NV centers. The results and ideas are thus of sufficiently broad interest for publication in Nature Communications. In general, the manuscript is well written, clear, and concise. The conclusions are fully supported by the experimental evidence. Yet, a few points in the discussion should be improved, there are minor mistakes, and in some places the English needs to be improved before a non-specialist can do further language editing (see below). Therefore, minor revision is required.
Reply: We appreciate that the Reviewer recommends our work for publication in Nature Communications. As suggested by the Reviewer, we have carefully revised the manuscript and polished the language.

Details:
1. The specialist will understand from your wording that the product of T1rho with the dipole-dipole coupling between the NV center and the observed species determines whether the experiment is feasible. For a broad readership, this should be made explicit and, given that you know the dependence of T1rho on driving power, you should provide an estimate for the maximum distance between the NV center and the observed species where the experiment works.
Reply: We have added a detailed theoretical model of this NV-P1 system in the revised SI Sec. I, and also give an estimation of the detection area of the NV center in the revised "Methods". Specifically, the dependence of the signal on T1rho and on the dipole-dipole coupling between the NV center and the target spin is given by Eq. S8 in the revised SI. We also give an analytical estimation of the detection range (< 15 nm) of the NV center in the revised Methods.
2. It is true that you use a kind of cross-polarization technique and it may be appropriate to cite the Hartmann-Hahn paper. However, in your case polarization is transferred from a dressed species to a "bare" species, which is the type of transfer first shown in the NOVEL dynamic nuclear polarization experiment (A. Henstra, P. Dirksen, J. Schmidt, and W. T. Wenckebach, J. Magn. Reson. 77, 389 (1988)). It would be appropriate to cite the NOVEL paper, too.
Reply: Thanks for the suggestion. The reference has been added in the revised manuscript.
3. Fig. 1a) is a sketch or diagram, but not a "phase diagram" of the setup. The term "phase diagram" has a well-defined meaning in physical chemistry. It should not be used for something different in work related to this field.
Reply: Sorry for the mistake. It has been corrected.
4. The non-expert reader may puzzle over what you mean by low-frequency noise with respect to Fig. 4f. What you show by the dashed black line is a broad low-frequency signal due to background spins. If there are only a few nearby background spins, you might observe a few lines with stochastic frequencies. You might want to allude to this very briefly in the main text and discuss it in a little more detail in the SI.
Reply: We have modified the description to clarify the low-frequency signal in the revised manuscript. Due to the dense distribution and short correlation time, the background spins always behave as a broad low-frequency signal, as observed in Fig. S4. By further careful treatment of the diamond surface, it is indeed possible to narrow this background signal.
According to the Reviewer's suggestion, we have also discussed it in the revised SI (sections II and III).
It may be helpful for the reader if you briefly mention in the main text that you show and discuss results on 14-N P1 centers in the SI.
Reply: We have mentioned it in the revised manuscript (page 3, right column, first paragraph).
6. You discuss line broadening by the earth magnetic field. In principle, this could be avoided by placing the setup in a mu metal box or by active compensation of the earth field. This should probably be mentioned, because it would at once improve resolution and sensitivity.
Reply: We have mentioned it in the revised manuscript (page 4, left column, first paragraph).
6. The caption of Figure S2 should mention that the arrows denote the three transition frequencies of the P1 center that you discuss in the main text.
Reply: We have mentioned it in the revised SI.
7. The earth magnetic field varies a bit, but I believe that in Hefei it is closer to 0.5 G than to 0.3 G, which you quote below Eq. (S1) in the SI. This would also fit to the 1.4 GHz broadening that you mention a few lines below.
Reply: It has been corrected in the revised SI.
8. It would be helpful to quote the calculation results below Eq. (S7) that you claim are consistent with experiment. They are given in the caption of Fig. S3, but not in the text.
Reply: We have mentioned it in the revised SI.
9. Why are you testing for dependence of the 14-N transition frequencies on asymmetry of the quadrupole coupling? Is point symmetry at the N atom known for the P1 center? If there exists a rotation axis with multiplicity of at least three, the hyperfine and quadrupole tensor must both have axial symmetry. If not, I would be surprised if the hyperfine tensor would have exact axial symmetry.
Reply: The previous purpose was to discuss a general case. However, as mentioned by the Reviewer, the asymmetry of the quadrupole coupling conflicts with the symmetry of the hyperfine coupling. Since we know that the P1 center is symmetric, we removed the discussion of asymmetry in the revised SI.
10. Adding page numbers to the SI would be helpful.
Reply: We have added them in the revised SI.
11. The following typos and grammar mistakes should be corrected, as they may pose problems for a language editor: eigenvalues of a parameter tensor are 'principal values', not 'principle values'; p. 3: 'phase difference before continuous driving', not 'phase difference before the continues driving'; p. 3: 'consistent with the hyperfine coupling', not 'consisting with the hyperfine coupling'; first paragraph of SI: 'induced by the surface dangling bonds as discussed in the main text', not 'induced by the surface dangling as discuss in the main text'.
Reply: We thank the Reviewer for pointing out these mistakes. We have carefully polished the language in the revised manuscript and SI.

Reviewer #2 (Remarks to the Author):
The authors report on a somewhat overlooked territory in the range of nanoscale ESR techniques. They present in a very simple form how one can vary Rabi driving power to excite ESR transitions in target spins at zero magnetic field. They show that this form of zero-field ESR allows for an almost unhindered determination of the hyperfine tensor components of P1 centers in diamond as compared to the double-electron-electron-resonance form of nanoscale ESR, a field to which they also contributed greatly in sensing target spins with the nitrogen-vacancy center in diamond.
While the spectrum shown and the associated analysis are convincing, I find the manuscript lacking in several points, one of which is in fact whether this is indeed nanoscale ESR at all. Since the authors used an ensemble of NVs, detecting an ensemble of P1 centers respectively, and since no other information is given, I can only surmise that the signal arising from all NVs in the confocal spot (one micron cubed) was taken into consideration. From the low contrast of the resulting data, I also believe that this kind of measurement would not be easy to reproduce on the single spin level (NV and target spin). Therefore, the word 'nanoscale' here is somewhat problematic. I am positive that with different improvements to the experimental setup, the microwave power stability could be mitigated, but as long as that is the case, I fail to see the impact such a measurement could yield compared to the traditional ESR (induction-based) methods. If this were nanoscale ESR, then the fourth figure captures the essence of this work in a striking way, specifically figure 4c and 4d.
Reply: We thank the Reviewer for pointing out the insufficient interpretation of the nanoscale detection in our previous manuscript. To clarify this point, we have added a theoretical model in the revised SI section I, and given a quantitative estimation of the detection area (< 15 nm) of the NV center in the "Methods". In the revised SI and "Methods", we give an analytical expression of the detected signal and show that more than 90% of the signal is contributed by the P1 centers within the 15 nm detection range, where the mean number of P1 centers is ~4. As the mean NV spacing is ~135 nm (mentioned in the revised "Methods"), each NV in the confocal spot is an isolated sensor with a nanoscale detection area. Therefore, it is indeed nanoscale ESR. We also note that a recently published nanoscale ESR paper [Hall, L. T. et al. Nat. Commun. 7, 10211 (2016)] also used an ensemble of NV centers to detect an ensemble of P1 centers. They claimed that the NV sensor is highly local, and the response of each NV is dominated by a few P1 centers residing in the nanoscale detection range of the NV. Secondly, the ZF-ESR measurement can be easily reproduced at the single-NV and single-target-spin level if the target spin can be placed close (for example, ~10 nm) to the NV center. The ZF-ESR spectrum in our previous manuscript had low contrast because it was not normalized by the Rabi oscillation (the Rabi oscillation of an ensemble of NVs has only ~10% contrast); we have modified it in the revised manuscript. Another reason is the short dephasing time (~0.1 us) of the P1 centers (according to Eq. S21 in the revised SI). The short dephasing time is induced by the dense bath spins from the ion implantation. For a separable P1 center or other defects, the bath is much purer, and the dephasing time is estimated to be of the order of microseconds [PRB 87, 195414 (2013)]; thus an order-of-magnitude increase in the signal contrast is expected. Besides, a single NV has ~40% Rabi contrast. Therefore, reproduction at the single-spin level may be potentially more efficient. Our purpose in using an ensemble of NVs rather than a single NV is to place the P1 centers close to the NVs, for which high-dose ion implantation is required. This leads to a high density of NVs, which cannot be resolved by the confocal microscope; however, it does not affect the nanoscale detection area of the NV center.
In general, I think more focus should be given to the impact of this method, namely page 4 and figure 4.
Regarding the pulse sequence: the authors' description of it is confusing, to such an extent that I am not sure someone else in the community would be able to straight-forwardly reproduce it (not impossible, but just not trivial). I believe more effort should be made in clarifying the experimental details with emphasis on the pulse scheme.
Reply: We have modified the paragraph related to figure 4 to clarify the significant advantage of our method. By using NV centers, the resolution of ESR has been significantly improved from the mm to μm scale down to the nanoscale. Besides, this nanoscale ZF-ESR removes two serious obstacles to the application of nanoscale ESR, i.e., the spectral dispersion induced by the random orientations of target spins and the significant background signal. Furthermore, unlike induction-based ESR, the sensitivity of the NV does not depend on the magnetic field or the operating frequency, and thus the sensitivity of nanoscale ZF-ESR is comparable to, and even potentially better than, that of nanoscale ESR. In the revised "Methods", we have given a detailed description of the pulse sequence.
To summarize: While the manuscript definitely shows great potential, I am not completely convinced that it would have a great impact in the field. If the authors address all the points mentioned in this review and if the problematic issues (nanoscale volume, universality, analytical model) are dealt with, it would indeed be possible to recommend it for publication in Nature Communications.
Reply: We appreciate the Reviewer's patient comments. In the revised manuscript and SI, we have addressed all the concerns given by the Reviewer.
Specifically, below please find my comments for the manuscript:
==============================================================
01. The following was not stated clearly in the manuscript: Does one necessarily need hyperfine interaction to a nuclear spin (and a strong one at that) for this method to work? If so, then this ZF-ESR is not universal for all paramagnetic spins.
Reply: We have emphasized in the last paragraph (and also mentioned in the second paragraph, right column, page 2) that the key to our method is the direct measurement of the energy level structure of paramagnetic spins. The interaction that induces the energy level splitting can take any form, including the electron-nuclear hyperfine interaction, the electron-electron fine interaction, and even many-body interactions. The detection range of the energy level splitting is determined by the Rabi frequency: the lower bound of the Rabi frequency is limited by the transverse relaxation rate of the NV center, which is ~MHz, while the upper bound is limited by the zero-field splitting of the NV center, which is ~GHz (detailed in the reply to comment #4). For some isolated radicals without any intrinsic interactions, it is indeed difficult to detect them by our method, as the energy splitting is near zero. However, we believe that this frequency detection range (MHz-GHz) can cover most of the usual paramagnetic radicals and complexes.
02. If all spins give the same signal, is there a spatial feature? What is the sensing volume? What is the no. of NVs in said sensing volume? What is the no. of P1 centers in said sensing volume? How does it compare with standard (traditional) ESR?
Reply: We apologize for the unclear statement in the previous version.
signal intensity) are spatially dependent. The statement has been corrected and a detailed description is given in the revised SI. As mentioned above, the detection range is less than 15 nm, with about four P1 centers residing in the detection area of each NV; the mean NV spacing is ~135 nm, with ~100 NVs residing in the confocal spot. Here we note again that the NV is a local sensor, and all the NV sensors work independently. There is no limitation in principle for further single-NV applications. For traditional ESR, the best spatial resolution is ~1 micron (traditional ZF-ESR usually needs a sample with a size of ~cm). For our NV-based ZF-ESR, by contrast, the spatial resolution is ~10 nm, which is indeed a significant improvement.

03. "Methods/State evolution" is not clear. Too simplistic in my humble opinion. Not only that, but I could not really understand from that section what exactly the pulse sequence used to probe the hyperfine interaction is (I could from the main text, but the "state evolution" section only confused me).

Reply: We apologize for the lack of clarity here. The "Methods" section has been rewritten. The section "Spin locking in zero field" gives a detailed description of the pulse sequence.
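The locality argument above (most of the dipolar signal comes from the few bath spins closest to the NV) can be made concrete with a short numerical sketch. The sketch below is our own illustration, not the authors' SI model: it assumes a uniform random distribution of P1-like spins, a signal weight scaling as 1/r^6 (as for the variance of a magnetic dipole-dipole coupling), and an assumed closest-approach distance of 3 nm; the density is chosen only so that roughly four spins fall within a 15 nm sphere. The resulting fraction depends mainly on the assumed closest-approach distance, not on the density.

import numpy as np

rng = np.random.default_rng(1)

# Assumed, illustrative parameters: spin density (set so that ~4 spins lie
# within a 15 nm sphere) and the closest possible NV-P1 distance.
density_per_nm3 = 4 / ((4 / 3) * np.pi * 15.0**3)
r_min, r_max = 3.0, 100.0   # nm; r_max is just "far away"

def signal_fraction_within(radius_nm, n_trials=1000):
    """Monte-Carlo estimate of the fraction of the summed 1/r^6 signal weight
    contributed by spins closer than radius_nm to a single NV sensor."""
    vol = (4 / 3) * np.pi * (r_max**3 - r_min**3)
    n_mean = density_per_nm3 * vol
    fracs = []
    for _ in range(n_trials):
        n = rng.poisson(n_mean)
        u = rng.random(n)
        # inverse-CDF sampling of radii for a uniform 3D distribution (p(r) ~ r^2)
        r = (r_min**3 + u * (r_max**3 - r_min**3)) ** (1 / 3)
        w = r ** -6.0
        fracs.append(w[r <= radius_nm].sum() / w.sum())
    return float(np.mean(fracs))

print(signal_fraction_within(15.0))   # typically > 0.9: the signal is dominated by nearby spins

Under these assumptions the fraction of signal from within 15 nm comes out well above 90%, which is the qualitative point the reply is making.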
6,050.8
2018-04-19T00:00:00.000
[ "Physics" ]
A Real Time Hearing Loss Simulator Summary Several hearing loss simulators (HLS) have been developed to demonstrate the effects of hearing loss on auditory perception to normal hearing (NH) listeners, and to facilitate prediction of the perception of sound products by hearing impaired customers. This paper describes a real-time HLS based on an inverse, compressive GammaChirp (GC) filterbank, and how it was used to temporarily handicap NH listeners participating in a traditional notched-noise (NN) masking experiment (e.g. [1]) with a 2-kHz signal frequency. Sets of NN thresholds were obtained with a wide range of symmetric and asymmetric notches at two noise spectrum levels while participants listened to the sounds presented both with and without the HLS. The NN data were used to derive auditory filter shapes and input/output (IO) functions, which demonstrate that the HLS can simulate the elevation of pure tone threshold and the flattening of the input/output function commonly observed in sensory-neural hearing loss. Introduction Simulation of sensory processing disorders provides a powerful tool for investigating hearing itself and hearing loss. In the past, there have been two main approaches. The first was equivalent-threshold masking, designed to simulate the reduced performance of hearing impaired listeners in one task or another, without regard for the quality of the perceived sound (see Lum and Braida for a review [2]). For example, to simulate specific audiometric losses, they often simply mixed a loud broadband noise, shaped according to the audiogram, with the signal, producing a totally different experience from what the impaired listener heard. In the second approach, they made an attempt to simulate the actual perception of the hearing impaired listener. In the simplest case, they reduced the level of the sound with a linear FFT-filter having a frequency response close to that of the target audiogram. This was used to demonstrate the consequences of sensory hearing impairment to the general public. But this linear attenuation of the signal is a poor imitation of what happens when someone loses the active process of the peripheral auditory system and its associated gain. More recently, some integrated models of hearing impairment have been proposed [3,4], which were intended to simulate all aspects of moderate, sensory-neural hearing loss. Model of hearing impairment The principle of the current hearing loss simulator is essentially the same as that of the simulator developed by Irino and colleagues, referred to as an inverse, dynamic compressive Gammachirp (dcGC) auditory filter [4,5,6]. It was used by Matsui et al [7] to simulate the effect of hearing loss in a syllable perception task. However, they did not derive either the auditory filter or the input/output function of the cochlea using the simulator. As the name suggests, the inverse dcGC simulator was designed to cancel the natural compression of the normal hearing listener. In the current dcGC model, cochlear compression is simulated in three stages: 1) the signal is filtered into 32 bands using a bank of passive GammaChirp (pGC) filters; 2) the level at the output of each pGC filter is estimated; 3) the level is used to control the center frequency of a high-pass asymmetry function (HP-AF) that represents the active mechanism in that filter band. The center frequency of the HP-AF decreases as the output level of the pGC increases, reducing filter gain and increasing filter bandwidth in the process.
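The core of this level-dependent gain control, and of its inversion in the HLS, can be sketched schematically in Python. The sketch below is only an illustration of the control logic for a single band: the level-to-gain breakpoints are invented values, and the real simulator computes the gain through pGC/HP-AF filter cascades sample by sample on a GPU rather than through a simple lookup applied to frames.

import numpy as np

# Illustrative level-to-gain lookup (dB): the normal-hearing active mechanism
# provides large gain at low levels and little at high levels (compression).
# These breakpoints are made-up values, not the dcGC model's HP-AF parameters.
LEVELS_DB = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
NH_GAIN_DB = np.array([37.0, 37.0, 30.0, 15.0, 5.0, 0.0])

def band_level_db(band, frame=512, eps=1e-12):
    """Short-time RMS level of one band signal, in dB re full scale."""
    n = len(band) // frame * frame
    frames = band[:n].reshape(-1, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + eps)
    return 20.0 * np.log10(rms)

def apply_level_dependent_gain(band, invert=False, frame=512):
    """Apply the normal-hearing gain (invert=False) or cancel it (invert=True);
    cancelling the gain is the basic idea of the hearing-loss simulation."""
    lev = band_level_db(band, frame)
    gain_db = np.interp(lev, LEVELS_DB, NH_GAIN_DB)
    if invert:
        gain_db = -gain_db          # remove the gain a normal ear would add
    gain = 10.0 ** (np.repeat(gain_db, frame) / 20.0)
    return band[:len(gain)] * gain  # trailing partial frame is dropped

# Toy usage: a low-level 2 kHz tone is attenuated by ~37 dB in the inverted
# ("HLS") mode, mirroring the predicted elevation of absolute threshold.
fs = 44100
t = np.arange(fs) / fs
tone = 10 ** (-60 / 20) * np.sin(2 * np.pi * 2000 * t)
hls = apply_level_dependent_gain(tone, invert=True)
print(round(20 * np.log10(np.max(np.abs(hls)) / np.max(np.abs(tone[:len(hls)]))), 1))  # ≈ -37.0

In the real simulator this per-band control runs inside the 32-band (64 bands for binaural audio) filterbank described above; the toy version uses one band and a static table only to show why removing the low-level gain raises threshold by roughly the maximum gain value.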
Thus, as in the cochlea, the gain is maximal at low levels and minimal at high levels, and the system provides fast acting compression over a large dynamic range, separately in each dcGC band. To cancel the natural compression of the dcGC filterbank, the hearing loss simulator HLS applies a second version of the active mechanism in reverse (see Figure 1), that is, the center frequency of the second HP-AF increases as the level out of the pGC increases. Figure 1. HLS signal processing: 1) pGC filters; 2) channel-by-channel level estimation; 3) calculation of HP-AF level-dependent filter coefficients and application of gain in each band; 4) time-reversed pGC to cancel the group delay of each band; 5) passive gain to add a passive hearing loss (not used here); 6) sum all bands to re-synthesize. In this way, the simulator acts as an inverse compressor in each frequency band, in a way that should cancel the natural compression of a normal listener. The processing is done with a mix of Python and Open Computing Language (OpenCL). All filter coefficients are designed with a cascade of biquad filters. The HP-AF coefficients are computed in advance for all levels and the resulting gains stored in a lookup table. All steps are computed sample by sample in each band at 44.1 kHz; the use of a Graphics Processing Unit (GPU) and OpenCL makes it possible to process the 64 bands associated with binaural audio streams in real time. This means the hardware version of the simulator can be inserted in any audio system to simulate a sensory-neural hearing impairment. The HLS software can be downloaded as an open source project. Notched-Noise Experiment A form of GC HLS has previously been used to simulate the performance of a group of hearing impaired (HI) listeners on a speech-in-noise task [8]. The average audiogram of the HI group was used to fit the HLS for the normal hearing listeners. It showed the presence of a moderate hearing loss that, in turn, explained their speech intelligibility deficit. However, it was not clear whether the deficit was entirely attributable to their hearing losses or whether it was at least partially due to a more general deterioration of the signal. To resolve the ambiguity and validate the current GC HLS, we designed a NN experiment, centered at 2 kHz, to measure the effect of the HLS on absolute threshold, auditory filter shape, and the IO function of a group of normal hearing (NH) individuals, making a direct comparison with and without the HLS. A detailed description of the NN experiment and the derivation of auditory filter shape with a GC filter model is presented in Patterson et al (2003) [9]. Methods Six young, normal hearing listeners were tested in their best ear, having given informed consent prior to the start of the experiment. Two sets of NN thresholds were collected, one without the HLS (referred to as the ByPass condition) and one that included the HLS (referred to as the HLS condition). In the latter condition, the system was set to simulate a complete loss of compression in all bands. In this case, the HLS prediction for absolute threshold at 2 kHz increases by about 37 dB SPL. Absolute threshold at 2 kHz was measured using a two-interval, two-alternative, forced-choice procedure with a 2-down, 1-up tracking paradigm. The intervals were 200 ms in duration, separated by 500 ms. The timing of the intervals was indicated visually on a computer display. One interval, randomly selected, contained a 200 ms sinusoid.
The task of the listener was to indicate the interval that had this signal by a button press. The initial level of the tone was 40 dB SPL in the ByPass conditions and 77 dB SPL in the HLS conditions. The initial step size was 8 dB. It was reduced to 4 dB after two reversals, and to its final value of 2 dB after 2 more reversals. Threshold measurement was terminated after 16 reversals. Threshold was taken to be the average of the last 12 reversals. The conditions were presented in random order. The same experimental procedure was used to estimate signal threshold in the two NN conditions, with and without the HLS. The only difference was that the 200 ms notch noise was present in both intervals of each trial. The spectrum level of the NN was 25 or 45 dB SPL in the ByPass condition and 45 or 60 dB SPL in the HLS condition. The initial level of the tone was set to 30 dB above the spectrum level of the NN in all conditions (i.e. 55 dB, 75 dB or 90 dB). The widths of the lower and upper noise bands were fixed at 400 Hz. The notch noise was generated by filtering a white noise with a 16th-order Butterworth bandpass filter to establish the extremities of the NN. The notch was then added using a 16th-order Butterworth band-reject filter. Depending on the condition, the notch was positioned either symmetrically or asymmetrically about the signal frequency, 2 kHz. Nine or ten symmetrical notches were used for each noise level. The notch conditions were grouped into three blocks (A, B and C) to provide for breaks in the testing; each block contained about the same number of notch widths and levels. Overall, 21 or 22 notch conditions were tested at each of 2 noise levels with, and without, the HLS. The stimuli were passed through the digital optical output of an RME sound card. This output was then connected to the digital optical input of the same RME sound card, and this was the input to the HLS. The output of the simulator was presented monaurally to the best ear of the listener through a Sennheiser HD250 Linear II headphone. The stimuli were calibrated using a class A sound level meter (Larson Davis 824) connected to an artificial ear (Larson Davis AEC101). The listeners sat in a double-walled sound booth. The experimental paradigm was formally approved by a national ethics committee (CPP Léon Bérard). Results The average threshold data for the ByPass and HLS parts of the experiment are plotted, as a function of notch width, in the upper and lower panels of Figure 2, respectively. The upper and lower threshold curves in the HLS condition have very similar shapes to the upper and lower threshold curves in the ByPass condition, indicating that the effect of the HLS is basically what it should be: a sophisticated, fast acting sound attenuator. Relative to overall level, widening the notch produces a very similar effect on threshold after the intervention of the HLS, and this is true for the asymmetric notches (green and magenta symbols) as well as the symmetric notches (black symbols). The main difference between the two patterns of threshold curves is that the range of thresholds obtained with the HLS is somewhat compressed relative to the pattern in the ByPass condition. The average value of absolute threshold is shown by the black, horizontal dashed line in each panel; the value was 10.0 dB SPL (std=3.50) in the ByPass condition and 47.0 dB SPL (std=1.24) in the HLS condition.
The difference, 37 dB, is exactly the change in absolute threshold predicted by setting the degree of compression to zero in the HLS. Note, however, that whereas absolute threshold is about 5 dB below the lowest NN threshold in the ByPass condition, it is a little above the lowest NN threshold in the HLS condition. We return to the differences between the ByPass and HLS threshold values in the Discussion. In order to derive the auditory filters, the notch-noise data have been fitted using the same P0 power spectrum model of masking as described previously in this issue [10] and earlier [11,12]. In each condition, the minimization of equation 4 of [10] provides the full set of parameters of the dcGC model which best predicts the data. Using these parameters, Figure 3 presents 5 auditory filters, at 5 input levels (every 10 dB in the range of the data), derived with the dcGC model in the ByPass and HLS parts of the experiment, in the upper parts of panels A and B, respectively. The blue lines show that the auditory filter provides gain in the passband region in both the ByPass and HLS conditions. The errors between the threshold values predicted by this dcGC filter model show that the model provides an accurate description of the ByPass data, with an error equal to 2.32 dB as indicated in Figure 3, and also a reasonable description of the HLS data, with an rms error equal to 3.17 dB. In this condition, the prediction of the minimum threshold is about 5 dB below absolute threshold, and the model predictions are a little above the corresponding data at the wider notch widths. This discrepancy is reflected in the higher rms error in this condition than in the ByPass condition. Note that the design maximizes the number of different NN conditions in the experiment, in preference to replicating a smaller number of conditions, and so the error in the GC fits includes the intra-individual variability (i.e. the error in individual threshold estimation). The input/output (IO) functions and the bandwidth (BW) functions provided by the dcGC model are plotted, as a function of stimulus level, below the corresponding filter shape plots in Figure 3. The blue portions of the IO and BW curves show estimates from roughly the same range of levels as the measured thresholds; the cyan sections show extrapolations to lower and higher levels. Discussion The IO function for the ByPass condition is strongly compressive, as expected, with a slope of 0.2 dB/dB for input levels around 60 dB SPL. The IO function for the HLS condition is much less compressive; the minimum is 0.41 dB/dB. The form is consistent with the loss of compression that would be expected from a HI listener with a 37-dB hearing loss. The BW values for filters in the level range of the threshold data (the blue portion of the BW function) are 1.6-2.0 times the normal ERB (ERB_N) value [13] in the ByPass condition, and 1.3-2.2 times the ERB_N value in the HLS condition. Part of the difference arises from the fact that the ERB_N BW values were derived with a roex filter shape, which has been shown to underestimate the actual width of the tip of the auditory filter (see [13], subsection IV.B). It remains the case, however, that the average BW value in the HLS condition is somewhat smaller than that in the ByPass condition. This is because the HLS simulates the loss of gain in the HI by reducing the level of the stimuli (signal + maskers) presented to the NH listeners in the HLS condition.
That is, the sounds are actually being presented to these NH listeners at a much lower level than the CP input axis would suggest. In the ByPass condition, the stimuli are being presented at the stated CP input level. The BW of the NH listeners is greater at higher levels, so the BW values are greater in the ByPass condition than in the HLS condition. This does, however, mean that the HLS is limited to simulating the loss of gain in the HI; it does not simulate the increase in BW associated with the need to present stimuli at higher levels for the HI. Conclusions The HLS was observed to raise absolute threshold substantially and reduce compression, making the auditory system appear more linear. These changes are qualitatively consistent with the presence of a flat hearing loss of around 40 dB. A similar simulator [8] has been shown to produce a reduction of intelligibility for speech presented in noise, similar to that observed with HI listeners. The results of the current experiment allow us to conclude, more generally, that the HLS illustrates the joint effects of reduced audibility and reduced compression commonly encountered in HI listeners.
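The 2-down, 1-up tracking rule described in the Methods (initial step of 8 dB, reduced to 4 dB after two reversals and to 2 dB after two more, stopping after 16 reversals and averaging the last 12) can be summarized in a short Python sketch. The listener model below is a placeholder assumption used only to exercise the procedure; it is not part of the published method, and the real task was two-interval forced choice rather than the simple yes/no detector used here.

import numpy as np

def two_down_one_up(respond, start_db, steps=(8.0, 4.0, 2.0),
                    step_changes=(2, 4), n_reversals=16, n_average=12):
    """Adaptive 2-down/1-up track: the level drops after two consecutive correct
    responses and rises after any incorrect one; the step size shrinks after the
    2nd and 4th reversals; threshold = mean of the last n_average reversals."""
    level, correct_in_row, direction = start_db, 0, 0
    reversals, step_idx = [], 0
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            move = -1 if correct_in_row == 2 else 0
        else:
            correct_in_row, move = 0, +1
        if move:
            if direction and move != direction:
                reversals.append(level)          # a reversal occurred here
                while step_idx < len(step_changes) and len(reversals) >= step_changes[step_idx]:
                    step_idx += 1                # shrink the step size
            direction = move
            level += move * steps[step_idx]
            if move == -1:
                correct_in_row = 0
    return np.mean(reversals[-n_average:])

# Placeholder listener: responds correctly whenever the tone exceeds a hidden
# 47 dB SPL threshold, plus a little response noise.
rng = np.random.default_rng(0)
listener = lambda level: level + rng.normal(0, 1.0) > 47.0
print(round(two_down_one_up(listener, start_db=77.0), 1))   # converges near the hidden threshold

The 2-down/1-up rule converges on the 70.7%-correct point of the psychometric function, which is why the averaged reversal levels settle close to the simulated 47 dB threshold when the track is started from 77 dB, as in the HLS condition.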
3,371.2
2018-09-01T00:00:00.000
[ "Engineering", "Medicine" ]
Formation of colorimetric fingerprints on nano-patterned deterministic aperiodic surfaces Periodic gratings and photonic bandgap structures have been studied for decades in optical technologies. The translational invariance of periodic gratings gives rise to well-known angular and frequency filtering of the incident radiation resulting in well-defined scattered colors in response to broadband illumination. Here, we demonstrate the formation of highly complex structural color patterns, or colorimetric fingerprints, in twodimensional (2D) deterministic aperiodic gratings using dark field scattering microscopy. The origin of colorimetric fingerprints is explained by rigorous full-wave numerical simulations based on the generalized Mie theory. We show that unlike periodic gratings, aperiodic nanopatterned surfaces feature a broadband frequency response with wide angular intensity distributions governed by the distinctive Fourier properties of the aperiodic structures. Finally, we will discuss a range of potential applications of colorimetric fingerprints for optical sensing and spectroscopy. ©2010 Optical Society of America OCIS codes: (050.2770) Gratings; (160.5298) Photonic crystals; (290.4210) Multiple scattering; (130.6010) Sensors. References and links 1. E. Yablonovitch, “Inhibited spontaneous emission in solid-state physics and electronics,” Phys. Rev. Lett. 58(20), 2059–2062 (1987). 2. J. D. Joannopoulos, S. G. Johnson, J. N. Winn, and R. D. Meade, Photonic crystals: molding the flow of light (Princeton Univ Pr, 2008). 3. A. David, “High efficiency GaN-based LEDs: light extraction by photonic crystals,” Ann. Phys. Fr. 31(6), 1–235 (2006). 4. N. Ganesh, W. Zhang, P. C. Mathias, E. Chow, J. A. N. T. Soares, V. Malyarchuk, A. D. Smith, and B. T. Cunningham, “Enhanced fluorescence emission from quantum dots on a photonic crystal surface,” Nat. Nanotechnol. 2(8), 515–520 (2007). 5. B. Cunningham, P. Li, B. Lin, and J. Pepper, “Colorimetric resonant reflection as a direct biochemical assay technique,” Sens. Actuators 81(2-3), 316–328 (2002). 6. S. V. Boriskina, A. Gopinath, and L. Dal Negro, “Optical gap formation and localization properties of optical modes in deterministic aperiodic photonic structures,” Opt. Express 16(23), 18813–18826 (2008), http://www.opticsinfobase.org/abstract.cfm?URI=oe-16-23-18813. 7. Y. S. Chan, C. T. Chan, and Z. Y. Liu, “Photonic band gaps in two dimensional photonic quasicrystals,” Phys. Rev. Lett. 80(5), 956–959 (1998). 8. X. Zhang, Z.-Q. Zhang, and C. T. Chan, “Absolute photonic band gaps in 12-fold symmetric photonic quasicrystals,” Phys. Rev. B 63(8), 081105 (2001). 9. A. Della Villa, S. Enoch, G. Tayeb, V. Pierro, V. Galdi, and F. Capolino, “Band gap formation and multiple scattering in photonic quasicrystals with a Penrose-type lattice,” Phys. Rev. Lett. 94(18), 183903 (2005). 10. L. Moretti, and V. Mocella, “Two-dimensional photonic aperiodic crystals based on Thue-Morse sequence,” Opt. Express 15(23), 15314–15323 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-23-15314. 11. M. Notomi, H. Suzuki, T. Tamamura, and K. Edagawa, “Lasing action due to the two-dimensional quasiperiodicity of photonic quasicrystals with a Penrose lattice,” Phys. Rev. Lett. 92(12), 123906 (2004). 12. M. E. Zoorob, and G. Flinn, “Photonic quasicrystals boost LED emission characteristics,” LEDs Magazine Aug., 21–24 (2006). #126565 $15.00 USD Received 6 Apr 2010; revised 2 Jun 2010; accepted 6 Jun 2010; published 23 Jun 2010 (C) 2010 OSA 5 July 2010 / Vol. 18, No. 
14 / OPTICS EXPRESS 14568 13. A. Micco, V. Galdi, F. Capolino, A. Della Villa, V. Pierro, S. Enoch, and G. Tayeb, “Directive emission from defect-free dodecagonal photonic quasicrystals: A leaky wave characterization,” Phys. Rev. B 79(7), 075110– 075116 (2009). 14. S. V. Boriskina, A. Gopinath, and L. D. Negro, “Optical gaps, mode patterns and dipole radiation in twodimensional aperiodic photonic structures,” Phys. E 41(6), 1102–1106 (2009). 15. R. Lifshitz, “Quasicrystals: A matter of definition,” Found. Phys. 33(12), 1703–1711 (2003). 16. L. Dal Negro, C. J. Oton, Z. Gaburro, L. Pavesi, P. Johnson, A. Lagendijk, R. Righini, M. Colocci, and D. S. Wiersma, “Light transport through the band-edge states of Fibonacci quasicrystals,” Phys. Rev. Lett. 90(5), 055501 (2003). 17. A. Gopinath, S. V. Boriskina, N.-N. Feng, B. M. Reinhard, and L. D. Negro, “Photonic-plasmonic scattering resonances in deterministic aperiodic structures,” Nano Lett. 8(8), 2423–2431 (2008). 18. M. A. Kaliteevski, S. Brand, R. A. Abram, T. F. Krauss, R. D. L. Rue, and P. Millar, “Two-dimensional Penrosetiled photonic quasicrystals,” Nanotech. 11(4), 274–280 (2000). 19. E. Maciá, “The role of aperiodic order in science and technology,” Rep. Prog. Phys. 69(2), 397–441 (2006). 20. J. J. Amsden, H. Perry, S. V. Boriskina, A. Gopinath, D. L. Kaplan, L. Dal Negro, and F. G. Omenetto, “Spectral analysis of induced color change on periodically nanopatterned silk films,” Opt. Express 17(23), 21271–21279 (2009), http://www.opticsexpress.org/abstract.cfm?URI=oe-17-23-21271. 21. M. R. Schroeder, Number theory in science and communication (Springer, 1985). 22. A. Groisman, S. Zamek, K. Campbell, L. Pang, U. Levy, and Y. Fainman, “Optofluidic 1x4 switch,” Opt. Express 16(18), 13499–13508 (2008), http://www.opticsexpress.org/abstract.cfm?URI=oe-16-18-13499. 23. D. W. Mackowski, “Calculation of total cross sections of multiple-sphere clusters,” J. Opt. Soc. Am. A 11(11), 2851–2861 (1994). 24. N. O. Petersen, P. L. Höddelius, P. W. Wiseman, O. Seger, and K. E. Magnusson, “Quantitation of membrane receptor distributions by image correlation spectroscopy: concept and application,” Biophys. J. 65(3), 1135–1146 (1993). 25. V. N. Bliznyuk, V. M. Burlakov, H. E. Assender, G. A. D. Briggs, and Y. Tsukahara, “Surface structure of amorphous PMMA from SPM: auto-correlation function and fractal analysis,” Macromol. Symp. 167(1), 89–100 (2001). 26. H. Assender, V. Bliznyuk, and K. Porfyrakis, “How surface topography relates to materials’ properties,” Science 297(5583), 973–976 (2002). 27. E. M. Barber, Aperiodic structures in condensed matter: fundamentals and applications” (CRC Press, 2009) 28. S. Y. K. Lee, J. J. Amsden, S. V. Boriskina, A. Gopinath, A. Mitropoulos, D. L. Kaplan, F. G. Omenetto, and L. Dal Negro, “Spatial and spectral detection of protein monolayers with deterministic aperiodic arrays of metal nanoparticles,” Proc. Natl. Acad. Sci. U.S.A. (to be published). 
Introduction Scattering of photons by periodic photonic structures gives rise to a variety of interesting physical effects including manipulation of spontaneous emission [1], formation of forbidden photonic gaps [2], enhanced resonant light extraction [3,4], and resonant narrow-band backscattering [5].Recent theoretical and experimental studies revealed that these effects can also be observed in more complex structures with deterministic aperiodic morphologies (e.g., quasi-crystals, pseudo-random structures) that do not possess translational periodicity despite their long range order.In particular, photonic bandgaps (including complete bandgaps) have been observed in aperiodic photonic structures [6][7][8][9][10], lasing in defect-free photonic quasicrystals has been demonstrated [11], and enhanced light extraction and beam shaping have been obtained with aperiodic nanopatterned photonic surfaces [12][13][14].Furthermore, phenomena inherent to random media, such as light localization, have been demonstrated in aperiodic photonic structures with high degrees of structural complexity [6]. Deterministic aperiodic structures lack translational invariance inherent to periodic media, yet may feature global and/or local rotational symmetries that are forbidden in periodic lattices (i.e non-crystallographic symmetries).Aperiodic lattices range from incommensurately-modulated periodic patterns and incommensurate composite structures [15] to quasiperiodic patterns such as the well-known Fibonacci [16,17] and Penrose lattices [9,11,18] and to pseudo-random geometries with properties similar to those of random media [6,17].Unlike random media, however, aperiodic structures are generated by well-defined deterministic algorithms based on symbolical dynamics and number theory [19], and are amenable to rigorous engineering and optimization.Furthermore, aperiodic photonic structures offer a broader and more flexible design space than their periodic counterparts, enabling larger control over the degree of anisotropy in their angular and spectral optical responses. 
The optical properties of photonic gratings are governed by the Fourier spectra of the associated geometrical lattices, which range from truly discrete spectra in the case of periodic and quasiperiodic structures to singular-continuous and absolutely-continuous (flat) spectra for structures with higher degrees of structural disorder such as Thue-Morse and Rudin-Shapiro lattices [6,17,19]. The discrete set of Bragg peaks in the Fourier transforms of simply periodic lattices corresponds to a discrete set of wave vectors k in their diffraction diagrams, and results in the appearance of well-defined grating orders in optical scattering [20]. The Fourier spectrum of photonic quasicrystals is generally dense, and usually features one or several subsets of main reflections (brighter Bragg peaks) superimposed on a diffuse background of weaker satellites [Fig. 1(a), 1(b)]. On the other hand, no point-like Bragg peaks appear in the continuous Fourier spectra of pseudo-random structures such as Rudin-Shapiro lattices [Fig. 1(d), 1(e)]. The scattering intensity distribution in the far-field zone of aperiodic photonic gratings follows the corresponding Fourier transforms of the geometrical lattices [see Fig. 1(c), 1(f)] and can be flexibly engineered by changing the aperiodic array morphology. This feature of aperiodic gratings has already been used to efficiently extract light from semiconductor light-emitting diodes (LEDs) and to shape the light emission profile [12]. Here, we demonstrate how multiple light scattering in nano-patterned deterministic aperiodic surfaces, which occurs over a broad spectral-angular range, leads to the formation of highly complex structural color patterns, or colorimetric fingerprints, in both the near- and the far-field zones. We also discuss the new opportunities in the field of bio-chemical sensing that are offered by the spatial and spectral modifications of these colorimetric fingerprints induced by small refractive index perturbations on aperiodic surfaces. Colorimetric fingerprints of periodic and aperiodic gratings In order to better understand the distinctive scattering behavior of aperiodic nano-patterned surfaces, we will first briefly review the scattering properties of regular periodic gratings. We fabricated two-dimensional periodic air-hole gratings on quartz substrates by using standard electron-beam lithography (EBL). Several representative arrays of 100nm-radius and 70nm-deep cylindrical indentations with the grating period (center-to-center separation between neighboring indentations) ranging from 500 nm to 800 nm are shown in Fig. 2(a). The arrays were illuminated by white light from a glass optical fiber bundle with 1.6 mm bundle diameter at an approximately 15° grazing angle to the array surface using the dark field scattering setup shown in Fig. 2(b). The light reflected normally from the array plane is collected with a 5X microscope objective and imaged using a CCD digital camera (Media Cybernetics Evolution VF). The acquired images are shown in Fig. 2(c) and demonstrate the typical single-color scattering response of periodic arrays. It can be seen in Fig.
2(c) that the increase of the array grating period results in the red-shift of its scattering response. The observed red-shift is a well-known phenomenon that can be qualitatively described with the classical scalar diffraction theory of periodic gratings as follows: n1 sin(θinc) + n2 sin(θsc) = mλ/Λ, (1) where Λ is the grating period, λ is the wavelength of the incident light, θinc and θsc are the incident and scattered angles (measured with respect to the normal to the grating surface), m is the diffraction order, and n1 and n2 are the refractive indices of the ambient medium and the grating, respectively. The scattered wavelengths corresponding to the first four diffraction orders of a periodic grating with the 400 nm period calculated by using Eq. (1) are shown in Fig. 2(d) as a function of the scattering angle. In the experimental setup used, the angular distribution of the collected light is restricted by the objective collection cone (NA = 0.15), and the frequency spectrum is limited to the visible wavelength range. The range of the angular and spectral distribution of the collected light is shown in Fig. 2(d) by the dark shaded area, which is formed by the intersection of the light-shaded strips indicating the spectral and spatial collection limits, respectively. It can be seen in Fig. 2(d) that only one grating order crosses the dark-shaded area (meaning that it can be collected by the microscope), resulting in the observed single-color response of periodic gratings. The diffraction theory of periodic gratings predicts that the increase of the grating period results in the angular shifts of all the diffraction orders, giving rise to a red-shift of the scattered radiation that can be collected by the objective, in perfect agreement with the experimental data. The observed angular response of periodic gratings is sensitive to the ambient refractive index variations and thus has been used to design biochemical sensors [5,20] and optofluidic switches [22]. In particular, the wavelength shift of the collected light caused by the change in the ambient refractive index or by the adsorption of molecules on the periodic nanopatterned surface is used as a transduction signal in grating-based optical sensors [5,20]. Next we will focus on the distinctive scattering behavior of deterministic aperiodic gratings. Four types of aperiodic gratings with Thue-Morse [Fig. 3(a)], Rudin-Shapiro [Fig. 3(c)], Penrose [Fig. 3(e)], and Gaussian prime [Fig. 3(g)] lattices were fabricated on quartz substrates by using the same standard EBL process as for periodic gratings. Since aperiodic structures lack translational periodicity, they cannot be assigned a single lattice parameter such as the grating period, but are simply characterized by defining the minimum center-to-center interparticle separation in the array. All the other, generally incommensurate, length scales present in a particular aperiodic structure can be exactly calculated from the particular deterministic inflation rule used to generate the lattice. The aperiodic arrays shown in Fig. 3 have minimum center-to-center interparticle separations in the 300 nm to 400 nm range. In contrast to the dark-field scattering images of periodic gratings shown in Fig. 2(c), the images of aperiodic arrays collected with the experimental setup described in Fig. 2(b) feature highly organized colorimetric fingerprints, as demonstrated in Fig. 3(b), 3(d), 3(g), 3(h). Color spatial localization in different parts of the nanopatterned aperiodic surfaces can clearly be observed. We have also fabricated aperiodic arrays with different minimum center-to-center separations ranging from 250 nm to 700 nm. The colorimetric fingerprints of these structures are shown in Fig. 4.
The spatial localization of the different chromatic components on the nano-patterned surfaces is evident for all the inter-particle separations within the visible spectral range.We have also studied the role of the grating material on the formation of colorimetric fingerprints.Aperiodic nanostructures have been fabricated in various material platforms, including low-index dielectrics (such as quartz and organic polymers), higher-index dielectrics (silicon nitride (SiN)) and metals (chromium and gold).The collected dark-field images of Rudin-Shapiro arrays composed of indentations in the quartz substrate, of SiN and gold nano-disks deposited on quartz substrates are compared in Fig. 5. Clearly, all the three colorimetric signatures shown in Fig. 5 feature spatial localization of various spectral components of scattered light, while the relative intensities of different spectral components depend on the specific material platform used.We can conclude that the observation of structural color localization and the formation of colorimetric fingerprints under white light illumination is a general feature of the aperiodic arrangement of nano-scale elements with separations on the order of the wavelength of light.The particular spatial distribution of the localized colors is uniquely governed by the geometrical configurations of the aperiodic structures, while the resonant scattering response of individual scattering elements contributes to the intensity distribution of various spectral components in the colorimetric fingerprint. Formation mechanism of colorimetric fingerprints To get a physical insight into the mechanism governing the experimentally observed colorimetric response of periodic and aperiodic gratings, we simulate the light scattering process by modeling 2D gratings of dielectric microspheres in free space.In the simulations, the gratings are illuminated by a plane wave incident at a grazing angle (15 degrees) to the array plane similarly to the experimental geometry.Far-field scattering characteristics and near-field intensity distributions of the electric field scattered by both periodic and aperiodic gratings were calculated by using rigorous full-wave generalized multi-particle Mie theory (GMT) [17,23].GMT algorithm provides an exact analytical solution to Maxwell's equations for a cluster of spheres of an arbitrary spatial configuration and enables understanding the role of the array morphology on its angular and spectral scattering characteristics.The scattering responses of finite-size periodic and aperiodic gratings were simulated and compared, including the periodic grating composed of 486 spheres with 400 nm grating period, a Gaussian prime aperiodic array of 412 spheres with 300 nm minimum separation, and a Rudin-Shapiro array of 512 spheres with 400 nm minimum separation.The spatial intensity distributions in Fig. 6 were calculated for the scattered fields at three different wavelengths in the blue, green, and red parts of the visible spectrum.The single-color intensity patterns in the plane above the array were super-imposed to produce multi-colored Red-Green-Blue (RGB) images, which well approximate the intensity distribution experimentally collected by the microscope objective. By observing the images in Fig. 
6 and the corresponding movies (Media 1, Media 2, Media 3), it can be seen that although for both periodic and aperiodic gratings most of the light intensity is scattered into the zero-th diffraction order, they feature drastically different angular distributions of scattered light.In particular, periodic gratings scatter light anisotropically, redirecting it along well-defined directions corresponding to angularly distinct grating orders [Fig.6(a)-6(c)].This frequency-dependent anisotropy in the angular intensity distribution leads to the angular color filtering observed in the experiments under the limitation of a finite collection efficiency [Fig.6(d)] and results in the single-color responses of periodic gratings (see Fig. 1).In turn, scattering from aperiodic arrays results in the appearance of multiple diffractive orders covering a much wider angular and spectral range [Fig.6(e), 6(g), 6(i), 6(k)].Therefore, even if the light is collected within a limited collection cone (or numerical aperture), multiple spectral components always reach the detector.The collected spectral components are then combined to re-create multi-color colorimetric fingerprints that form on aperiodically nanopatterned surfaces due to multiple light scattering at various incommensurate length scales. Sensitivity of fingerprints: implications for optical sensing The refractive index sensitivity of the spatial distribution of the field intensity scattered by an aperiodic nanopatterned surface can be used as a novel transduction mechanism for label-free bio-chemical sensing.For a fixed wavelength of the incident light, the changes in the aperiodic array geometry and/or refractive index contrast caused by the presence of target molecules near the nanopatterned surface will modify the resonant conditions for the multiple light scattering at multiple frequencies in the array plane.As a result, a single-wavelength colorimetric fingerprint of the array will be significantly perturbed by the local modifications of the surface structure.This effect is illustrated in Fig. 8, which shows a single-color fingerprint of the Gaussian prime array of 100 nm-radius nanospheres before and after the incorporation of a 10 nm-thick index-matching dielectric layer that uniformly covers all the nanoparticles.The modifications in the spatial intensity profile are clearly observed.These changes can be quantified and compared by using the well-developed mathematical framework of correlation-function analysis [24][25][26].We finally remark that the sensitivity of the proposed aperiodic structures can be further improved by proper size scaling of the arrays.In fact, due to the aperiodicity of these systems, their scattering peaks become sharper and denser (the density of surface spatial frequencies increases) by increasing the systems size and, in stark contrast with the behavior of regular periodic lattices, the intensity of their Bragg peaks does not decrease significantly far from the center of the diffraction patterns [27].This provides additional degrees of freedom with respect to periodic systems for performance optimization, and offers the opportunity to tailor multiple scattering effects in deterministic aperiodic structures by proper size scaling.We are currently exploring the proposed sensing approach to detect the presence of thin low-index molecular layers accumulating on aperiodic nanopatterned surfaces [28]. 
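As an illustration of how a perturbation of a single-wavelength fingerprint could be quantified with the correlation-function framework mentioned above, the sketch below computes the zero-lag normalized cross-correlation (Pearson coefficient) between two intensity maps, such as the fingerprint before and after the incorporation of a thin layer. The images used here are synthetic stand-ins, not the calculated field maps of Fig. 8.

import numpy as np

def fingerprint_similarity(img_a, img_b):
    """Zero-lag normalized cross-correlation between two scattered-intensity
    maps; values near 1 mean the colorimetric fingerprint is unchanged, and
    lower values indicate a perturbation (e.g. an adsorbed molecular layer)."""
    a = (img_a - img_a.mean()) / img_a.std()
    b = (img_b - img_b.mean()) / img_b.std()
    return float(np.mean(a * b))

# Stand-in data: a 'fingerprint' and a weakly perturbed copy of it.
rng = np.random.default_rng(0)
fingerprint = rng.random((256, 256))
perturbed = fingerprint + 0.15 * rng.random((256, 256))
print(fingerprint_similarity(fingerprint, fingerprint))   # 1.0
print(fingerprint_similarity(fingerprint, perturbed))     # < 1, reflecting the change

In a sensing application the drop of this coefficient (or of a spatial correlation function computed over lags) would serve as the transduction signal, in the spirit of Refs. [24-26].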
Conclusions We have investigated both experimentally and theoretically the scattering characteristics of periodic and deterministic aperiodic photonic gratings illuminated by white light. In sharp contrast to the well-known single-color scattering response of periodic gratings, aperiodically nanopatterned surfaces feature highly complex spatial intensity patterns uniquely associated with the spectral properties of the scattering surfaces. The experimentally observed colorimetric fingerprints of aperiodic gratings result from the resonant multiple scattering of various spectral components of light interacting with the nanostructured surfaces, in combination with the broadband angular scattering distribution observed in aperiodic systems. These unique scattering characteristics of deterministic aperiodic scattering systems and the sensitivity of the associated colorimetric fingerprints to morphological and refractive index surface variations make them very appealing for the engineering of novel sensing platforms for label-free optical detection of thin molecular layers. Fig. 1. 2D aperiodic lattices arranged according to the Gaussian prime (a) and Rudin-Shapiro (d) inflation rules [21] and their corresponding 2D Fourier transforms (b,e). Simulated far-field multi-color scattered intensity maps of the Gaussian prime (c) and Rudin-Shapiro (f) arrays of 200nm-diameter nano-spheres with the refractive index of 1.5 and minimum center-to-center separations of 300 nm (c) and 400 nm (f). The RGB images shown in (c) and (f) are obtained by overlapping the forward-scattered field intensity distributions corresponding to the arrays' illumination by a plane wave at three wavelengths in the red, green and blue parts of the optical spectrum: λB = 470 nm (blue), λG = 520 nm (green), λR = 630 nm (red). Fig. 2. Colorimetric signatures of 2D periodic gratings. (a) Scanning electron microscopy (SEM) images of 2D periodic arrays of 100nm-radius and 70nm-deep cylindrical indentations nano-patterned on a quartz substrate. The center-to-center lattice constants of different arrays are: 500 nm (top left), 600 nm (top right), 700 nm (bottom right), and 800 nm (bottom left). (b) A schematic of the dark field scattering setup used in the measurements. (c) Images of periodic arrays illuminated at grazing incidence with white light from a single fiber. (d) Wavelength versus the scattering angle for the first four diffractive orders of the periodic grating with 400 nm period. Fig. 3.
SEM images and colorimetric fingerprints of 2D aperiodic gratings. Nanopatterned aperiodic arrays of 100nm-radius and 70nm-deep cylindrical indentations on a quartz substrate. (a) Thue-Morse lattice (nearest center-to-center separation d = 400 nm), (c) Rudin-Shapiro lattice (d = 400 nm), (e) Penrose lattice (d = 400 nm), and (g) Gaussian prime lattice (d = 300 nm). (b,d,g,h) Dark-field microscopy images of the corresponding aperiodic gratings. Fig. 5. Experimentally measured colorimetric fingerprints of Rudin-Shapiro arrays with 400 nm center-to-center nearest separation on quartz substrates with array nanoelements made of different materials: (a) 100nm-deep air indentations, (b) 80nm-high silicon nitride disks, and (c) 30nm-high gold disks, under white light illumination and dark-field scattering microscopy. Fig. 6. Angular profiles of light scattered by periodic and aperiodic gratings. Spatial field distributions (side view) of the light scattered by a periodic array of 100nm-radius nanospheres with refractive index n = 1.5 and 400 nm grating period illuminated by a plane wave at θinc = 75° and (a) λ = 470 nm (blue), (b) λ = 520 nm (green), (c) λ = 630 nm (red). The direction of the incident field is indicated with a white arrow (see also Media 1). (d) Multiwavelength scattered field distribution (top view) at 100 µm above the periodic grating within the collection cone (± 30°) of the microscope objective with N.A.
= 0.5. (e-l) Same as (a-d) but for a Gaussian prime array with 300 nm nearest center-to-center separation and a Rudin-Shapiro array with 400 nm nearest center-to-center separation, respectively (see also Media 2 and Media 3). To directly compare with the conditions of the dark-field scattering experiments, we plot only the spatial distribution of the calculated scattered field intensity in the plane perpendicular to the array [side view, Figs. 6(a)-6(c), 6(e)-6(g), 6(i)-6(k)] and in the plane parallel to the array [top view, Figs. 6(d), 6(h), 6(l)] located above the array surface (100 µm). Fig. 7. Colorimetric fingerprint formation in the plane of aperiodic arrays. Calculated spatial field distributions (top view) of the scattered light in the plane of a Gaussian prime array of nanospheres with n = 1.5 and 300 nm nearest center-to-center separation at (a) λ = 470 nm (blue), (b) λ = 520 nm (green), (c) λ = 630 nm (red), and a combined RGB image. The calculated intensity distribution of the scattered light in the plane of the Gaussian prime array illuminated by a plane wave at several visible wavelengths is plotted in Fig. 7(a)-7(c). Because the incident fields of different wavelengths resonantly interact with different length scales encoded in the aperiodic surface, the resulting monochromatic scattering intensity patterns show different distributions of intensity maxima in the array plane. The colorimetric patterns of the RGB principal frequency components are mixed together in Fig. 7(d). It can be seen that the single-color images do not overlap completely, resulting in the formation of complex colorimetric fingerprints characteristic of the Gaussian prime array surface morphology [compare to Fig. 3(h)]. Fig. 8. (a) Calculated spatial field distributions (top view) of the scattered light in the plane of a Gaussian prime array of nanospheres with n = 1.5 and 300 nm minimum interparticle separation at λ = 530 nm. (b) Same in the presence of a 10-nm thick index-matching layer covering the particles.
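As a worked example of the scalar grating relation quoted above as Eq. (1) (written below in the form n2·sin θsc = n1·sin θinc + mλ/Λ, an assumption about the exact sign convention of the paper's equation), the short script computes which visible wavelengths land near the surface normal for a 400 nm period grating illuminated at 75° incidence in air, in the spirit of Fig. 2(d). The collection half-angle used below is an illustrative stand-in for the NA = 0.15 cone mentioned in the text.

import numpy as np

LAMBDA_VIS = (400e-9, 700e-9)     # visible range considered, in metres
PERIOD = 400e-9                   # grating period Λ
THETA_INC = np.deg2rad(75.0)      # grazing illumination, 15° to the surface
N1 = N2 = 1.0                     # incident and scattered light both in air

def scattered_wavelength(theta_sc_deg, order):
    """Wavelength (m) diffracted into angle theta_sc for a given order m,
    from n2*sin(theta_sc) = n1*sin(theta_inc) + m*lambda/period."""
    theta_sc = np.deg2rad(theta_sc_deg)
    lam = (N2 * np.sin(theta_sc) - N1 * np.sin(THETA_INC)) * PERIOD / order
    return lam if LAMBDA_VIS[0] <= lam <= LAMBDA_VIS[1] else None

# Scan a narrow cone around the surface normal; with this sign convention
# the orders that can be collected are the negative ones.
for m in (-1, -2, -3, -4):
    for ang in range(-9, 10, 3):
        lam = scattered_wavelength(ang, m)
        if lam is not None:
            print(f"order {m}: {lam * 1e9:.0f} nm collected at {ang}°")
# Only one order falls inside the collection cone, consistent with the
# single-color response of periodic gratings described in the text.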
6,084.4
2010-07-05T00:00:00.000
[ "Physics" ]
Studies of the Lipid Phase Transitions of Escherichia coli by High Sensitivity Differential Scanning Calorimetry* SUMMARY High sensitivity adiabatic differential scanning calorimetry was performed on lipids, membrane vesicles, and whole cells of Escherichia coli enriched in particular unsaturated fatty acids by genetic means. Information concerning the shape of the transition is discussed. Transitions with an asymmetric shape reminiscent of a second order transition were observed. Comparison between the lipid transition observed in whole cells, membrane vesicles, and extracted lipids enriched in elaidate reveals some basic similarities. Studies of synthetic lipids were undertaken in an attempt to interpret the shapes of these transitions as a function of the lipid components of the membrane. The lipids of the membranes of Escherichia coli, in common with those of other organisms, undergo phase transitions from a gel phase to a liquid-crystal phase (1, 2). Numerous studies relating the state of the membrane to the physiology of the organism have been reviewed by Cronan and Gelmann (3). The availability of mutants defective in fatty acid synthesis has permitted genetic manipulation of the physical state of the membranes through their chemical composition (4). The lipid phase transition has been monitored by x-ray diffraction (2, 5), differential scanning calorimetry (1), and fluorescence (6). In this communication, we report such calorimetric measurements. The transition of the elaidate-enriched lipids (Fig. 1) is highly asymmetric with a gradual onset and an abrupt end at 40°. The transition enthalpy averaged for many experiments was 9.6 cal (g of lipid)-1. The oleate-enriched E. coli lipids have a broader transition which occurs at a lower temperature. Some variation in scan behavior was seen, but the scans shown here are representative. A scan obtained with elaidate-enriched whole cells is presented in Fig. 2A along with one obtained with lipids from the same cell growth (Fig. 2C). They have similar appearances, with the cell transition being broader, but with nearly the same temperature range and asymmetry as the lipid transition. The absorption of heat seen at higher temperatures is due to the irreversible denaturation of cellular material. Ambiguity in selecting the appropriate base-line for the scan with whole cells prevented an accurate determination of the transition enthalpy. Many scans with whole cells were of poor quality due to exothermic processes probably associated with cellular metabolism. It was found that fewer washings of the cells in Tris buffer, and shorter handling before calorimetry, made the exothermic processes less likely. However, this prevented complete removal of fatty acid from the growth medium and is probably the reason why the scan of lipids extracted from the same cells is of poorer quality than the scan shown in Fig. 1. Fig.
2B is a scan of membrane vesicles made from the same cells. This transition is also similar in shape to the whole cell and lipid transitions. Protein denaturation occurs at higher temperatures. The fatty acid chain compositions of elaidate-enriched lipids showed that 80 to 90%, and sometimes as much as 95%, of the fatty acid chains of the lipids were elaidate. The oleate-enriched lipids contained 40 to 50% oleate. In both cases, there were no C16 unsaturated fatty acids. No correlation between the fraction of elaidate and the variations in the transition behavior was seen in the various preparations. A calorimetric scan of dielaidoyl phosphatidylethanolamine is shown in Fig. 3A. It bears a strong resemblance to the transition of elaidate-enriched lipids. The enthalpy is 6.3 cal/g of lipid, and the temperature at the maximum is 37.5°. The asymmetry is very apparent although the transition is as narrow as that of other synthetic lipids studied in the same instrument (13). A scan obtained with dielaidoyl phosphatidylethanolamine containing 13.2 mol % dimyristoyl phosphatidylethanolamine is shown in Fig. 3B. The asymmetry seen here is considerably less than in elaidate-enriched E. coli lipids. The scan rate in A was 0.1°/min and in B was 0.8°/min. All of the lipid transitions shown were reversible, although the reheats of E. coli lipids, vesicles, and cells did not retain all of the sharpness or enthalpy of the first heats. DISCUSSION The temperatures of the transitions reported here agree well with those given in previous reports (2, 5, 6). Asymmetry in the lipid transitions of biological membranes has been observed in Acholeplasma laidlawii (14) as well as in some previous studies of Escherichia coli lipids (15, 16). The present experiments show this asymmetry much more clearly. The transition is broader in whole cells than in extracted lipids, but the temperatures at which the maxima in specific heat occur are nearly the same and the asymmetry is retained. It is possible that in E. coli the lipids are distributed unevenly between the inner and outer surfaces of a membrane, the inner and outer membranes, or laterally within the plane of a membrane. This could give rise to differences between the melting behavior seen in extracted lipids and in whole cells and could account for the observations made here; however, the observed differences in heating scans between cells, lipids, and membrane vesicles are not large enough to support or refute the contention that nonuniform lipid distributions are widespread. Other possible explanations are that various membrane proteins have differing affinities for fluid and solid lipids (17), or that the membrane proteins are positioned in such a way that they disrupt the cooperativity of the lipid phase transition. The growth of strain K1060 is profoundly affected by the fatty acid which supplements the growth medium (4). In the case of elaidate enrichment, 37° is found to be the minimum growth temperature. K1060, growing at 37°, would be growing below the end of the lipid transition and would have a substantial fraction of its lipids in the gel state. Experiments on a different strain grown with the minimal amount of unsaturated fatty acids (18) indicate that this strain can grow at a temperature well below the end of the lipid transition, when the membranes are not completely fluid. Since E.
coli lipids are 60 to 70% phosphatidylethanolamine (19), the elaidate-enriched lipids of the strain studied here have dielaidoyl phosphatidylethanolamine as their major chemical species. In comparing the calorimetric scans of elaidate-enriched K1060 lipids and of synthetic dielaidoyl phosphatidylethanolamine, many similarities are evident. The temperatures of the transitions are nearly equal, suggesting that the cells have reduced their transition to the lowest obtainable temperature by incorporating as much elaidate as possible. Another striking similarity is the asymmetry evident in the two transitions. The occurrence of asymmetric transitions similar to the one observed here for dielaidoyl phosphatidylethanolamine appears to be a general property of phosphatidylethanolamines (13). Purified synthetic lecithins, in common with most pure crystalline substances which undergo a phase transition, have melting heat capacity curves which are symmetrically broadened derivative curves of the step functions in enthalpy expected for first order transitions. The melting of phosphatidylethanolamines clearly deviates from this behavior. In addition, it is difficult to construct a phase diagram (13), with all components completely miscible in both phases, which can predict melting behavior of the sort seen here with elaidate-enriched lipids. The shapes of these transitions thus suggest unusual phase behavior and possibly a second order transition. A second order phase transition is distinguished from a first order phase transition by the lowest order of the derivative of the free energy in which one sees a discontinuity with temperature. Thus, a first order transition has a latent heat, but a second order transition has a discontinuity in the heat capacity without having a latent heat. Such transitions are also called λ transitions, and some elementary discussion of these phenomena can be found in Ref. 20. Discontinuities in any property are in actual systems always smeared out to some extent, and for this reason we cannot identify with certainty the order of these transitions. The asymmetry could be due to a pretransitional heat uptake as is seen in some liquid crystals (21). It is also possible that the many components of the E. coli lipids, and impurities in the synthetic lipids, give rise to an asymmetric broadening of a first order transition. In this connection, it is significant that repeated recrystallization from ethanol and chloroform/acetone of synthetic phosphatidylethanolamines, including dielaidoyl phosphatidylethanolamine, did not change the shape or temperature of the transition once the lipid was purified to constant behavior. Thus, an impurity is probably not the cause of the asymmetry in dielaidoyl phosphatidylethanolamine or other phosphatidylethanolamine transitions. To aid in analyzing the asymmetry of such transitions, we shall introduce a quantitative index of asymmetry. A horizontal line is drawn on a differential scanning calorimetry scan at half the maximal heat capacity. This line intersects the heat capacity plot above and below the temperature at which the maximum in heat capacity occurs (Tmax). We shall divide the temperature difference between Tmax and the lower intersection temperature by the difference between the upper intersection temperature and Tmax. This ratio will be used as an index of asymmetry. Synthetic dielaidoyl phosphatidylethanolamine has an asymmetry index of between 2.5 and 3.5. The scan in Fig.
1 has an asymmetry index of 8.0. An attempt to simulate the fatty acid chain composition of elaidate-enriched E. coli lipids with a mixture of two phosphatidylethanolamines is shown in Fig. 3. In this mixture, and in other mixtures, the asymmetry index was slightly lower than in pure dielaidoyl phosphatidylethanolamine, and we conclude that this experiment was not successful in reproducing the asymmetry of the E. coli lipid transition. The oleate-enriched E. coli lipids, in which the fatty acid chains were much more heterogeneous, had a rounder and more symmetrical transition curve. It may be that the shape of the elaidate-enriched E. coli lipid transition is a result of the polar group heterogeneity and that the fatty acid chains are homogeneous enough not to influence the shape of the transition significantly. Diversifying the fatty acid chains gives the result seen with oleate-enriched E. coli lipids, where the transition is broader and more symmetrical. Further study of the phase behavior of the lipids of E. coli is important, since it is still not known whether the two coexisting phases seen by x-rays during melting (4) have different compositions (22), and knowing the composition of the two phases may provide important information about the impact that the transition has on the functioning of membrane proteins.
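As a practical aside, the asymmetry index defined above is straightforward to compute from a digitized scan. The sketch below assumes a single heating scan stored as arrays of temperature and excess heat capacity; the linear interpolation of the half-height crossings is an illustrative choice, not a procedure described in the text.

import numpy as np

def asymmetry_index(temps, cp):
    """Asymmetry index of a calorimetric transition: (Tmax - T_lower) / (T_upper - Tmax),
    where T_lower and T_upper are the temperatures at which the heat capacity crosses
    half of its maximum below and above Tmax. Assumes the scan extends below the
    half-height on both sides of the peak."""
    i_max = int(np.argmax(cp))
    t_max, half = temps[i_max], cp[i_max] / 2.0

    def crossing(i, j):
        # linearly interpolate the temperature at which cp crosses the half-maximum
        return temps[i] + (half - cp[i]) * (temps[j] - temps[i]) / (cp[j] - cp[i])

    lo = max(i for i in range(i_max) if cp[i] <= half)               # last point below half on the low-T side
    hi = min(i for i in range(i_max + 1, len(cp)) if cp[i] <= half)  # first point below half on the high-T side
    t_lower = crossing(lo, lo + 1)
    t_upper = crossing(hi - 1, hi)
    return (t_max - t_lower) / (t_upper - t_max)

# A symmetric peak gives an index near 1; the elaidate-enriched lipid scan in Fig. 1 gives about 8.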
2,528.4
1977-07-25T00:00:00.000
[ "Biology", "Chemistry" ]
Changes in Carbon Electrode Morphology Affect Microbial Fuel Cell Performance with Shewanella oneidensis MR-1 The formation of biofilm-electrodes is crucial for microbial fuel cell current production because optimal performance is often associated with thick biofilms. However, the influence of the electrode structure and morphology on biofilm formation is only beginning to be investigated. This study provides insight on how changing the electrode morphology affects current production of a pure culture of anode-respiring bacteria. Specifically, the effects of carbon fiber electrodes with drastically different morphologies on biofilm formation and anode respiration by a pure culture (Shewanella oneidensis MR-1) were examined. Results showed that carbon nanofiber mats had ~10-fold higher current than plain carbon microfiber paper and that the increase was not due to an increase in electrode surface area, conductivity, or the size of the constituent material. Cyclic voltammograms reveal that electron transfer from the carbon nanofiber mats was biofilm-based, suggesting that decreasing the diameter of the constituent carbon material from a few microns to a few hundred nanometers is beneficial for electricity production solely because the electrode surface creates a more suitable mesh for biofilm formation by Shewanella oneidensis MR-1.

Introduction

The bare anode of a microbial fuel cell (MFC) receives electrons from bacteria, serves as the substratum for bacteria to attach and initiate biofilm formation, and provides the scaffold on which the biofilm grows [1]. In many cases, the formation and health of the biofilm are directly correlated to high current production by an MFC [2][3][4]. Therefore, understanding how the electrode structure and morphology might influence the formation and size of a biofilm in a biofilm-anode is paramount for the development of any biofilm-electrode based technology. Several studies have reported that changing the structure of the anode resulted in an increase in current production [5][6][7]. These studies focused on how the electrode properties influenced the electrochemical reaction or increased the available/reactive surface area, thus providing a foundation for later investigation into how electrodes affected biofilm formation and growth [8,9]. Observations from these studies led to the modification of anodes in order to further increase reactive surface area [7,10,11] and/or decrease overpotentials, a conventional approach borrowed from catalytic fuel cell research [12][13][14]. For example, Logan et al. [11] showed that increasing the overall surface area by employing a graphite electrode brush increased current density by ~2.5 times compared to a carbon cloth anode. At the same time, however, Dewan et al. [15] found that current densities for electrodes with a larger surface area cannot always be directly extrapolated from the current densities generated by smaller electrodes. Additionally, Dewan et al. [15] found that power densities scale with the logarithm of the projected surface area. As a result, anodes that serve as the substratum for electricity-producing biofilms may need to incorporate more than just a higher surface area or decreased activation overpotential. Perhaps anode selection should also account for factors that may influence the bio-electrochemical reaction indirectly, such as an anode surface morphology that impacts the onset and growth of the biofilm.
Given the size of a typical bacterium (1-3 μm), increasing the surface area to volume ratio of the material does not necessarily increase the surface area available for bacterial respiration after some threshold [7]. However, changes at the micro and nanometer scale affect the surface morphology of the electrode that bacteria and their biofilms attach to and grow on. Changes in surface morphology have already been shown to affect biofilm growth [16,17]. More importantly, several studies have correlated changes in electrode structure and biofilm-anode performance of mixed cultures [18,19]. In order to build upon these findings and eliminate the possibility that differences in performance were due to differences in the physiological profile of the mixed culture, it is important to investigate whether changes in electrode surface morphology influence the ability of an electrode to spur biofilm formation in a pure culture and thus increase biofilm-anode current production.

The interface between a biofilm and an anode cannot be understood by evaluating the individual components (i.e., a bacterial species or electrode material). As a result, determining an electrode's effect on biofilm formation requires simultaneous evaluation of the electrode's properties and an understanding of the physiology of the bacteria in an electrochemical context. While one can easily measure the conductivity of an electrode and subjectively evaluate its surface morphology, accounting for the physiology of the bacteria is more challenging, since a change in the environmental conditions can trigger different mechanisms of extra-cellular electron transfer (EET) in the bacteria [20,21].

Studies on EET in a pure culture like Shewanella oneidensis MR-1 facilitate the determination of which mechanism is being used. For example, Marsili et al. [21] revealed that riboflavin is the shuttle used by Shewanella oneidensis during mediated electron transfer and showed that it is oxidized at a specific potential. This helps to explain its ability to respire the electrode as a planktonic biomass [22]. Additionally, Baron et al. [20] showed that S. oneidensis employs direct electron transfer at a distinctly different potential. Their use of cyclic voltammograms (CVs) of the anodes provides a way to reasonably identify, based on the reduction potential, which EET mechanism (mediated or direct electron transfer from a biofilm) is being used and to what extent. While the shape of cyclic voltammograms for reversible electron transfer for soluble mediators (i.e., riboflavin) is widely established [23], the presence of direct electron transfer from a biofilm and how it manifests itself in CVs for microbial fuel cells is a more recent discovery [24,25].
Engineering electrodes for optimal biofilm-anodes can be improved by examining the effects of electrode properties on biofilm-anode formation and by devising experiments that incorporate the fundamental physiological findings in the literature [20,21], biofilm kinetics, and bioelectrochemistry. Given that several engineering or modification studies have shown significant changes in biofilm colonization and formation when surface morphologies were changed for mixed cultures [17-19,26,27], it is only appropriate to examine how this might affect the biofilm-electrode interface of a pure culture in which the electrode surface uniquely serves as both the substratum and the terminal electron acceptor. Using a pure culture removes any inconsistencies regarding the physiological profile of the community, the presence of scavengers, metabolic pathways that serve as electron sinks (e.g., methanogenesis), and the community dynamics associated with bacterial competition.

Here the effect of changing the morphology of the anode surface (i.e., decreasing the diameter of the electrode's constituent material) on anode respiration/current production by Shewanella oneidensis MR-1 is studied. Amperometry was used to monitor current production over time, CVs were used to account for the electron transfer mechanisms, and the differences between electrode materials were characterized using scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDX), conductivity measurements, and areal weight measurements.

Electrode Characterization

Plain Toray carbon paper (PTCP) (TGPH-120, E-tek, Somerset, NJ, USA), referred to as carbon microfiber (CMF) paper, and carbon nanofiber (CNF) mats (Applied Sciences, PR-19-XT-HHT, Cedarville, OH, USA) were used as anodes in this study. 1 cm² electrodes were cut from each sample and weighed to determine areal weight. Electrode conductivity was measured using a standard 4-point probe measurement. Electrodes were soaked in 1 M sulfuric acid for at least 1 h prior to installation in the reactor. Prior to examination, the fixed electrodes were sputtered with palladium using a Cressington Sputter Coater 108 Auto (Cressington, Watford, UK) for 30 s. Images of the anodes were taken before and after Micro-Electrolysis Cell (MEC) operation for comparison. Images were taken using a JSM-6510LV SEM (JEOL, Peabody, MA, USA) set at 20 kV.

Micro-Electrolysis Cell (MEC) Operation

The single-chamber 1-L reactor contained three working electrodes (each 1 cm²) positioned equidistant from a single Ag/AgCl reference electrode and a counter electrode (6 cm²). The counter electrode was made of plain Toray carbon paper with a 1000 Å thick layer of platinum deposited onto its surface via electron beam evaporation [28]. A multi-channel potentiostat (CH Instruments 1040A, Austin, TX, USA) was used to maintain a potential of +0.043 V vs. Ag/AgCl for each working electrode. Current was measured and recorded every 100 s (amperometric measurements). CV scans were conducted over a range from −0.7 to 0.3 V at a rate of 2 mV/s. The reactor was sparged with N2 gas and wrapped in aluminum foil during operation. Fuel additions consisted of injecting 10 mL of 100 mM lactic acid together with 10 mL of trace element solution and 10 mL of vitamin solution. The reactor was stirred with a magnetic stir bar at 60 rpm.
The experiments were initiated using the sterile medium described above. After 2 days of abiotic operation, 10 mL of LB medium containing Shewanella oneidensis MR-1 was inoculated into the reactor. After two weeks of operation, the anodes in the reactor were sacrificed for SEM images. The anodes were removed from the MEC, rinsed with phosphate buffer, placed in a 4% paraformaldehyde solution for ~15 min, rinsed with de-ionized water, and placed in a petri dish. These fixed electrodes were then set aside for imaging. The paraformaldehyde solution was made by adding 4 g of paraformaldehyde to 70 mL of de-ionized water, heating the solution to 70 °C, adding drops of 1 N NaOH until the solution cleared, adding 9 mL of 1 M phosphate buffer after the solution cooled, and refrigerating it overnight.

Current Production

The differences in current production between the CNF and CMF working electrodes were monitored amperometrically (Figure 1). The CNF electrode generated several times more current than CMF throughout the experiment, and the current is comparable to that generated in previous experiments [29]. The superior performance of CNF is confirmed by the fact that it exhibited a ~10-fold increase in current over that of CMF and that, after the substitution of new electrodes into the MEC on day 15, current production by both CNF and CMF returned to the same levels exhibited prior to electrode replacement. Again, current production by CNF was substantially higher. The shapes of the I-t (current vs. time) curves throughout the experiment are identical and differ primarily in magnitude, with CNF producing up to 10 times more current. The length of time given to the bacteria to colonize the electrode and generate current is well beyond the times allotted in various experiments for biofilm formation, suggesting that the time allowed for bacteria to agglomerate on the surface is not an issue [20,30]. However, determining whether the current was generated by a biofilm or a planktonic mass is important and can be elucidated using CV.

Cyclic Voltammograms

Cyclic voltammetry was performed on both electrodes on day 2 and on day 15. On day 2, the voltammograms for CMF and CNF are similar in amplitude and shape (Figure 2A). However, the voltammograms taken on day 15 (Figure 2B) show that CNF is trending more towards a Nernst-Monod sigmoidal curve [24,25], while CMF maintains a similar shape to that exhibited on day 2. After fitting the CV data taken on day 15 to the Nernst-Monod model (Figure 2B), it is clear that the CV for CNF correlates better with the Nernst-Monod sigmoidal shape than the CV for CMF.

Biofilm-Based Electron Transfer

CVs for an anode-respiring biofilm will exhibit different shapes than CVs for a planktonic biomass using mediators. Biofilms using conduction-based electron transfer will have a voltammogram with a sigmoidal profile [24,25], while mediated electron transfer (planktonic biomass) will often show simple oxidation and reduction peaks [8]. The shapes of the voltammograms taken on day 15 (Figure 2B) show that CNF is trending toward a sigmoidal curve, like that of the Nernst-Monod model, while the shape of the voltammogram for CMF shows no significant changes from day 2.
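For readers unfamiliar with the Nernst-Monod model referenced here, it treats the catalytic current of a conducting biofilm as a sigmoidal function of anode potential. The sketch below uses the commonly cited form of that expression; the exact formulation and all parameter values are assumptions for illustration and are not taken from this study.

import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)
T = 303.0     # assumed reactor temperature, K

def nernst_monod_current(E, j_max, E_ka, n=1):
    """Sigmoidal biofilm current density vs. anode potential E (V).
    j_max is the limiting current density and E_ka the potential at
    half-maximum current; both are illustrative placeholders here."""
    return j_max / (1.0 + np.exp(-n * F / (R * T) * (E - E_ka)))

# A forward scan of a conducting biofilm approximately traces this curve,
# whereas mediated (planktonic) transfer shows discrete redox peaks instead.
E = np.linspace(-0.7, 0.3, 201)                      # V vs. Ag/AgCl, the scan range used in this study
j = nernst_monod_current(E, j_max=1.0, E_ka=-0.45)   # placeholder parameter values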
The fact that CNF correlates better with its Nernst-Monod model fit suggests that it formed a more complete conducting biofilm-electrode than CMF. Specifically, the sigmoidal profile generated by CNF on the forward scan and the decrease of the reduction peak on the reverse scan support this trend in the CNF voltammogram. The fact that most of the current for CNF is generated above the redox potential of riboflavin (−0.41 V vs. Ag/AgCl) also supports the idea that mediated transfer was not responsible for the increase in current production. This suggests that electricity from the CNF electrode is being produced by electron transfer from a biofilm. The SEM images in Figure 3 confirm that CNF has formed a substantial biofilm on its surface.

Comparison of Electroactive Surface Area and Kinetics Using CVs

There is no indication in the CVs (Figure 2) that CNF has a significant advantage because it has more electroactive surface area. If the increased current production by the CNF electrode were merely a function of surface area, the shape of the voltammogram for both electrodes would be identical, differing only in the magnitude of current production. In other words, the shapes of the voltammograms would look the same, but the voltammogram for CNF would be scaled up vertically because of higher current production. Since a kinetic advantage is often obtained from electron transfer for materials that are similar in size to their reductant (i.e., cytochromes or mediators) [31], it is important to account for the size disparity between the constituent materials of the electrodes (i.e., the size difference between carbon nanofibers and carbon microfibers). If the voltammogram for the CNF electrode were shifted horizontally to the left, relative to the voltammogram of the CMF electrode, this would indicate that CNF is more efficient than CMF at catalyzing the reaction. In the voltammograms of Figure 2A, the horizontal positions for the onset of current are identical; neither electrode displayed a kinetic advantage (i.e., no large decrease in the activation overpotential). In other words, the similarity between the voltammograms taken on day 2 (Figure 2A) suggests that neither electrode possessed improved catalytic properties. As a result, the advantage of using CNF is not due to higher specific surface areas (i.e., a higher concentration of active sites) or faster kinetics. This may mean that other factors (i.e., electrode conductivity and electrode morphology) contributed to the increased current production and biofilm formation on CNF.

SEM Images for Biofilm Colonization

SEM images were used to demonstrate biofilm colonization on the electrodes. The images in Figure 3 highlight the biofilm that formed on the CNF mats, while SEM imaging of the CMF mat showed no appreciable biofilm. Micrographs of the CNF electrode reveal a biofilm as well as outlines of cells (Figure 3A,B) and are similar to other micrographs of Shewanella oneidensis MR-1 biofilms on electrodes [30]. Figure 3C is a magnification of a single bacterium embedded in the CNF biofilm-electrode.
Biofilm Formation

The differences in biofilm formation by Shewanella oneidensis MR-1 are less surprising when we consider that it can also respire electrodes as a planktonic biomass and that single physical mutations to Shewanella oneidensis MR-1 have been shown to have profound effects on biofilm formation. For example, previous studies of biofilm formation by Shewanella oneidensis MR-1 showed that the presence of the flagellum, swimming motility, the presence of a mannose-sensitive hemagglutinin type IV pilus, and pilus retraction played a significant role in the ability of Shewanella oneidensis MR-1 to form a biofilm. Specifically, the lack of a flagellum decreased the concentration of biomass (decreased biofilm formation), the lack of motility prevented the formation of a pronounced three-dimensional biofilm architecture (bulk structure), the mutants defective in mannose-sensitive hemagglutinin type IV pilus biosynthesis had defects in initial attachment, and the mutant defective in pilus retraction displayed poor propagation of the biofilm [30]. In addition, another study showed that mutants lacking the gene pilD (implicated in type IV pilin production) and the protein secretion genes gsp and gspD produced less current in MFCs relative to the wild-type S. oneidensis MR-1. The images of the electrodes used in those microbial fuel cell experiments with the mutant lacking pilD revealed a lack of biofilm as compared to the wild-type [29]. These previous studies provide a foundation from which to investigate how morphology affects different phases of biofilm formation at a genetic level but, more importantly, they highlight that small changes in how a cell interacts with its environment can have significant consequences for the entire biofilm. As changes in substratum structure have affected biofilm formation in studies with mixed cultures [17-19,27], it is important to examine the differences in electrode morphology here for this pure culture.

Morphology of Sterile Electrodes

The electrode features revealed in the micrographs in Figure 4 highlight the morphological differences between CNF and CMF. The CNF mat shows a woven matrix of carbon nanofibers set upon a carbon scaffolding (Figure 4A), while CMF exhibits more of a rigid interlinked structure (Figure 4D). The typical diameter of the constituents used in CMF is ~10 μm, while the carbon nanofibers are ~200 nm in diameter (Figure 4E vs. 4C). Carbon nanofibers are more flexible, whereas carbon microfibers are linear and rigid. It is important to note the difference in morphology at the scale of a single bacterium when comparing the electrodes in Figure 4, because the electrode features, relative to the size of the bacteria (i.e., 1-3 μm), are orders of magnitude different. CMF exhibits a constituent material with a serrated surface that is much larger (i.e., 10 μm) than a single bacterium. Conversely, CNF exhibits a constituent material much smaller (i.e., 200 nm) than a single bacterium. In addition to these characterizations, physical and electrical properties of both electrodes are listed in Table 1.

Impact of Electrode Morphology
A single bacterium of Shewanella oneidensis MR-1 attaching to the surface of CNF would be in contact with multiple nanofibers (Figure 4A-C) but would cover only a portion of a single fiber of CMF (Figure 4B,D,F). Since the bacteria adhere to the features of the electrode, it is important that the spacing between the features of each electrode be within a distance over which bacteria can effectively collaborate. This distance, while not established quantitatively, has been shown to influence biofilm formation in medical studies [17]. That same influence is mimicked here, as the tighter spacing/morphology of the CNF electrode spurs on better biofilm formation. While a minimum threshold for surface rigidity (stiffness) has been shown to inhibit biofilm accumulation [32] for other bacteria, the "softer" and more mesh-like CNF shows no such issues here, as demonstrated by the prevalence of biomass on the electrode.

Differences in Electrode Conductivity

Electrode conductivity is a function of areal weight (mass/geometric surface area). A more densely packed material (i.e., higher areal weight) translates into a smaller resistance to current (i.e., higher conductivity). CMF has a larger areal weight and a higher conductivity, yet it is CNF that produces more biofilm and more current. It seems that the smaller conductivity of CNF does not affect the formation and performance of its biofilm-electrode. A recent study by Malvankar et al. [33] showed, for Geobacter sulfurreducens, that there is a direct correlation between the conductivity of the biofilm and current production. They observed biofilm conductivities as high as 0.5 S/m. In our studies, CNF showed a conductivity of 1300 S/m while CMF had a conductivity of ~15,500 S/m. The differences in these values support the idea that the conductivities of CNF and CMF have no significant effect on differences in current production, because both conductivities were substantially greater than the highest reported biofilm conductivities and because the electrode that performed better, CNF, had the lower conductivity. Additionally, even with electrode materials with higher resistivities, Chen et al. [27] were able to generate much higher current densities with mixed cultures, suggesting that for general electrode materials there is not a strong correlation between resistivity/conductivity and current density.

Ultimately, the CNF electrode generated more current (Figure 1), exhibited a voltammogram that showed the current was being generated by a biofilm (Figure 2), and showed substantial coverage by bacteria when examined under a SEM after the experiment (Figure 3). The results presented here illustrate the overall trend and repeatability seen in multiple experiments. Here we used two sets of electrodes to demonstrate the consistency with which CNF outperforms CMF. Since the electrodes were exposed in the same reactor at the same time, differences in current production are best explained by differences in the nature of the electrode materials. The advantages typically associated with CNF are increased surface area [34], better kinetics [35], and high conductivity [36]. In this case, however, the advantage of using CNF electrodes was the electrode surface morphology created by its thinner constituent carbon material. This provided a better mesh for bacterial colonization and growth, which produced a more substantial biofilm-anode and led to an increase in current production.
Conclusions

In this study, Shewanella oneidensis MR-1 produced significantly more current with CNF than with CMF. The examination of sterile electrodes showed that CNF and CMF differed in morphology, surface area, size of the constituent material, and conductivity. After accounting for differences in surface area, size of the constituent material, and electrode conductivity, the results suggest, surprisingly, that the morphology (i.e., tighter spacing/size of the features) of the CNF electrode surface is what enables the formation of electricity-producing biofilms by a pure culture relative to CMF. Therefore, controlling electrode morphology and structure may have significant consequences for biofilm-electrode formation and current production in other pure cultures.

Figure 1. Amperometric data from an MEC inoculated with Shewanella oneidensis MR-1. Current production by carbon nanofiber mats/CNF (red) and carbon microfiber paper/CMF (blue) was monitored over a 4-week period.

Figure 2. Cyclic voltammograms for carbon nanofiber mats/CNF (red) and carbon microfiber paper/CMF (blue) at Day 2 (A) and Day 15 (B) of the experiment. Day 15 was chosen because of the difference in current production. Electrode replacement took place after the CV. CVs were scanned from −0.7 V to +0.3 V vs. Ag/AgCl at 2 mV/s.

Figure 3. SEM images of increasing magnification of anodes evaluated in an MEC for 2 weeks and inoculated with Shewanella oneidensis MR-1. Images of the carbon nanofiber mat/CNF at increasing magnification (A,B); a magnified image of a single bacterium, set in a biofilm, found on the CNF electrode is also shown (C).
5,010.6
2015-03-04T00:00:00.000
[ "Materials Science" ]
Polar Vortex Multi-Day Intensity Prediction Relying on New Deep Learning Model: A Combined Convolution Neural Network with Long Short-Term Memory Based on Gaussian Smoothing Method The variation of polar vortex intensity is a significant factor affecting the atmospheric conditions and weather in the Northern Hemisphere (NH) and even the world. However, previous studies on the prediction of polar vortex intensity are insufficient. This paper establishes a deep learning (DL) model for multi-day, long-lead-time intensity prediction of the polar vortex. Focusing on the winter period, when the polar vortex is strongest, geopotential height (GPH) data of NCEP from 1948 to 2020 at 50 hPa are used to construct a dataset of polar vortex anomaly distribution images and a polar vortex intensity time series. Then, we propose a new convolution neural network with long short-term memory based on Gaussian smoothing (GSCNN-LSTM) model, which can not only accurately predict the day-to-day variation of polar vortex intensity but can also produce a skillful forecast for lead times of up to 20 days. Moreover, the innovative GSCNN-LSTM model has better stability and more skillful correlation prediction than traditional and some advanced spatiotemporal sequence prediction models. The accuracy of the model suggests that DL methods are well suited to forecasting nonlinear systems and the spatial-temporal variation of vortices in the atmosphere.

Concept and Research Background

The Arctic polar vortex, hereinafter referred to as the polar vortex, is one of the major atmospheric circulation systems affecting the atmospheric conditions and weather in the Northern Hemisphere (NH). It plays a critical role in the feedback mechanism of stratosphere-troposphere exchange and in the high-latitude weather circulation system. The formation of the polar vortex directly affects the polar stratosphere-troposphere exchange (STE) process and the polar environment, and the variation of its intensity and position inevitably leads to circulation anomalies, which are specifically reflected in its impact on temperature and precipitation [1][2][3][4]. The location and intensity of the polar vortex are closely related to cold air activities over the Eurasian continent [1,[5][6][7], and they also interact strongly with the El Niño-Southern Oscillation (ENSO) [8][9][10], the North Atlantic Oscillation (NAO) [11,12], the Quasi-biennial Oscillation (QBO) [13][14][15], and other atmospheric circulation systems. Besides, from the perspective of Earth's ecology and environment, the polar vortex also plays a significant role in Arctic sea ice loss [16,17] and in global warming feedback mechanisms [18,19]. The intensity variation of the polar vortex is a complicated nonlinear system. In recent years, many studies have pointed out that drastic intensity variation and fragmentation of the polar vortex lead to more extreme weather events, such as cold waves and strong snowfall [5][6][7]. Strengthening of the polar vortex will lead to low ozone values, which may be connected with air pollution and extreme weather events [20,21], further causing sudden changes in the global ecological environment and affecting human health. Among the studies on the effect of polar vortex intensity, Baldwin et al.
[22] pointed out that the breaking and weakening of the polar vortex are accompanied by a rise in stratospheric temperature and geopotential height, which affect global surface weather, and that weak polar vortex events can also cause temperature fluctuations in the lower troposphere and drive the Arctic Oscillation (AO) into a persistent negative phase [12]. Oehrlein et al. [23] found that strong polar vortex events and their variability respond to a certain degree to chemical forcing (mainly ozone), indicating that the interaction between polar vortex intensity and chemical species is an important system representing winter climate change in the North Atlantic and Europe. The impact of polar vortex intensity variations on many extreme weather events, together with the role of chemical species, shows that it is of great significance to accurately predict the polar vortex intensity. Current studies have illustrated that there exists an obvious interannual and decadal variation in polar vortex intensity [19,[24][25][26], while few previous studies have focused on the variation of intensity at the daily scale; most of them focus on case studies of strong/weak polar vortex intensity events in particular seasons, on impacts on extreme weather events, and on the role of chemical substances [12,23,27]. The prediction of almost all nonlinear indexes in the atmosphere, such as the ENSO index, tropical cyclone (TC) intensity, and extratropical cyclone activity, shows strong uncertainty, and models find it difficult to capture the physical rules of the nonlinear system itself, e.g., [28][29][30][31][32][33]. Therefore, the prediction of polar vortex intensity on the day-to-day scale is a new direction that is worth exploring. It can further help predict the impact of the polar vortex on atmospheric circulation systems on the intraseasonal scale and improve the probability of predicting extreme events. In the field of atmospheric and ocean science, plenty of atmospheric circulation phenomena and the prediction of various weather system indexes provide an important basis for the prediction of polar vortex intensity. For example, Gray et al. [34] utilized the positive prediction model to predict polar vortex events, demonstrating that better reflecting the characteristics of vortex flow in the stratosphere can increase the predictability of the polar vortex. Zheng et al. [29] used NMME and S2S models to predict winter extratropical cyclone activities from the sub-seasonal to the seasonal scale, pointing out that sub-seasonal prediction in East Asia and other regions is related to anomalies of the stratospheric polar vortex. Lee et al. [35] compared six prediction systems for predicting the stratospheric polar vortex and the positive tropospheric Arctic Oscillation (AO) in NH winter. The results show that there exists a strong correlation between the accuracy of stratospheric vortex and AO predictions. These studies demonstrated the importance of polar vortex intensity prediction, but they did not explicitly predict the day-to-day variation of the polar vortex, and polar vortex events are always predicted by numerical prediction models.
Thus, accurately predicting the variation of polar vortex intensity and capturing the characteristics of the polar vortex distribution can not only provide important guidance for the prediction of various nonlinear systems affecting the global atmosphere and climate change, but can also help optimize advanced methods for forecasting various vortex intensity indexes. According to present research, the main methods used in predicting nonlinear index variations in the atmosphere are traditional mathematical methods and model prediction, while traditional mathematical-statistical methods have many defects. They can only learn the sequence characteristics of the data and cannot capture the physical rules of the atmosphere. Traditional mathematical-statistical methods such as autoregression (AR), moving average (MA), and autoregressive integrated moving average (ARIMA) cannot capture the spatial distribution information of physical phenomena in the atmosphere when predicting time series [36][37][38][39][40][41]. Similar to TCs and many synoptic-scale vortices, the distribution pattern of the polar vortex can change greatly in a few days or even a few hours, so the role of physical variables and dynamic fields should be fully considered. Moreover, the polar vortex intensity index varies rapidly on the daily time scale, and traditional models cannot capture the signal of intensity variation very well, so it is necessary to further analyze the temporal and spatial characteristics.

Related Works and Research Gap

Deep learning (DL) is an important branch of artificial intelligence (AI). With the advent of the information age, DL methods play an important role in various fields of natural science research and have made outstanding contributions. Nosratabadi et al. [42] pointed out that hybrid DL models will be further applied to various fields of data science, including the stock market, marketing, and cryptocurrency. With the extensive development of computer science and statistics, AI also promotes the application of DL in Earth science, hydrological processes, and climate change, giving DL algorithms natural applicability in weather forecasting [43]. DL algorithms and models have the ability to learn from large amounts of long-duration signal data and to extract image features; that is, they provide a powerful nonlinear function-fitting ability for forecasting the polar vortex index. Compared with ordinary atmospheric models and traditional mathematical methods, many DL neural networks have strong advantages [44][45][46][47][48]. In recent years, traditional DL models such as the multilayer perceptron (MLP), recurrent neural network (RNN), and convolutional neural network (CNN) for learning multidimensional fields have been widely used in the atmosphere and ocean [49][50][51][52][53][54][55]. When using traditional DL methods for time series prediction and image feature extraction, more accurate prediction results can usually be obtained by adjusting the parameters and structure of the network model. For example, Ham et al. [30] have shown that, compared with many atmospheric models, a CNN achieves a higher correlation when predicting the El Niño index and extends the forecast lead time of ENSO events to one and a half years. Deng et al. [56] further proposed a vortex identification method based on CNN, which can quickly detect vortices from the flow field in an objective and robust way and addresses the defects of traditional methods.
Similar to the vortex structure of the polar vortex, there is a lot of research on TC track and intensity prediction, e.g., [31,[57][58][59][60][61]. For example, an MLP network was used to predict the position of the cyclone eye in high-resolution 3D remote sensing images by [62]. Alemany et al. [63] considered all types of hurricanes and used an RNN to predict hurricane trajectories by adding a grid identification number to learn the spatial relationship on the map, obtaining better prediction accuracy than the results of [64]. Rüttgers et al. [31] used typhoon satellite images as the input to a generative adversarial network (GAN); after adding dynamic fields such as the velocity field, the prediction of typhoon trajectories was significantly improved. The long short-term memory (LSTM) network was first proposed by [65], and many variant structures and applications have been derived from it. Sutskever et al. [66] provided a general framework for sequence-to-sequence learning by applying an LSTM encoder-decoder framework. Karevan et al. [55] achieved remarkable results in weather prediction tasks through LSTM and its improved models. Numerous previous studies indicate that CNN, LSTM, and their improved models are also widely used in different scientific research fields. Similar to the description of [67], we further list some of the latest representative applications of CNN and LSTM in various scientific fields, as shown in Table 1. However, due to the chaotic characteristics and high uncertainty of various weather systems in the atmosphere, many extreme weather events and meteorological and oceanic indexes show complex variation characteristics. Ordinary machine learning models cannot extract the change characteristics of these systems well, and their prediction accuracy has reached a bottleneck. For example, the variation of polar vortex intensity and location is closely related to sea surface temperature (SST) and sea ice loss in the Arctic, but the characteristics of such uncertain influence factors and physical information need to be further extracted by complex networks. DL models are also developing in the direction of deeper and wider architectures. With the renewal and iteration of neural networks, many spatiotemporal prediction models have been proposed and applied to the prediction of various systems in the atmosphere [76][77][78][79][80][81][82][83]. The ensemble DL model is one of the most typical cases. Compared with traditional DL models, ensemble models often achieve better prediction results and can capture the important features in images and time sequences more accurately. A mask region-based convolutional neural network (mask R-CNN) model for quasi-supervised re-identification of tropical cyclones proposed by [84] showed good performance in the field of cyclone identification. Lguensat et al. [76] introduced EddyNet, which can automatically detect and classify eddies from sea surface height (SSH) maps, providing a simple and powerful tool for the marine remote sensing community. In the research of weather phenomenon prediction, a data-driven neural network model for lightning prediction, called LightNet, was proposed by [77]. The experimental results illustrated that LightNet can achieve a threefold improvement in equitable threat score for six-hour predictions compared with the other three models.
The convolutional LSTM (ConvLSTM) network was first proposed by [85] for short-term precipitation prediction, where it produced better prediction results than the traditional model. Moreover, ConvLSTM has been widely improved upon and applied to many fields, such as feature recognition and spatiotemporal prediction [82,86,87]. Similarly, the application of SmaAt-UNet proposed by [78] to short-term precipitation can also make up for the inability of numerical weather forecasts to use the latest information for short-term forecasting. The generation of streamline models is also an essential way to analyze meteorological fields; Lee et al. [81] described the flow field based on a three-dimensional U-net regression model and line integral convolution (LIC) volumes with remarkable speed and visualization quality. With the development of mathematics and physics, many mathematical methods for time series and image processing have been gradually proposed and updated. In order to better approximate the real values and improve the training efficiency of DL models, some of the latest studies suggest that plenty of advanced methods can also be improved to complete such tasks with higher efficiency. For example, Big Bird, a sparse attention mechanism proposed by [88], can greatly improve the performance of various NLP tasks such as question answering and summarization; that study also proposes new applications to genomic data. In the research of [89], which is based on the advanced time series model called the transformer, a mathematical improvement was applied to express the self-attention in the transformer as a linear dot product of kernel feature maps, and the combination of matrix products was used to reduce the complexity. The improved method is up to 4000 times faster in the autoregressive prediction of very long sequences. Previous studies have also shown that adding mathematical processing methods to machine learning models will improve the stability and fitting effect of the models to a certain extent. For example, Peng et al. [79] showed that the CEEMDAN + ConvGRU method can accurately predict the intensity of the South Asian high (SAH) and achieves better stability than traditional machine learning methods. The ensemble empirical mode decomposition (EEMD) combined with CNN + LSTM method proposed by [90] can also predict the El Niño index more accurately and stably.

Research Significance and Contribution

As the significant research mentioned above shows, the development of DL models has led to numerous achievements in atmospheric prediction. Thus, it is feasible to combine an ensemble DL model with advanced mathematical methods to extract the distribution characteristics of the polar vortex for intensity prediction. To the best of our knowledge, there is no research on using DL methods to extract polar vortex image features and predict the polar vortex intensity index on an intraseasonal scale. The prediction accuracy of the polar vortex intensity index is determined by the anomalous distribution of geopotential height. Therefore, it is feasible to use the CNN and LSTM methods of DL, combined with the latest signal processing and data smoothing methods, to predict the variation of the polar vortex intensity index.
Because the traditional two-dimensional CNN cannot handle prediction over multiple time steps, and in order to further capture the spatial characteristics of the polar vortex distribution, a three-dimensional convolution neural network (3DCNN) is used to extract the characteristics of polar vortex images after two-dimensional Gaussian smoothing. After the convolution process, the time series characteristics of polar vortex intensity must be captured, and the multi-day predictions of the polar vortex intensity index are obtained by training an LSTM network with one-dimensional Gaussian-smoothed time series data as input. Furthermore, traditional and advanced DL models are also used for comparison. Accurate prediction of polar vortex intensity has important scientific significance for meteorological research. Firstly, it can provide a reference for the loss of sea ice and the occurrence of extreme weather phenomena in the Arctic region. After the polar vortex intensity predicted by the DL model is added to the model system as a predictor, numerical prediction models can more accurately predict the weather conditions in the areas affected by the vortex. On the intraseasonal scale, more accurate prediction of the multi-day polar vortex intensity index will provide a reference for the establishment and optimization of prediction models of nonlinear systems in the atmosphere. In addition, the high robustness and accuracy of machine learning (ML) and DL methods provide a basis for polar vortex prediction in geoscience research [43]. At present, there are many uncertainties in research that uses the variation of polar vortex intensity to study chemical and physical processes in the atmospheric environment; the function of weather forecasting is to better forecast weather phenomena after the correlations between various systems have been identified. Therefore, this study can provide a more accurate basis for studying the interaction between the polar vortex and various systems in the atmosphere, and improved air-sea and land-air interaction factors can be added to the model, so as to provide a reference for in-depth study of the impact of the atmospheric vortex system on weather prediction. Prediction of the positive and negative phases of AO and ENSO events can also be further improved by intraseasonal prediction of polar vortex intensity [8,12]. However, existing research and datasets have not elaborated on the prediction of many vortex systems in the atmosphere. To address this problem properly, this paper further explores whether DL models can better solve the prediction problems of nonlinear systems in the atmosphere, such as the variation of polar vortex intensity. How to find an appropriate model for comparative analysis is also one of the key research problems of this paper: according to the spatiotemporal sequence characteristics of polar vortex intensity, it must be decided whether a new model structure or an improved method needs to be proposed by comparing traditional and advanced DL image prediction and time series prediction models. Since the formation and development of weather systems are affected by many meteorological elements, which can be regarded as predictors in DL models, how to add appropriate variables and how to handle the relationship between images and time series are the key and difficult points in the prediction process.
Can the temperature field, potential vorticity field, and flow field be added to the time sequence information to obtain higher prediction accuracy?

Organization of the Paper

In this study, we propose a new DL model for predicting time series using image information. It mainly focuses on the polar vortex feature extraction process applied to the constructed database of polar vortex distribution images, together with the smoothing of the input time series and images. The polar vortex intensity time series and the image database are constructed from the geopotential height (GPH) data provided by the National Centers for Environmental Prediction and the National Center for Atmospheric Research (NCEP/NCAR). In order to predict the temporal characteristics of the polar vortex more accurately, we remove noise in the data with Gaussian smoothing techniques. The reconstructed data are input into the 3DCNN-LSTM model to obtain the results. The predicted multi-day polar vortex intensity index is compared with the real polar vortex index to obtain the correlation, and it is compared with traditional neural network models to test the stability of the method and evaluate its ability to predict nonlinear systems in the atmosphere. The main structure of this paper is as follows: firstly, the construction of the polar vortex image and intensity datasets, the Gaussian smoothing method, the traditional neural network methods, and the proposed innovative convolution neural network with long short-term memory based on Gaussian smoothing (GSCNN-LSTM) are introduced in the second part; then, the prediction accuracy of the various network models is investigated and the stability of the model is evaluated in the third part; the final part summarizes the conclusions of the article and looks forward to future research directions.

Dataset Construction

Because the polar vortex is one of the most powerful vortex and weather systems in the NH, its intensity variation can be explained by the fields of many meteorological elements. In this study, the most standard and widely used definition of polar vortex intensity is adopted; that is, the dynamic field of the polar vortex is expressed by the anomalous variation of the geopotential height (GPH) field. Previous studies have shown that the variation of the GPH field has a strong correlation with the dynamic structure of various vortex systems and the distribution of meteorological element fields, and the modes and structures of medium-, small-, and synoptic-scale vortices can be characterized and studied with GPH, e.g., [12,91,92]. Therefore, in order to construct a database with a real and sufficient intensity record, we selected the GPH data from the National Centers for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) for December and January to March (DJFM) of 1948-2020 and took the daily average data as the dataset for calculating the polar vortex intensity index. Since the polar vortex is strongest at 50-10 hPa, we use the GPH field at the height of the strongest polar vortex in winter, that is, the 50 hPa isobaric surface, as the horizontal field from which the intensity index is calculated.
The polar vortex intensity is defined by the anomalous variation of GPH combined with the variation of latitude, as follows: the anomaly of the weighted polar-average GPH is used to represent the intensity index of the polar vortex, where Z and Za in Equations (1) and (2) represent the daily averaged GPH and the GPH averaged over all selected days (8852 in total), respectively, Z' in Equations (2) and (3) represents the anomaly of GPH after removing the effect of the annual cycle, and ϕ indicates latitude. In the process of calculating the intensity index, the anomaly Z' of the geopotential height is calculated first, and the polar vortex intensity index −Zp is opposite in sign to the GPH anomaly in the polar region, so a positive polar vortex intensity index corresponds to a strong polar vortex, and a negative intensity index indicates a weak polar vortex. The database of polar vortex images and the intensity index series are constructed as follows. First, because the selected months (DJFM) of the polar vortex in each year are not all continuous, unlike traditional vortex variations, this study splices the December of the previous year with January to March of the next year to form the variation of the polar vortex for that year. Secondly, because February differs between leap years and normal years, the number of days selected in a normal year is 121 and that in a leap year is 122. Based on the existing NCEP/NCAR database, we first constructed 122 characteristic maps of the polar geopotential height anomaly distribution for each of 19 years plus 121 characteristic maps for each of 54 years (122 × 19 + 121 × 54), a total of 8852; then the daily polar vortex intensity index is calculated through Equation (1), and the time series database of the polar vortex intensity index, with a total length of 8852, is constructed in the same way.

Gaussian Smoothing (GS)

Gaussian smoothing (GS) is also called Gaussian blur. This method is similar to convolution and is widely used in image blurring, image classification, and detection. GS can effectively remove fine detail and noise from an image. In this sense, it is similar to the mean filter, but it uses a kernel that represents the Gaussian hump (bell) shape. The Gaussian kernels used in this study are mainly a two-dimensional Gaussian kernel and a one-dimensional Gaussian kernel, which are used for the smoothing of the polar vortex distribution images and the denoising of the polar vortex intensity index sequence, respectively. σ in Equations (4) and (5) is the standard deviation of the normal distribution, and its value determines the decay rate of the function; x represents the variable value, which refers to the polar vortex intensity index. The matrix of Equation (6) shows a suitable integer-valued convolution kernel, which approximates the Gaussian distribution at σ = 1.0. The Gaussian filter determines weights by spatial distance only and does not use intensity (color) distance to determine the weights. As a result, Gaussian filtering not only removes noise, but also blurs boundaries to a certain extent. Therefore, in order to further reduce the error in the polar vortex distribution during GS, we first smooth the global geopotential height anomaly distribution at 50 hPa and then eliminate the edge effect. Finally, the characteristic map of the polar vortex in the polar region is cropped out and included in the image database.
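Since the exact forms of Equations (1)-(6) did not survive extraction, the sketch below illustrates one plausible reading of the procedure: a cosine-latitude-weighted polar-cap mean of the GPH anomaly (with the sign flipped so that a strong vortex gives a positive index), followed by Gaussian smoothing of the index series and of the anomaly maps. The weighting scheme, array layout, and smoothing widths are assumptions, not the paper's exact implementation.

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def vortex_intensity_index(gph, lats):
    """gph: array (time, lat, lon) of daily 50 hPa GPH over the polar cap (60-90N);
    lats: 1-D array of latitudes in degrees. Returns the daily intensity index -Zp."""
    z_clim = gph.mean(axis=0)                        # mean over all selected days (Za)
    z_anom = gph - z_clim                            # GPH anomaly (Z')
    w = np.cos(np.deg2rad(lats))[None, :, None]      # assumed cos(latitude) area weighting
    zp = (z_anom * w).sum(axis=(1, 2)) / (w.sum() * gph.shape[2])
    return -zp                                       # positive index = strong vortex

# Example usage with a random placeholder field on a 2.5-degree grid over 60-90N:
lats = np.arange(60.0, 92.5, 2.5)                    # 13 latitudes
gph = np.random.default_rng(0).normal(20500, 200, size=(100, lats.size, 144))
index = vortex_intensity_index(gph, lats)
smoothed_index = gaussian_filter1d(index, sigma=1.0)         # 1-D GS of the intensity series
smoothed_maps = gaussian_filter(gph, sigma=(0, 1.0, 1.0))    # 2-D GS of each daily map (time axis untouched)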
In the field of DL, there are plenty of networks for image learning, including image recognition and image classification, among which the most traditional method is the convolutional neural network (CNN). The three-dimensional convolutional neural network (3DCNN) used in this study is composed of convolution layers, average pooling layers, and a fully connected (FC) layer. The CNN uses multiple convolution filters in the convolution process; generally, the channel (depth) dimension of the output shape after convolution equals the number of filters, and the filter count is commonly referred to as the number of convolution kernels. When extracting image features, a two-dimensional convolution neural network (2DCNN) only performs the convolution operation on the image of a single time step. Because this does not take the time dimension into account, and in order to accurately predict the polar vortex intensity over multiple time steps (multiple days) in this study, multi-day lead forecasts must use the polar vortex distribution features of the previous few days, or even tens of days. To meet this requirement, a multi-day-input network takes advantage of 3DCNN: the time dimension is added to the two-dimensional image information as the input of the convolution layer, the three-dimensional data are then convolved, and local features are extracted to obtain the feature matrix. The pooling layer mainly downsamples the feature matrix and plays a secondary role in extracting image features; the pooling operation reduces the dimensionality and helps prevent overfitting. Note that average pooling layers are used for feature extraction in this study. As the number of convolution layers increases, the image features extracted by the 3DCNN become more and more abstract. After the FC layer, the prediction results are obtained from the output layer with the ReLU activation function. Long short-term memory (LSTM) is a model extended from the recurrent neural network (RNN) and can also be called fully connected LSTM (FC-LSTM). The traditional RNN has defects in dealing with long-term memory and is prone to gradient explosion or gradient vanishing. On the basis of the RNN, the LSTM adds the concept of three gates acting on memory cells, namely the input gate, output gate, and forget gate. Through these three gates, information is added to or forgotten from the memory unit, so that the network can maintain clear long-term memory and learn long-term rules. Thus, the problem of gradient explosion or vanishing caused by increasing amounts of RNN data is alleviated. The calculation process of LSTM is mainly as follows (forget gate, input gate, output gate, and cell update): f_t represents the forget gate; i_t represents the input gate; o_t is the output gate; C_t indicates the cell state for the current input, and h_t is the final output. "·" represents the element-wise multiplication operation, also known as the Hadamard product, i.e., the element-by-element multiplication of two matrices with the same dimensions; σ represents the sigmoid activation function; "*" represents the convolution calculation. The sigmoid layer outputs a number between 0 and 1 to describe how much of each information vector should pass: a value of zero means "do not pass any information", while a value of 1 means "let all information pass". The calculation process of the LSTM network is shown in Equations (7)-(12).
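The equations themselves did not survive extraction; for reference, the standard FC-LSTM update that the surrounding description walks through is reproduced below. The exact numbering (7)-(12) in the original could not be recovered, so this is the conventional formulation rather than a verbatim restoration; ∘ denotes the element-wise (Hadamard) product written as "·" in the text.

\begin{aligned}
f_t &= \sigma\!\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) && \text{forget gate}\\
i_t &= \sigma\!\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) && \text{input gate}\\
\tilde{C}_t &= \tanh\!\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) && \text{candidate cell state}\\
C_t &= f_t \circ C_{t-1} + i_t \circ \tilde{C}_t && \text{cell-state update}\\
o_t &= \sigma\!\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) && \text{output gate}\\
h_t &= o_t \circ \tanh\!\left(C_t\right) && \text{hidden state / output}
\end{aligned}

In the ConvLSTM discussed below, these matrix products are replaced by convolutions over the spatial dimensions.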
When the two inputs x_t and h_{t-1} enter the LSTM cell together, the forget gate f_t determines which information in the old state should be forgotten according to the learned weight W_f. In Equation (9), the Hadamard product of f_t and the previous cell state C_{t-1} filters out unimportant information. Next, the input gate calculates the information to be updated and creates a candidate cell state. As shown in Equations (7) and (8), the information to be updated can be obtained by multiplying the two, and the current state can be updated by adding the filtered information to the information to be updated. Finally, as shown in Equations (11) and (12), the output gate o_t obtains the input value through the sigmoid function, the cell state C_t just calculated is passed through the tanh function, and o_t multiplied by this value gives the output of the operation. The classical LSTM structure flattens the data into one dimension for prediction, which handles temporal correlation well, but FC-LSTM can only extract time series information and cannot extract spatial information. Spatial data, especially radar echo data, contain a lot of redundant information, which implies that FC-LSTM cannot process such datasets appropriately. Therefore, the convolutional LSTM (ConvLSTM), with convolution structures in the input-to-state and state-to-state transitions of the LSTM, was proposed by [85]. The structure of ConvLSTM is similar to that of LSTM, except that ConvLSTM introduces the convolution operation; its calculation process is shown in Equations (13)-(17), where "*" represents the convolution calculation, the weight W is a two-dimensional convolution kernel, and the cell state C_t, the hidden state H_t, and the three gates i_t, f_t, and o_t are all three-dimensional tensors. Although ConvLSTM can extract and accurately predict information in spatiotemporal sequences, this study tests and verifies the models on multi-time-step prediction with multi-time-step input; the results show that the new network combining 3DCNN with LSTM gives more accurate predictions than ConvLSTM. Therefore, by adjusting the network structure and the number of 3DCNN layers, combining them with LSTM, and introducing the GS algorithm into the model, more accurate and stable results can be obtained.

Innovative Training Methods

Similar to many advanced DL neural network models, we apply one-dimensional GS and two-dimensional GS to the 3DCNN and LSTM network. By adjusting the method and sequence of network training, a more accurate method for extracting polar vortex image features and predicting the polar vortex intensity index series is constructed. The construction of the convolution neural network with long short-term memory based on Gaussian smoothing (GSCNN-LSTM) model is shown in Figure 1. Firstly, the polar vortex intensity index sequence is input into the model as a training set, which has a three-dimensional shape (training samples × 20 × 1); since our predicted time step is 20, we need to output the polar vortex intensity index series for 20 days. The polar vortex images are input directly into the 3DCNN after two-dimensional GS, and the distribution characteristics of the polar vortex geopotential height anomaly are extracted by the convolution neural network. Finally, the LSTM is used to predict the multi-step time series. Figure 1.
Architecture of the novel three-dimensional convolutional neural network combined with long short-term memory network based on the Gaussian kernel smoothing method (GSCNN-LSTM) used for polar vortex intensity prediction. The leftmost two coordinate images represent the one-dimensional Gaussian kernel smoothing (GS) and two-dimensional GS processes, respectively. The middle part represents the input image matrix and the three-dimensional convolutional neural network (3DCNN), and the rightmost part represents the structure of the long short-term memory (LSTM) network. In order to further explain the details and training process of the neural network, we show the settings of the various parameters in the novel model, the convolution process applied to the polar vortex images, and the prediction process of the polar vortex intensity index during training. As shown in Figure 2, the specific prediction process of the GSCNN-LSTM model is as follows. Firstly, the polar vortex distribution images after GS are taken as the input dataset of the model, and the polar vortex images of the first ten days are selected, that is, the polar vortex images from day t to day t-9. Since the polar latitude range selected in this research is 60-90° N and the longitude range is 0-360° E, the image is expressed as a matrix of 13 × 144 according to the resolution of the NCEP reanalysis data. When dividing the training and test data, we use the data from 1948 to 1997 as the training set, the data from 1998-2007 as the validation set, and the data of the last 13 years, from 2008-2020, as the test set. Since the input time step is 10 and the predicted time step is 20, the number of samples per year is reduced by 30, and the number of samples in the processed training set is 92 × 15 + 91 × 45 = 5475. The input layer shape is expressed as ten consecutive days of 5475 × 10 × 144 × 13 × 20. Then, we put each image through the 3DCNN1 convolution kernels (set to 40 in this study), which represent the filters in 3DCNN1. After the operation of the convolution kernels, all feature maps need to be merged. However, in order to show the structure and calculation process of the neural network more clearly, we display the processing of each time step separately. The shape size is 5475 × 10 × 144 × 13 × 20 maps. Then, the convolution operation is carried out by the convolution layer. In this study, the size of the convolution kernel is set to 3 × 3 × 3; after the first convolution layer, further extraction of the features of the output 3D structure and a reduction of the amount of calculation are necessary, so average pooling is carried out, yielding data with a shape of (None, 4, 71, 5, 40). Here, None represents the number of samples: because samples are fed into the model in batches during training, the sample dimension is left unspecified, and the batch size is set to 40 in this study. Note that all trainable parameters in the 3DCNN model are initialized randomly and then trained by the online back propagation (BP) algorithm. After 3DCNN1 outputs the feature maps, the same convolution operation is applied and the output is passed to the 3DCNN2 layer. The obtained feature maps have shape (None, 20, 680) after the repeat vector layer. Then, in order to extract the temporal features, the three-dimensional data are further input into the LSTM layer.
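To make the quoted intermediate shape concrete, the minimal Keras check below reproduces the (None, 4, 71, 5, 40) tensor from a (10, 144, 13, 1) input; the single input channel and the (2, 2, 2) average-pooling window are assumptions, since the text does not state them explicitly.

from tensorflow.keras import layers, models

m = models.Sequential([
    layers.Conv3D(40, (3, 3, 3), input_shape=(10, 144, 13, 1)),  # 3DCNN1: 40 filters, 3 x 3 x 3 kernel
    layers.AveragePooling3D((2, 2, 2)),                           # average pooling
])
print(m.output_shape)  # -> (None, 4, 71, 5, 40)

With "valid" convolution each spatial-temporal dimension shrinks by two and is then halved by the pooling, which is consistent with the shape reported above.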
Here, the number of neurons in the LSTM layer is set to 80, which also represents the dimension of the hidden state. Ultimately, the intensity index sequence of the training output is obtained through an FC layer, whose number of neurons is set to 40. The twenty-day prediction time series of the polar vortex intensity index means that the shape of the output layer is (5475, 20, 1). In the testing program, the shape of the output layer used for prediction is (1187, 20, 1). Figure 2. Prediction process of polar vortex intensity based on the GSCNN-LSTM model. The GSCNN-LSTM model is composed of a one-dimensional/two-dimensional GS preprocessing step that removes the noise of the intensity time series and image data, two 3DCNN layers, two average pooling (AP) layers, a repeat vector (RV) layer, an LSTM layer, and a full connection (FC) layer. The input variable is the daily geopotential height (GPH) average image data (in units of gpm) from day t to day t-9 (ten days in total), and the input GPH image range is 0-360° E and 60-90° N. The matrix dimensions 13 × 144 represent the length and width of the input image, respectively. The daily average polar vortex intensity from t + 1 to t + 20 (twenty days in total) is used as the variable for the output layer, and the blue three-dimensional structure in 3DCNN1 highlights the process of convolution. M denotes the number of feature maps, T represents the number of neurons in the RV layer, and N represents the number of neurons in the FC layer. Note that N was set to 20 or 40 in this study. The detailed neural network hyperparameters of the DL models in the training process are set as follows: the optimizer used in each neural network model in this experiment is Adam, the learning rate is 0.01 by default, the batch size is 40, and the number of epochs is 20. An early-stopping strategy is applied during model training by monitoring the loss on the validation set: as the training epochs increase, if the validation loss does not improve within three epochs, training is stopped. Thus, the "patience" hyperparameter for early stopping is set to three; as a result, training runs often stopped in fewer than 20 epochs during the repeated experiments. In the last FC layer, L2 regularization and the dropout method are used to prevent overfitting. It should be noted that the validation set is segmented from the training set in this study; the validation period is the ten years from 1998 to 2007. Model Comparison In order to further distinguish and analyze the results of each model, we compare the differences between the DL models. The structure of the ConvLSTM model is quite different from that of the 3DCNN + LSTM model. ConvLSTM is a variant of LSTM. It can be seen from the structure of the LSTM in Equations (7)-(12), which involve the variables x_t and h_{t-1}, that the FC layer inside the LSTM is used directly for the input and output transformations. ConvLSTM replaces the FC layer in the LSTM with a convolution calculation, which means that the matrix multiplication is replaced by convolution. In this way, ConvLSTM models capture the basic spatial features of multidimensional data through convolution. The main difference between ConvLSTM and LSTM is the input dimension: because LSTM input data are one-dimensional, LSTM is not suitable for spatial sequence data, such as video, satellite, and radar image datasets, whereas ConvLSTM is designed to take 3D data as its input.
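Extending the shape check above, the following sketch assembles the full 3DCNN + repeat vector + LSTM pipeline in Keras with the settings quoted in the text (3 × 3 × 3 kernels, 40 filters, an LSTM with 80 units, an FC layer with 40 neurons, Adam with a learning rate of 0.01, batch size 40, up to 20 epochs, and early stopping with patience 3). The activation functions, dropout rate, L2 coefficient, and single input channel are assumptions, so this should be read as a plausible reconstruction rather than the authors' exact implementation.

from tensorflow.keras import layers, models, regularizers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

def build_gscnn_lstm(in_days=10, out_days=20, n_lon=144, n_lat=13):
    inp = layers.Input(shape=(in_days, n_lon, n_lat, 1))
    x = layers.Conv3D(40, (3, 3, 3), activation="relu")(inp)     # 3DCNN1
    x = layers.AveragePooling3D((2, 2, 2))(x)                    # AP1
    x = layers.Conv3D(40, (3, 3, 3), activation="relu")(x)       # 3DCNN2
    x = layers.AveragePooling3D((2, 2, 2))(x)                    # AP2
    x = layers.Flatten()(x)
    x = layers.RepeatVector(out_days)(x)                         # RV layer -> (out_days, features)
    x = layers.LSTM(80, return_sequences=True)(x)                # temporal features
    x = layers.TimeDistributed(
        layers.Dense(40, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)))(x)  # FC layer with L2
    x = layers.Dropout(0.2)(x)                                   # dropout against overfitting
    out = layers.TimeDistributed(layers.Dense(1))(x)             # (out_days, 1) intensity series
    return models.Model(inp, out)

model = build_gscnn_lstm()
model.compile(optimizer=Adam(learning_rate=0.01), loss="mae")
stopper = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=40, epochs=20, callbacks=[stopper])

The commented fit call indicates how the stated batch size, epoch limit, and early-stopping patience would be wired in; x_train, y_train, x_val, and y_val are placeholders for the windows described in the text.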
When 3DCNN + LSTM deals with three-dimensional datasets, it first uses the 3DCNN part of the model to extract the spatial characteristics of the input data (polar vortex images), and then feeds the one-dimensional results from the 3DCNN into the LSTM model for intensity prediction. The main difference between 3DCNN + LSTM and ConvLSTM is that the former performs the convolution calculation only on the input variable (x_t), whereas the latter also applies convolution to the hidden state h_t. The model structure of GSCNN-LSTM is similar to that of 3DCNN + LSTM, except that the data are preprocessed for denoising before being input into the model. In this process, we adapt the model to a multi-day input and multi-day output structure for multi-day polar vortex intensity prediction. It should be noted that GS processing can denoise the images efficiently; even though the convolution layers in the 3DCNN can preprocess the input images, the time series still need one-dimensional GS preprocessing before the convolution results of the 3DCNN are input into the LSTM model. To keep the method simple, save model training time, and improve training efficiency, it is feasible to preprocess the polar vortex time series and images before they enter the model. GS preprocessing steps are also added to the CNN and ConvLSTM models to provide a fair comparison with the proposed GSCNN-LSTM model. Evaluation of Multi-Models This study adopts two evaluation indexes widely used in the evaluation of DL models: the Pearson correlation coefficient and the mean absolute error (MAE). The Pearson correlation coefficient, also known as the Pearson product-moment correlation coefficient, is a linear correlation coefficient and the most commonly used correlation measure. Ham et al. [30] and others use the Pearson correlation coefficient to express the correlation between the predicted El Niño 3.4 index and the real index, so as to further verify the effect of model prediction. It reflects the degree of linear correlation between two variables, X and Y, and its value lies in [−1, 1]; the greater the absolute value, the stronger the correlation. The Pearson correlation coefficient is expressed by Equation (18):

r = cov(X, Y) / (σ_X σ_Y) = E[(X − µ_X)(Y − µ_Y)] / (σ_X σ_Y), (18)

where cov(X, Y) is the covariance of the two variables, the denominator is the product of the standard deviations of the two variables, µ_X represents the average of X, and µ_Y represents the average of Y. MAE refers to the average of the absolute value of the error between the predicted value and the real value, and is calculated as

MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|,

where y_i is the observed value, ŷ_i is the predicted value, and n is the number of samples. The smaller the error between the predicted value of the model and the real value, and hence the smaller the MAE value, the better the prediction of the model. In this study, it means that a more accurate polar vortex intensity index sequence is predicted by the GSCNN-LSTM model. We also apply the two evaluation indexes to the 3DCNN, ConvLSTM, 3DCNN-LSTM, and GSCNN-LSTM models to evaluate and compare the error of the prediction results of each model. In the training process, we also repeated the training, adjusting the parameters of each model, and recorded the Pearson correlation coefficient and MAE to evaluate the stability of each model. Segmentation of Datasets The distribution of the polar vortex in the stratosphere presents the shape of a minimum region of GPH.
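The two skill scores can be computed directly from Equation (18) and the MAE definition; the short NumPy functions below are a straightforward sketch, with the small example arrays serving only as placeholders.

import numpy as np

def pearson_r(y_true, y_pred):
    # Pearson correlation coefficient, Equation (18).
    dx = y_true - y_true.mean()
    dy = y_pred - y_pred.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx ** 2) * np.sum(dy ** 2))

def mae(y_true, y_pred):
    # Mean absolute error between predicted and observed intensity.
    return np.mean(np.abs(y_pred - y_true))

obs = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.8])
print(pearson_r(obs, pred), mae(obs, pred))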
As shown in Figure 3, the average GPH distribution of the polar vortex on 1 February 1948 shows an extreme region at lower latitudes. Due to the influence of the dynamic field, surface temperature, and various weather systems in the atmosphere, the central position of the polar vortex migrates continuously; at the same time, its intensity also changes periodically [1,19,24]. It can be seen from Equation (1) that the calculation of polar vortex intensity in this study uses the daily anomaly of the averaged GPH relative to the interannual mean, which is closely related to the latitude of the grid point. The anomaly distribution of the polar vortex in Figure 3 shows that the polar vortex on that day is stronger than the annual average polar vortex; at the center of the polar vortex, the GPH anomaly is large. Because there are great differences in the intensity of the polar vortex between different years, and the intensity of the polar vortex is obtained from the anomaly distribution of GPH, we first extract the characteristics of the GPH anomaly distribution. Because the sequence of the polar vortex intensity index needs to be processed and divided, the one-dimensional Gaussian smoothing (GS) method is utilized for denoising the index time series in this study. According to the calculation method of the intensity index in Section 2.1, we constructed the standardized polar vortex intensity time series database by processing the DJFM data from 1948 to 2020 with the anomaly distribution of polar vortex GPH on the 100 hPa isobaric surface. As shown in Figure 4, the red and blue time series lines represent the variation of the original intensity index and of the intensity index with Gaussian kernel smoothing, respectively. It can be seen that there is an obvious seasonal variation in the intensity of the polar vortex, which shows a trend of increasing first and then decreasing. However, from the perspective of interannual variation, there is no obvious interannual trend in polar vortex intensity, which also eliminates the influence of interannual variation in the process of training. We calculated that the standard deviations of the original data and the smoothed data were 265.92 and 254.70, respectively (not shown), demonstrating that the dispersion of the smoothed data is reduced to a certain extent and the effect of denoising is achieved. After denoising the time series used as the prediction variable of the DL models, we also need to carry out two-dimensional GS to smooth the image data of the polar vortex anomaly GPH. Figure 5 shows the distribution of some original polar vortex GPH anomalies and the corresponding two-dimensional Gaussian-smoothed polar vortex GPH anomalies for the same dates. After constructing all polar vortex anomaly distribution maps, we generate an image database, which is divided together with the corresponding intensity time series as the input of the new model. As shown in Table 1, the image data and intensity time series data are divided into two parts: a training set and a test set. Since the selection of months each year is not continuous, 121 days of data are selected in normal years and 122 days in leap years. The training period is 60 years, from 1948 to 2007, while the test set covers 13 years, from 2008 to 2020. Since this study is based on multi-day to multi-day prediction, by subtracting the lengths of the 10-day input series and the 20-day prediction output series every year, the total number of training samples is 5475 and the total number of test samples is 1187.
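As an illustration of the two denoising steps, the fragment below applies one-dimensional GS to an intensity series and two-dimensional GS to a single GPH-anomaly map using SciPy; the kernel widths (sigma) and the random placeholder data are assumptions, since the smoothing parameters are not quoted here.

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

rng = np.random.default_rng(0)

# One-dimensional GS of the polar vortex intensity index series.
index_series = rng.normal(size=5475)            # placeholder for the DJFM index series
smoothed_series = gaussian_filter1d(index_series, sigma=2.0)

# Two-dimensional GS of a daily GPH-anomaly field on the 13 x 144 polar grid.
gph_anomaly = rng.normal(size=(13, 144))        # placeholder for one daily map
smoothed_map = gaussian_filter(gph_anomaly, sigma=1.0)

The reduced dispersion reported above (a standard deviation of 254.70 after smoothing versus 265.92 before) is exactly the kind of effect such a filter produces.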
The maximum, minimum, and average values of the two time series samples are shown in Table 2. Relative to the overall magnitude of the intensity index, the average intensities of the two types of samples are −1.47 and 6.97, respectively; both are close to zero, which is reasonable for training each model. Table 2. Statistical parameters of the training and testing datasets of polar vortex intensity, including the period and number of each divided dataset (note that, because the selected number of input time steps is ten and the prediction lead time is twenty days, the daily number per DJFM season is reduced to 122/121 minus 30, which equals 92/91). The maximum, minimum, and mean intensity values of each dataset are shown. In order to speed up the convergence of the DL algorithm, the data are normalized before training to accelerate gradient descent. For the time series data of polar vortex intensity, we scale each value into [0, 1] using

x_norm = (x − x_min) / (x_max − x_min),

where x_min and x_max are the minimum and maximum of the series. For the image data of the polar vortex anomaly distribution, the same method is applied; furthermore, the authenticity of the normalized image data is preserved to the greatest extent. Model Comparison After dividing the training set and test set and smoothing the original data, we need to reshape the reconstructed polar vortex intensity series and image database before inputting the images and time series data into the model. The input is divided into an input window of time steps and a prediction window. Therefore, according to the model training method in Section 2.4, we predict the multi-day polar vortex intensity index from the polar vortex GPH anomaly distribution of the first ten days. From the results of several model tests, the forecast lead time can be up to twenty days. Note that, in order to increase the total number of training samples as much as possible when creating the sequence data and image data, we set the sliding window to one step. Thus, even within the 122/121-day sequence of each year, 92/91 samples can be generated. To verify the accuracy and stability of the prediction results of the new model, this study compares the correlation results of traditional DL models with those of GSCNN-LSTM. In this study, the 3DCNN model is used for training first. After determining the size of the convolution kernel and the average pooling layer parameters, we adjust each parameter, such as the learning rate and number of epochs, to the value most suitable for polar vortex intensity prediction, and finally obtain the best CNN prediction results. Similarly, for the more advanced DL model ConvLSTM, we adopt the same parameter adjustment method. Finally, the GSCNN-LSTM proposed by this research is used for training and prediction. The results show that the GSCNN-LSTM model has better prediction results than the traditional DL spatiotemporal training models and some improved models. Figure 6 shows the correlation and mean absolute error (MAE) results of the 20-day polar vortex forecasts obtained from the 3DCNN, GSConvLSTM, ConvLSTM, 3DCNN + LSTM, GSCNN, and GSCNN-LSTM models. The results show that the prediction skill decreases with increasing prediction time in all training models.
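The sample construction described above (min-max scaling to [0, 1], a ten-day input window, a twenty-day output window, and a one-step sliding window) can be sketched as follows; the helper names and the random placeholder season are illustrative only, and the loop is written to reproduce the per-season sample counts quoted in the text (122 − 30 = 92, 121 − 30 = 91).

import numpy as np

def minmax_scale(x):
    # Scale values into [0, 1], as in the normalization step above.
    return (x - x.min()) / (x.max() - x.min())

def make_windows(series, n_in=10, n_out=20):
    # Sliding window of one step: a 122-day (121-day) season gives 92 (91) samples.
    X, y = [], []
    for start in range(len(series) - n_in - n_out):
        X.append(series[start:start + n_in])
        y.append(series[start + n_in:start + n_in + n_out])
    return np.array(X), np.array(y)

season = minmax_scale(np.random.default_rng(0).normal(size=122))  # one DJFM season
X, y = make_windows(season)   # X.shape == (92, 10), y.shape == (92, 20)

The same windowing is applied to the image sequences, with each sample pairing ten smoothed GPH-anomaly maps with the following twenty days of the intensity index.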
Although the correlation gradually decreases with the prediction duration, the predicted intensity time series obtained by the GSCNN-LSTM model proposed in this study is more consistent with the original data and maintains a higher level of correlation at every forecast lead day. Note that all correlation coefficients in this paper are Pearson correlation coefficients, calculated by Equation (18). The predicted polar vortex intensity at a one-day lead is highly correlated with the original sequence, reaching up to 0.92. As the number of forecast lead days increases, only the GSCNN-LSTM model reaches a correlation coefficient close to 0.5 when the polar vortex intensity forecast lead time is extended to 20 days. In order to design a fair comparison among the different DL models, all traditional and advanced DL models used a denoised version of the input image data and predicted a denoised polar vortex intensity. Thus, GS preprocessing steps were also added to the CNN model and the ConvLSTM model, which nevertheless showed lower accuracy than the correlation skill of the GSCNN-LSTM model. The MAE series plot leads to the same conclusion: the prediction error of the GSCNN-LSTM model remains lower than that of the other DL models as the forecast lead time increases. Ablation Experiment In order to demonstrate that the GSCNN-LSTM model proposed in this paper provides preferable and reliable results for the prediction of polar vortex intensity, ablation experiments are carried out. This study adopts ablation experiments to show the importance and necessity of each component (GS preprocessing, 3D-CNN, and LSTM) of the proposed model, and a comparative test is designed by adding a simple five-point smoothing (FS) during the preprocessing phase of the 3DCNN + LSTM model to highlight the importance of the GS preprocessing in the GSCNN-LSTM model. After the simple FS preprocessing is added to the 3DCNN and 3DCNN + LSTM models, the results are first compared with the experimental results obtained with the one-dimensional and two-dimensional GS denoising process. Furthermore, in order to highlight the importance of the LSTM for long time series prediction in the process of polar vortex intensity index prediction, FS combined with the CNN model (FS_CNN), FS combined with CNN + LSTM (FS_CNN + LSTM), and the GSCNN model are also included in the comparative test. The results of the ablation experiment are shown in Figure 7. The results show that each component of the GSCNN-LSTM model is indispensable. Integrating GS into the DL methods yields more accurate predictions than simple smoothing (such as FS), and by comparing the prediction results of the FS_CNN + LSTM and FS_CNN models, we can conclude that the LSTM performs well for polar vortex intensity prediction and is an important component that allows the GSCNN-LSTM model to accurately capture the characteristics of the intensity index time series. The same conclusion can be reached from the prediction results of the CNN, GSCNN, and GSCNN-LSTM models. After completing the ablation experiment, we also added a simple baseline prediction model to compare with the other DL models. The average value of the last twenty days' polar vortex intensity is taken as the predicted intensity. In the different DL models, we choose the time series of the last twenty days as the precondition for prediction.
Therefore, it is necessary for the simple experiment to use a twenty-day precondition for a fair comparison with the DL models. As shown in Figure 8, the prediction result of the simple model (Simple_pre) is poorer than that of the DL models: its Pearson correlation coefficient is relatively low and its MAE relatively high. Therefore, DL models, and the proposed GSCNN-LSTM model in particular, are feasible tools for predicting the polar vortex intensity. Model Validation After evaluating the accuracy of model training, we need to evaluate the stability of the GSCNN-LSTM model. In the process of training the model, it is often necessary to find a suitable neural network to extract spatial features more effectively. Moreover, the more training parameters the model possesses, the greater the amount of calculation, while the accuracy may not be effectively improved. Table 3 shows the results of each model in the training process, which provides a basis for selecting the optimal model to predict the polar vortex intensity index. Due to the limited number of time steps and the increasing number of network layers, we choose (3 × 3 × 3) as the convolution kernel, which is used as the convolution layer parameter of all models with a 3D structure. The results show that applying fewer LSTM hidden units reduces the number of training parameters and shortens the training time, and the best prediction effect is achieved by the proposed new network GSCNN-LSTM. Even under different parameter settings, the GSCNN-LSTM model is better than the other models in long-lead prediction. The correlations at one day, five days, and twenty days in advance can reach more than 0.9, 0.85, and 0.48 respectively, which are higher than those of some ensemble models such as 3DCNN + LSTM and ConvLSTM. Table 3. Comparison of the 3DCNN, ConvLSTM, 3DCNN + LSTM, and GSCNN-LSTM models on the polar vortex intensity dataset. "−3 × 3 × 3" and "−5 × 5" correspond to "3 × 3 × 3" and "5 × 5", which represent the kernel sizes of the corresponding neural network layers. "L120", "L100", and "L80" refer to the number of hidden states in the LSTM layers, and "40" and "20" refer to the number of output filters in the convolution. From the optimal model, the highest correlation for the predicted one-day intensity index, 0.92, is obtained, and the correlation is then analyzed using a scatter-histogram plot. As shown in Figure 9, the position of the scatter points represents the degree of concentration between the predicted intensity index and the true value. The histogram counts the number of scatter points with a bin width of three. The results show that, compared with the other three models, the intensity index predicted by the GSCNN-LSTM model is closer to the real intensity value, is distributed evenly across the interval, and is more concentrated around the real intensity. All four models show unsatisfactory prediction results in the low-value intensity range, but in the other intensity ranges the predicted intensity of the GSCNN-LSTM model is well separated and relatively uniformly distributed, which means the model can better extract the characteristics of the intensity time series and polar vortex images. After evaluating the predicted intensity and its concentration around the real intensity, the model needs to be repeatedly trained to verify whether the optimal model has strong stability.
Figure 10 shows box plots of the correlation and MAE of the predicted polar vortex intensity generated by the GSCNN-LSTM, GSCNN, 3DCNN + LSTM, GSConvLSTM, ConvLSTM, and 3DCNN models after repeated training. The upper edge, lower edge, median, 25th percentile, and 75th percentile of each group of MAE data are shown in Figure 10. The results show that, when the training dataset and the validation dataset are changed, the uncertainty of the prediction skill of the GSCNN-LSTM model is minimal, indicating that the GSCNN-LSTM model can provide reliable real-time predictions and has high stability. The prediction results of the ConvLSTM, GSConvLSTM, and 3DCNN + LSTM models are relatively close, but the training efficiency of ConvLSTM and GSConvLSTM is lower than that of GSCNN and GSCNN-LSTM, their training time is long, and overfitting often occurs due to the large number of training parameters of the ConvLSTM and GSConvLSTM models. Comparison of the training results of GSCNN and GSCNN-LSTM shows that the LSTM consistently has a positive effect on the prediction of long intensity time series; therefore, the improved ensemble approach is also applied to 3DCNN + LSTM. Discussion In this study, a new DL model algorithm for time series prediction is applied to the multi-day prediction of the polar vortex. In long-term polar vortex sequence prediction, the GPH anomaly distribution features of the polar vortex can be better extracted by the GS algorithm and the training of the improved 3DCNN-LSTM model, which provide a more accurate and stable result for multi-day prediction while reducing the image information and temporal complexity. Compared with the traditional CNN model and its variant ensemble models, this model shows excellent performance and emphasizes the combination of training data processing and model design in spatiotemporal information extraction. (1) The strengthening/weakening of the polar vortex can also be attributed to processes of entropy increase/decrease. In this framework, more attention is paid to the variation of the physical state of the polar vortex; therefore, advanced insight is provided for capturing and predicting the intensity of the polar vortex and weak/strong events. The strong ability of the neural network to learn characteristic laws can be used to further predict the polar vortex intensity variation during changes in polar vortex morphology and position, which can further explain the laws of its physical development and provide atmospheric models with practical prediction value. The stability and accuracy of the GSCNN-LSTM model further show the application prospects of the ensemble model and provide a reference for the prediction of atmospheric eddy systems with entropy increase/decrease. (2) Based on the nonlinear physical characteristics of atmospheric vortex systems and weather phenomena, the GSCNN-LSTM model, combined with the traditional mathematical method, can effectively remove the less influential factors among the physical features, then use the multi-time-step 3DCNN network to capture the anomalous distribution features of GPH and extract long-term impact factors with the LSTM. As a result, the prediction of atmospheric eddy systems has been significantly improved.
This is also widely demonstrated in TC intensity, eddy identification, cloud detection, and synoptic-scale eddy studies, e.g., [58,64,80,93]; for example, a multiscale feature fusion method can achieve about 98% accuracy in ocean eddy detection [93], and the improved CloudLSTM recurrent neural network can provide accurate long-term predictions of air quality indicators [80]. The new DL method proposed here also has a significant effect on capturing the characteristics of nonlinear systems in the atmosphere. (3) Arctic sea ice loss is closely related to global atmospheric circulation and climate warming. Studies have shown that the polar vortex has a strong negative phase response to the loss of sea ice, which then influences the mid-latitude surface temperature through large-scale circulation such as the AO [16]. These responses often lead to extreme weather events. The multi-day prediction results of the polar vortex intensity index provide a theoretical basis for long-term climate change trends and numerical weather forecast results, and the correlation between the two will also provide a reference for the accurate quantification of global temperature change and extreme precipitation events. (4) It is rare to apply signal or image processing methods from mathematics to a model for time series prediction. In this study, prediction accuracy is improved by incorporating a GS method into the network model, which not only removes the noise in the sequence but also reduces the redundant information in the images. Compared with traditional signal denoising methods, the Gaussian denoising method can process multidimensional data and can be widely used in the fields of artificial intelligence and atmospheric science, as well as other scientific fields. Therefore, the GSCNN-LSTM model provides a good reference for improving vortex index prediction methods in combination with DL methods. (5) From the perspective of the prediction results, we can extend the forecast lead time of polar vortex intensity to 20 days while ensuring prediction accuracy. Strong and weak polar vortex events are usually defined as an extremely weak or very strong polar vortex persisting for 20 consecutive days [1,23], and polar vortex events can reflect the variability of the polar vortex and have a periodic impact on chemical species, e.g., [13,23]. Therefore, in future research, we can consider adding the prediction of polar vortex intensity events, so as to ensure the accurate prediction of the intensity index, further improve the accuracy of predicted events, and simulate the important impact of polar vortex intensity variability in historical periods. This provides a feasible scheme for the study of atmospheric circulation in the NH. (6) However, the paper also has some limitations, such as the lack of consideration of multiple predictors. It is not feasible to add the El Niño index and AO index into the model directly because these characteristics correspond to different latitudes; however, they also exert a significant influence on the variation of polar vortex intensity and position morphology. Therefore, models with multiple predictors should be further discussed in future research. For example, images of sea surface temperature (SST) anomalies, sea ice coverage, potential vorticity (PV), and the wind field in the Arctic region could be input into the model, and the most relevant variables selected by advanced feature extraction methods.
In this study, the prediction skill is relatively poor when the polar vortex intensity index is negative. These additional variables may improve the prediction of the negative intensity index to some extent. (7) Furthermore, the GSCNN-LSTM model needs to be improved. Optimization based on the GSCNN-LSTM model should learn from the idea of ensemble models, which can effectively improve the prediction skill and shorten the training time while adding more variables, namely prediction factors, so as to respond to weather system changes in a timely manner. Some recent research and improved methods can be used as references for future research on vortices, such as the benchmark model for short-term precipitation forecasting proposed by [94]; the prediction of polar vortex intensity could be improved by using the method of this benchmark model. The combination of the self-attention mechanism and the ConvLSTM model adopted in spatiotemporal prediction achieves state-of-the-art results [95]. Using the most advanced attention mechanisms to predict a certain day or a certain strong/weak event of the polar vortex in a long time series may achieve better results. This kind of model fusion may be widely used in future research and can continuously create new achievements, providing strong support for numerical weather prediction. (8) Ultimately, the polar vortex is an important path for atmospheric dynamic transmission and substance exchange in the chaotic system of the atmosphere. This paper provides a significant theoretical basis for nonlinear dynamics and entropy increase theory in the atmosphere by revealing the intensity variation of the polar vortex. According to the results of the GSCNN-LSTM model, the intensity information and variation characteristics of the polar vortex were effectively extracted and predicted, which demonstrates that the energy information of many atmospheric vortex systems can be predicted by DL models. For example, Liu et al. [96] and others used a CNN + LSTM method to extract the temporal and spatial features of partial discharge input signals, which improved the accuracy of partial discharge signal pattern recognition. Accurate numerical prediction is inseparable from research on information entropy theory in the atmosphere; combined with DL research methods, many prediction problems in atmospheric science can be further solved. Conclusions In this study, a new DL model is proposed to predict the multi-day variation of polar vortex intensity in a more accurate and stable way. Firstly, the datasets of polar vortex intensity and image distribution are constructed using the long-term NCEP/NCAR historical reanalysis data. Then, the images and intensity series are divided and input into a three-dimensional convolutional neural network combined with a long short-term memory network based on the Gaussian smoothing method (GSCNN-LSTM) for training and prediction. Following the idea of ensemble models and the construction method of the advanced model, and fully considering the temporal and spatial distribution characteristics of the polar vortex, a high-quality and high-precision DL method for polar vortex intensity index prediction is obtained.
During the training process, we input the one-dimensional intensity index time series processed by GS and the polar vortex GPH anomaly distribution images into the model, extract features through two 3DCNN convolution and pooling layers, pass the reshaped data through the repeat vector layer into the LSTM to extract the time series features, and finally output the multi-day polar vortex intensity prediction results. Compared with some traditional and advanced DL methods, the novel GSCNN-LSTM model obtains a high correlation between the predicted sequence and the original sequence and more accurate prediction results, according to the Pearson correlation coefficient and MAE results. Secondly, the forecast lead time extends to the 20-day limit of the intensity series, at which a correlation of 0.49 can still be achieved. Finally, the model also shows good stability, and it is better than general DL algorithm models such as 3DCNN and ConvLSTM. The novel algorithm model can provide some guidance for the prediction of many nonlinear systems in the atmosphere. However, this article still has many limitations. As revealed by previous studies, many impact factors are often considered in the prediction of diverse vortex systems, such as the temperature of the underlying surface, the dynamic field, and the interaction between different systems [61][62][63][64]. In this paper, many weather systems affecting the intensity of the polar vortex are not considered in detail. Therefore, referring to the prediction of TC intensity and track, multifactor prediction for the polar vortex needs to be explored and improved. Secondly, recently proposed DL networks are not used here to extract the polar vortex image characteristics. Even though the correlation of the training results is good, the predicted polar vortex intensity values still need improvement, and advanced image neural networks need to be further explored. According to the hypothesis proposed in this paper, although vortex systems in weather systems have dynamic structures and characteristics similar to the polar vortex, their impact factors are different, so the generalization ability has not yet been well verified. Further work is needed to explore more innovative DL models and improve their generalization ability. Secondly, according to the hypothesis of improving DL models, this paper proposes a GS method combined with the DL model, which has achieved good results in polar vortex prediction. Finally, regarding the hypothesis of adding appropriate variables to the model, the addition of the temperature field, potential vorticity, and dynamic field cannot improve the prediction effect of the model and can even reduce the accuracy of the DL models, so appropriate prediction factors for polar vortex intensity need to be studied further. In this paper, the polar vortex is taken as the research object, as it is the most representative vortex system in the Northern Hemisphere (NH). In future research, we can apply this method and various improved models to forecast many small and medium-sized vortex systems, including the intensity prediction of TCs and extratropical cyclones. DL models are commonly used in the image prediction field [97,98], which is similar to the prediction of ENSO events [30]. The intraseasonal-scale prediction of polar vortex intensity in this paper can be further developed into the prediction of strong/weak polar vortex events.
15,772.4
2021-10-01T00:00:00.000
[ "Environmental Science", "Computer Science", "Physics" ]
GalaxyRefine: protein structure refinement driven by side-chain repacking The quality of model structures generated by contemporary protein structure prediction methods strongly depends on the degree of similarity between the target and available template structures. Therefore, the importance of improving template-based model structures beyond the accuracy available from template information has been emphasized in the structure prediction community. The GalaxyRefine web server, freely available at http://galaxy.seoklab.org/refine, is based on a refinement method that has been successfully tested in CASP10. The method first rebuilds side chains and performs side-chain repacking and subsequent overall structure relaxation by molecular dynamics simulation. According to the CASP10 assessment, this method showed the best performance in improving the local structure quality. The method can improve both global and local structure quality on average when used for refining the models generated by state-of-the-art protein structure prediction servers. INTRODUCTION The structure of a protein can be predicted accurately from its sequence by template-based modeling when the sequence identity is sufficiently high (e.g. >30%) (1,2). However, even at a high sequence identity, the side-chain structure may be less accurate than the backbone structure, whereas at a lower sequence identity, predicted structures may have significant errors in both side-chain and backbone structures. Although ab initio protein structure predictions from sequences are notoriously difficult (3,4), ab initio refinement starting from a reasonable initial model structure is expected to be less difficult. Successful refinement can increase the applicability range of template-based models by providing more precise structures for functional study, molecular design or experimental structure determination (5,6). Since 2008, various refinement methods have been tested in the refinement category of the community-wide protein structure prediction experiment Critical Assessment of techniques for protein Structure Prediction (CASP) (5,6). Several methods were shown to improve the initial model structures (7)(8)(9)(10)(11)(12). Consistent improvement in such refinement experiments is more difficult than in typical refinement tests performed on lower-quality initial structures, as the initial structures are selected from the best models submitted by CASP predictors, which have already been refined by other prediction methods (6). In this article, we present a new model structure refinement web server called GalaxyRefine that has shown consistent improvement in CASP10, the most recent CASP held in 2012. GalaxyRefine first rebuilds all side-chain conformations and repeatedly relaxes the structure by short molecular dynamics simulations after side-chain repacking perturbations. Interestingly, this method can improve global and local structure quality. The method can improve global and local structure accuracy as well as physical correctness in 59, 67 and 79% of the CASP10 refinement category targets when measured by GDT-HA (13), GDC-SC (14) and MolProbity score (15), respectively. This method has been assessed to be more successful in refining the local structure and side-chain quality than any other method tested in CASP10. GalaxyRefine also provides four additional models generated by relaxation simulations after larger perturbations on secondary structure elements and loops, resulting in larger changes from the initial model structure.
GalaxyRefine can improve the models generated by state-of-the-art structure prediction servers such as I-TASSER (16) and ROSETTA (17) when tested on the server models submitted in CASP10. THE GALAXYREFINE METHOD GalaxyRefine first rebuilds all side chains by placing the highest-probability rotamers (18), starting from the core and then extending to the surface layer by layer. On detecting steric clashes, rotamers of the next highest probabilities are attached. After attaching all side chains, the number of neighboring Cβ atoms is counted around each side chain, and the initial side-chain conformation is recovered if the number deviates from the canonical distribution for the amino acid under the same degree of surface exposure. The model with the rebuilt side chains is then refined by two relaxation methods, a mild relaxation and an aggressive one. The lowest-energy model of the 32 models generated by the mild relaxation is returned as model 1, and four additional models closest to the four largest clusters of the 32 models generated by the aggressive relaxation are returned as models 2-5. Both methods are based on repetitive relaxations (22 and 17 for the mild and aggressive relaxations, respectively) by short molecular dynamics simulations (0.6 and 0.8 ps for the mild and aggressive relaxations, respectively) with a 4 fs time step after structure perturbations. Structure perturbations are applied only to clusters of side chains in the mild refinement, whereas more forceful perturbations to secondary structure elements and loops are applied in the aggressive refinement. The triaxial loop closure method (19)(20)(21) is used to avoid breaks in model structures caused by perturbations to internal torsion angles. The energy functions used for the two relaxation methods are linear combinations of a physics-based energy function complemented by database-derived terms and a harmonic restraint energy derived from the given initial model structure. The relative weight of the restraint energy to the physics-based energy for the mild relaxation is five times larger than that for the aggressive relaxation. The physics-based energy function contains CHARMM22-based molecular-mechanics bonded energy terms (22), Lennard-Jones interaction energy, Coulomb potential energy, FACTS solvation free energy (23) and solvent accessible surface area energy, whereas the database-derived energy function contains hydrogen bond energy (24), dipolar-DFIRE potential energy (25) and side-chain and backbone torsion angle energy (26). Performance of the method The GalaxyRefine method has been extensively tested on (i) the refinement category targets of CASP8 (5), CASP9 (6) and CASP10 (53 proteins), (ii) Zhang-server (I-TASSER) models (84 proteins) (11) and (iii) ROSETTA server models (69 proteins) (17) for CASP10 template-based modeling targets and (iv) FG-MD benchmark set targets (147 proteins) (8). The test results in terms of improvement of model 1 (and the best refined model out of models 1-5) over the initial input models, for backbone structure accuracy measured by GDT-HA (13), side-chain structure accuracy measured by GDC-SC (14) and physical correctness measured by MolProbity score (15), are summarized in Table 1. The GalaxyRefine server shows average improvement in all test cases except for the MolProbity score of the ROSETTA models, which have exceptionally good MolProbity scores.
Although GalaxyRefine can improve GDT-HA and GDC-SC for all test sets, the average improvements are small (<1 and <3%, respectively), suggesting the necessity for further improvement in this field. The improvement in MolProbity score is relatively larger, with an average improvement of 0.6 (from 2.58 to 1.96); typical MolProbity scores for experimental structures are in the range of 1-2. A successful refinement example is illustrated in Figure 1. Hardware and software The GalaxyRefine server runs on a cluster of 4 Linux servers with 2.33 GHz Intel Xeon 8-core processors. The web application uses Python and the MySQL database. The refinement method implemented in the GALAXY program package (28)(29)(30)(31) is written in Fortran 90. The Java viewer JMol (http://www.jmol.org) is used for visualization of predicted structures. Input and output The only required input is a single-chain protein structure without internal gaps in the PDB format. The expected run time is generally 1-2 h. Five refined models can be viewed and downloaded from the website (Figure 2). Information on the structural changes obtained by the refinement of the input structure is provided in terms of GDT-HA, RMSD and MolProbity score in a separate table. CONCLUSIONS GalaxyRefine is a web server for protein model structure refinement that is particularly successful in improving local structure quality, as demonstrated by the tests on CASP refinement category targets and CASP10 server models. On average, it shows moderate improvement in backbone structure quality. The server may be used to refine model structures obtained from available structure prediction methods, including the current best template-based modeling servers.
1,734.6
2013-05-21T00:00:00.000
[ "Materials Science" ]
The Prospect of Dollarization in Nigeria: An Empirical Review Dollarization has been perceived in the literature as a strategy that could help emerging and developing economies achieve price stability via lower inflation rates occasioned by the adoption of a stronger currency. Supporters of dollarization also infer that the strategy has the ability to affect real economic variables such as growth and employment positively through its ability to lower interest rates, increase investment and eliminate currency risk, thereby increasing international trade. In this study, we examined the effect of dollarization on selected macroeconomic variables in Nigeria from 1972 to 2017. Using simple regression models, we analysed the impact of a real dollarization index on prime lending rates, inflation, unemployment, PCI, FDI, real GDP growth and total trade in Nigeria. Empirical results revealed that dollarization did not exert a significant positive effect on the selected macroeconomic variables. The study therefore recommended that government should be intentional about putting measures in place to strengthen the Nigerian naira so that economic agents will see no need to hold their wealth in, or transact with, a foreign currency. This will discourage dollarization in Nigeria, which is perceived to be a major driver of inflation in the country. Introduction In the recent past, Nigeria has struggled with a declining economy. Data from the National Bureau of Statistics [1] revealed that the economy has been growing at an average of 1.8 per cent compared to the average growth of 5 per cent recorded between 2011 and 2015. One of the major menaces in the economy has been the high and persistent inflationary trend in the country. Several attempts to curb inflation, using monetary policy for instance, have failed. The literature however posits that countries that give up the use of their domestic currency, and delegate the operation of their monetary policy to a more stable currency, will tend to have lower inflation, among other things. This process of surrender is termed dollarization. Dollarization or currency substitution is the use of any foreign currency in place of or alongside a domestic currency. While the term might sound restrictive, dollarization is used in a broader sense to describe the adoption of a foreign currency in a domestic economy. Dollarization has been suggested as a policy that might, among other goals, promote international trade between a country and the country whose foreign currency is adopted and also drive economic development and prosperity, particularly in developing countries [2][3][4]. There is increasing evidence that the use of a common currency may induce a substantive increase in trade, which in turn may fuel economic growth [5]. Studies show that a currency union increases bilateral trade among its members, and the effect is both large and statistically significant [6][7]. Dollarization has also been noted to affect the nominal exchange rate and the price level [8]. Dollarization is not official in Nigeria. However, since the 1980s, the U.S. dollar has been increasingly used as a medium of exchange within Nigerian markets [9]. In 2012, the then CBN Governor, Sanusi Lamido Sanusi, decried the dollarization of the Nigerian economy, stating that the situation was becoming worrisome [10].
To stem the tide of dollarization, in a circular dated May 21, 2015, the Central Bank of Nigeria (CBN) affirmed that the pricing of goods and services in Nigeria shall continue to be in Naira only, implying that dollarization in all its forms was a criminal offence in Nigeria [11]. On March 14, 2017, the Nation Newspaper reported that the House of Representatives resolved to probe foreign schools in Nigeria collecting fees in foreign currency [12]. This was pursuant to Sections 15, 20 (1) and (5) of the CBN Act, which made it illegal to price or denominate the cost of any product or service in any foreign currency in Nigeria other than the Naira. While the Nigerian government has not officially sanctioned the dollarization of the economy, the process appears to be gaining acceptance. The desire to hold foreign currency may be due to the incessant bouts of inflation and currency devaluation in the country, which weaken the Naira, eroding its purchasing power and the value of personal wealth. Some economists have argued that pursuing a dollarization strategy helps developing countries grow their economies through the stabilization of inflation, increased investment and trade opportunities. Others, however, discourage a dollarization strategy because it causes these economically vulnerable countries to relinquish control over their own monetary policy. One argument favouring dollarization is that it lowers interest rates and stimulates investment. The Nigerian economy has over the years been struggling with the problem of inflation and a high level of unemployment arising from low levels of investment in the country. While it is presently considered undesirable for the economy, dollarization may prove to be the most viable solution to the problem of inflation and poor standard of living in Nigeria. This work is unique because most studies on dollarization in Nigeria, with the exception of the work by [13], have been mainly interested in examining the extent and determinants of dollarization in the country, without looking at how dollarization influences macroeconomic variables. This study filled this gap by computing a dollarization index for Nigeria and investigating its impact on selected macroeconomic variables in Nigeria. The variables of interest are lending rates, inflation, unemployment, PCI, FDI, real GDP growth and total trade, which extends the scope of variables covered in earlier works [13]. The aim of this paper is, therefore, to examine areas in which Nigeria might benefit should the country decide to dollarize the economy. The benefit is defined in relation to the signs of the coefficients of the selected macroeconomic variables. Structurally, the paper is arranged into five sections. Following the introductory section, section two provides a brief review of the concept of dollarization, with highlights on its variants, and also presents the theoretical literature on dollarization. Section three discusses the experiences of countries that have been dollarized, to draw lessons for Nigeria, and also discusses macroeconomic performance in Nigeria relative to dollarized economies. Section four discusses the methods, sources of data and results from the empirical analysis, while section five presents concluding remarks and recommendations. Dollarization: Definition and Scope Dollarization can be described as a situation in which a foreign currency replaces a country's currency in performing several functions of money [4].
When the inhabitants of a country use a foreign currency in parallel to or instead of their domestic currency, then the country is dollarized [14]. The concept also describes dual-currency utilization, since the term is more appropriately connected with the official designation of the United States dollar as the national currency or the adoption of a stronger foreign currency such as the United States dollar, euro, or yen [15][16]. In summary, dollarization means that the country adopts the currency of another country (for example, the dollar) as a means of payment and unit of account [17]. Therefore, while many people associate dollarization with the United States dollar, the association is not exclusive. The euro, South African rand, Russian ruble, New Zealand and Australian dollars, and Japanese yen are currencies that have been used by other countries. Though dollarization and currency substitution are often used interchangeably, dollarization is most related to the use of foreign currency as a unit of account and a store of value but not necessarily as a medium of exchange, while currency substitution primarily indicates the replacement of domestic currency by foreign currency as a medium of exchange [18]. It is however obvious that the two concepts are largely defined based on their relatedness to the three major functions of money [19]. Currency substitution can however be regarded as a subset of dollarization, which can also be defined as a process of substituting foreign currency for a domestic currency to fulfil the essential functions of money as a medium of exchange (currency substitution) and/or as a store of value (asset substitution) [20]. Accordingly, dollarization comprises both currency and asset substitution, and both are related to the functions of money as a medium of exchange and store of value. Dollarization is a generic term that can fall into different categories. It can be official or unofficial. Unofficial dollarization occurs when residents of a country hold a large share of their financial wealth in assets denominated in foreign currency, though the foreign currency lacks the legal tender privileges that the domestic currency enjoys [21][22]. Unofficial dollarization has existed in many countries for years but has attracted little or no political attention because it is somewhat beyond the control of governments, though it constitutes a major issue of interest to economists [23]. Particularly in developing countries, foreign currencies such as the dollar are widely used and accepted in private transactions even though they are not classified as legal tender by the country's government [24]. This makes their use unofficial. An IMF study measuring unofficial dollarization by the ratio of foreign-currency deposits to the broad money supply (M2 or M3) found that in 1995, 18 countries had high unofficial dollarization (exceeding 30 percent), with an average degree of dollarization of 45 percent. Another 34 countries had moderate unofficial dollarization, averaging about 16 percent of the broad money supply. However, unofficial dollarization was not limited to developing countries. For instance, foreign currency deposits were about 22 percent of broad money in Greece and more than 15 percent even in the United Kingdom [21]. Unofficial dollarization is essentially the rational response of economic agents to a loss of confidence in the domestic economy, often resulting from episodes of inflation, currency devaluation and/or currency confiscation [22].
Unofficial dollarization may often be related to the growth of underground or unrecorded economic activities, since currency, particularly foreign currency, is often the preferred medium of exchange for such transactions [19]. Official dollarization or full dollarization is a complete monetary union with a foreign country from which a country "imports" a currency, by making the foreign currency full legal tender and reducing its own currency, if any, to a subsidiary role [25]. In officially dollarized countries, there is no domestic currency, no currency risk and, therefore, no risk of currency crises. Full dollarization does not mean that a foreign currency is the only legal tender; freedom of choice provides some protection from being stuck using a foreign currency that becomes unstable [25]. It is more of a portfolio shift away from domestic currency to foreign currency, to fulfil the main functions of money: store of value, unit of account, and medium of exchange. It is typically a result of unstable macroeconomic conditions and a rational response of people seeking to diversify their assets in the face of heightened domestic currency risk [26][27][28]. Thus dollarization will mostly occur when there is high inflation and macroeconomic instability, particularly in vulnerable developing economies. Theoretical Literature Dollarization is a form of fixed exchange rate and a special case of monetary unification, a situation where some economies come together to adopt a common currency and establish a common central bank to which they surrender monetary authority. A monetary union is said to be desirable when the economies operate in an optimum currency area (OCA). An OCA describes an entire region where economic efficiency is maximized because its members share a single currency. The theory of the OCA postulates that countries that share strong economic ties (such as trading relationships) may benefit from a common currency. This theory was formally presented in an article titled "A Theory of Optimum Currency Areas" in 1961 by Robert Mundell [29]. The intent of the article was to address the economic criteria that would necessitate various regions of the world to adopt a common currency or engage in a monetary union. Mundell used factor (labour and capital) mobility as the most important criterion (principle or standard) that should necessitate the adoption of a common currency. Mundell developed a cost-benefit analysis of the monetary union. He identified the benefits to include a reduction of the various transaction costs generated by the existence of multiple currencies, a gain in the liquidity of the currency, the elimination of exchange-rate uncertainty, and enhanced credibility for the monetary authority. This gain he attributed mainly to the expansion of the currency's area of transactions. The costs identified included the inability of a country to conduct independent monetary policy, the loss of seignorage, the inability to devalue or revalue the domestic currency for stabilization purposes, and the elimination of the exchange rate between participants in the union. In Mundell's framework, the main force that favours a common currency is the transactions cost benefit associated with the exchange of goods or services and incurred in overcoming market imperfections. The use of the same money facilitates the exchange of goods and services and also financial exchanges. The expansion of trade, or globalization, has revealed the increasing importance of the transaction cost benefit [30].
The OCA theory was later extended when openness was identified as a superior criterion for pursuing currency union or creating OCAs [31]. If a country is relatively open, a flexible exchange rate will greatly influence the internal price level, since this form of exchange rate responds to external forces of demand and supply. The more open an economy is, the more suited to a fixed exchange rate it should be [31]. This suggests that forgoing an independent exchange rate does not entail a serious loss of policy independence for member countries of a monetary union that are very open to international trade. The nominal exchange rate will no longer be an important adjustment tool for very open countries, because changes in its nominal value are quickly followed by changes in domestic prices, leaving the real exchange rate unaffected. Therefore, countries in an OCA will experience a stable real exchange rate. Product diversification was also suggested as a crucial criterion for an OCA [32]. A well-diversified economy rarely encounters severe demand shocks, because positive changes in other sectors will offset negative changes in the affected sector. Product diversification lowers the probability of asymmetric shocks and reduces their negative effects. Thus a fixed exchange rate regime, which can be facilitated by dollarization, is more advantageous for a well-diversified economic structure. Literature Review The U.S. dollar has been Panama's legal tender for 114 years, and this "self-denying ordinance" [33] has given the country a degree of monetary stability. Dollarization in Panama has been observed to eliminate the foreign exchange risks, currency mismatches, and speculative attacks so common in other countries with central banks and "sovereign" money [34]. Panama's superb inflation performance has been attributed to dollarization, and the dividends of dollarization have been sustained in Panama basically because of the stable value of the US dollar. The country has repeatedly been described as one of the best performing countries in Latin America. In 2014, the misery index, an informal measure of the state of an economy obtained by adding together its rate of inflation and its rate of unemployment, was the lowest for Panama, at 9.39, among 18 Latin American countries. In the same year, economic growth in Latin America and the Caribbean was a measly 0.8%, while Panama sustained a growth rate of 6.2% [35]. Panama's economy is considered to be among the fastest growing and best managed in Latin America. Panama's relative performance has been summarized in three points. First, Panama's experience confirms that an exchange rate peg, with dollarization being the extreme example, generates low and stable inflation; in this regard, it seems that extreme pegs deliver even better inflation performance than currency boards. Second, this gain in inflation performance is achieved without compromising average GDP growth. Third, the absence of monetary financing did not prevent Panama from having large, persistent fiscal deficits that were no better than those of the typical Latin American country [37]. Ecuador embraced full dollarization in 2000 after the collapse of its financial system in 1998-1999 [17]. Following the banking crisis of 1999, the U.S. 
dollar became legal tender in Ecuador on March 13, 2000, and sucre notes (Ecuador's monetary unit) ceased being legal tender on September 11. Ecuador dollarized in 2000 in the midst of a severe economic crisis, with a collapsing banking system, a sliding local currency, and after defaulting on its Brady bonds in late 1999 [16]. The regime was implemented in an attempt to reduce inflation, bring stability to the economy, and gain credibility with international investors. Since dollarization, Ecuador's inflation has been significantly reduced to single digits. Reports of the effects of dollarization in Ecuador are mixed. While some praise dollarization for stabilizing the economy, others feel that both supporters and opponents of dollarization have overstated the policy's effects on the Ecuadorian economy. For instance, from 2015 onwards, thousands of Ecuadorians crossed the bridge from Tulcán, Ecuador to the border town of Ipiales, Colombia to go shopping [38]. Ecuadorians purchased goods in Colombia en masse for a simple reason: prices in Colombia had become significantly cheaper. The situation became a political concern to the point that the president issued a "call of conscience", asking his compatriots to "offer support to the national production" by buying Ecuadorian products. Wang further noted that dollarization is not a sole remedy for all economic problems, and neither is having a national currency. El Salvador implemented its dollarization plan in 2001. This was followed by a fall in the interest rate on consumer mortgages from 17 to 11 percent. However, El Salvador's economic growth since adopting the dollar as the official currency in 2001 has not performed any better [39]. It does appear that El Salvador saw higher growth rates in the years prior to the adoption of the dollar, though it is difficult to attribute the country's failure to obtain a higher growth rate solely to dollarization. For instance, it was observed that El Salvador's exports slowed because countries like China were trading with their own undervalued currency, while El Salvador traded with the dollar, which made El Salvador's exports relatively more expensive than Chinese exports. The main driver of dollarization in many countries is the attempt by residents to protect the value of their wealth and income from being eroded by inflation and exchange rate depreciation [15]. For instance, in heavily dollarized economies, periods of sharp devaluation of the domestic currency are often met with a shift of financial assets and liabilities towards foreign currency, intensifying downward pressure on the exchange rate [40]. This would suggest that economies with higher inflation rates would have relatively high ratios of dollarization, as savers shelter the real value of their wealth. Dollarization in Tanzania, however, was found not to respond to inflation in the manner predicted by the literature [41]. Using the Chow test (Chow, 1960) to examine whether exchange rate volatility and inflation rate fluctuations contributed to dollarization in Tanzania, results were equally negative and insignificant [42]. Another study found that the higher the domestic inflation rate vis-à-vis foreign inflation, the higher the level of foreign currency holdings [43]. Inflation was also found to increase as a result of an increase in dollarization, suggesting a bidirectional association [44]. The effects of dollarization on the economy are therefore mixed. 
While countries such as Panama laud its effects, dollarization was identified as the first challenge and obstacle preventing Somaliland from achieving development and higher economic growth [45]. Dollarization may also complicate the process of setting monetary policy objectives [8]. Furthermore, dollarization brings many challenges that could have adverse effects on inflation targeting [46]. According to an IMF team, experience shows that dollarization is often difficult to reverse [47]. While the use of a foreign currency as a store of value or for domestic transactions has increased, there are very few cases in which the trend has been significantly reversed [48]. This difficulty may be attributed to the fact that many people, particularly in developing countries, have more confidence in foreign currencies, which are often more valuable than domestic currencies [49]. Practices regarding the use of currencies for the settlement of transactions change slowly and only when there are significant benefits to be gained from switching currencies [21]. Many SMEs hesitate to expand into international markets because of the variation in national business environments, which often requires the conversion of currencies [50]. In addition to the political and economic factors that influence currency exchange, investors also have to consider whether the country in which they are investing has a stable currency. Dollarization has been identified as a viable solution to this problem because it reduces currency risk and makes it easier for businesses to expand outside their home markets. However, an investigation of capital structure decisions in a highly dollarized economy established that, among Cambodian firms, those with higher returns on assets, and thus a stronger capacity to generate internal funds, are more sensitive to currency mismatch risks when relying on US dollar borrowing than firms with lower returns on assets [51]. In relation to international trade, dollarization was noted to increase bilateral trade with dollarized countries, while also promoting bilateral trade between the country that dollarized and other dollar-zone countries [52]. That study re-evaluated the average treatment effect of dollarization on bilateral US trade with the six dollarized countries and on the bilateral trade of the dollarized countries, carefully controlling for non-random selection of policy adoption. It found strong and robust evidence that dollarization not only significantly increases bilateral US trade with dollarized countries, but promotes trade among dollar-zone countries as well; the results suggested that the trade-enhancing effects of dollarization are substantial. In Nigeria, studies have investigated the relationship between nominal exchange rate volatility and dollarization using the Granger causality test over the period 1986 to 2003 [13]. Results from that study revealed a bi-directional relationship between exchange rate volatility and dollarization, though the causality was stronger from dollarization to exchange rate volatility [53]. The study recommended that policies aiming to reduce exchange rate volatility in Nigeria must include measures that specifically address the issue of dollarization in the country. Another study investigated the existence and extent of currency substitution, and examined its impact on the demand for money in Nigeria from 1980 to 2014; the study estimated six models based on Cuddington's currency substitution framework [53]. 
Findings revealed that currency substitution has increased over time in Nigeria and that its determinants were the expected rate of depreciation, the inflation rate, election periods, the crude oil price, and the foreign rate of interest. Additionally, factors such as the ceiling on the interest rate for US dollar deposits and interest rate differentials have also been identified as important influences on deposit dollarization [54]. The performance of Nigeria's macroeconomy since independence has not been commendable, even though the country has the potential to be a force to be reckoned with given its abundant human and natural resources. The country has over the years struggled with poor macroeconomic performance. Recently, it was reported that Nigeria has overtaken India as the world's poverty capital, with 87 million Nigerians living in extreme poverty [55]. Most government policies to address these poorly performing macro variables have proved unsuccessful over the years, and the Nigerian government may need to rethink its policy stance to address the hardship afflicting Nigerians. The country has been struggling with persistent inflationary pressures. Beginning in 1969 with the Nigerian civil war, and continuing through the 1970s following the government's effort to reconstruct the war-damaged country, the inflation rate in Nigeria has persistently remained very high, peaking at 72.8% in 1995 [56]. The government's effort to achieve single-digit inflation over the years under different monetary policy regimes has been unsuccessful, as the trend is hardly sustained for more than two years. Compared to a dollarized economy like Panama, Nigeria's instability calls for pragmatic action on the part of policy makers, as is evident in figure 1. According to data [57], in 2017 real GDP growth in Nigeria was 0.8%, placing Nigeria in the 164th position out of 190 countries. When compared to Libya, which had the highest growth rate of 64% in 2017, Nigeria's real GDP growth was 98.74% lower. In terms of per capita GDP, Nigeria has one of the lowest levels, and it has been declining. Compared with that of Equatorial Guinea, the richest country in Africa at $34,865, Nigeria's situation becomes all the more obvious [58]. Again, compared to Panama, with a virtually identical population growth rate, Nigeria is crawling as a nation. In 2010, when Vision 20:2020 was enacted, it was projected that for Nigeria to be among the 20 most industrialised economies in the world by 2020, its GDP would have to grow at an annual rate of 14%. The highest growth rate recorded for the country since then has been 7.84%, an indication that the country is very far from realising that vision. Nigeria's per capita GDP has remained one of the lowest globally. This indicator, which serves as a measure of a country's living standards, makes it possible to compare the prosperity of countries with different population sizes. While the countries with the highest GDP per person often have prosperous economies and few residents, a study by the United Nations Development Programme in 2007 revealed that some countries have experienced relatively rapid population growth alongside a rapid increase in per capita GDP [59]. Nigeria and Panama have roughly identical population growth rates but very different PCI, as shown in figure 3. Using the Solow model, it has been explained that higher levels of savings and investment contribute to a higher level of PCI [60]. Nigerians have a poor saving culture. 
Coupled with the high level of poverty in the country, the low PCI is but a reflection of the prevailing macroeconomic conditions in Nigeria. Unemployment has remained a bane in Nigeria, particularly among the youth. The Labour Statistics report of Nigeria for Q4 2017 revealed that 7.9 million youths between the ages of 15 and 34 were unemployed, while the National Bureau of Statistics reported a 61.6% rate of youth employment as at June 5, 2017. Comparatively, the unemployment rate is much higher in Nigeria than in Panama, a dollarized economy. In relation to the strength of its currency, the purchasing power of the naira, as reflected in the values of its real effective exchange rate (REER), reveals a weak currency whose purchasing power is consistently declining. This indicator, which compares a country's currency to a basket of other countries' currencies, is often used to compare a domestic currency's performance with that of the country's most important trade partners. This is illustrated in figure 5, where the REER for Nigeria is compared with that of two of its prominent trading partners, China and the United States of America. Though the indicator is broadly similar for the three countries, Nigeria's REER began a rather steep decline in 2014, whereas the marginal declines for China and the USA began a year later, in 2015. This indicates a deeper weakening of the naira compared to the yuan and the dollar. Given the dismal performance of most macroeconomic indicators in Nigeria, it is evident that the country needs a viable means of intervention to drive economic progress. Dollarization is increasingly being considered a viable economic tool that can help economies like Nigeria achieve stability, growth, and prosperity while also lending some credibility to governance. The experience of Panama might be repeated in Nigeria with full dollarization of the economy. It has been suggested that vulnerable emerging-market countries can always do away with their central banks and domestic currencies, replacing them with a sound foreign currency, in order to enjoy the benefits associated with strong currencies [35]. Methods There are different ways of measuring dollarization in an economy, and deposit dollarization is one of them. Deposit dollarization arises when economic agents seek to conserve the value of their wealth in foreign assets, switching from holding domestic currency to holding foreign currencies in bank accounts domiciled with domestic banks [21]. Several options have been proposed for constructing a dollarization index that measures the extent of a country's reliance on foreign currency [61][62]. In this study we adopt the relationship proposed in an International Monetary Fund (IMF) working paper [63]. Since foreign currency deposits (FCDs) are held in different foreign currencies, exchange rate movements could lead to big swings in the dollarization ratio even with a constant stock of FCDs in foreign currency terms. Thus, foreign exchange holdings need to be adjusted for exchange rate changes to prevent an exchange-rate bias. The authors therefore proposed generating a "real" deposit dollarization index by converting both foreign currency deposits and bank deposits to dollars and multiplying both (back to domestic currency) by a fixed base-year nominal exchange rate. The "real" deposit dollarization indicator (RDI) is thus derived as a constant-exchange-rate indicator as shown: RDI_t = aFCD_t / (aFCD_t + DCD_t), where DCD_t denotes deposits denominated in domestic currency and aFCD_t is the adjusted foreign currency deposit, derived from the relationship aFCD_t = (FCD_t / NER_t) x NER_2010, where FCD_t is the value of foreign currency deposits in domestic currency at the current exchange rate and NER is the nominal exchange rate (local currency per dollar). The NER for the year 2010 is used as the fixed base-year rate for the adjustment, since 2010 is currently the base year in Nigeria. 
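To make the construction of the index concrete, the following is a minimal sketch in Python of how the adjusted FCD and the resulting index could be computed. The column names and figures are hypothetical placeholders, not actual CBN data, and the treatment of the denominator (unadjusted domestic-currency deposits) follows the reconstruction given above rather than a verbatim formula from [63].

import pandas as pd

# Hypothetical annual series (placeholder values, not actual CBN data):
# fcd_naira : foreign currency deposits expressed in naira at the current rate
# dcd_naira : deposits denominated in naira
# ner       : nominal exchange rate (naira per US dollar); 2010 is the base year
df = pd.DataFrame({
    "fcd_naira": [1200.0, 1500.0, 2100.0],
    "dcd_naira": [8000.0, 8500.0, 9000.0],
    "ner":       [150.3, 158.6, 197.0],
}, index=[2010, 2012, 2015])

base_ner = df.loc[2010, "ner"]

# Adjusted FCD: convert to dollars at the current rate, then back to naira at
# the fixed 2010 rate, so valuation swings from exchange rate movements drop out.
df["aFCD"] = df["fcd_naira"] / df["ner"] * base_ner

# "Real" deposit dollarization index: share of adjusted FCD in total deposits.
df["RDI"] = df["aFCD"] / (df["aFCD"] + df["dcd_naira"])
print(df[["aFCD", "RDI"]].round(3))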
The computed index is used as the explanatory variable in the study to examine its impact on selected macroeconomic variables in Nigeria, using simple regression analysis to summarize and reveal the relationship between dollarization and macroeconomic variables in Nigeria. Model Specification The simple linear regression model for the study is thus specified as Y_t = b0 + b1*RDI_t + u_t, where Y_t stands in turn for each of the selected macroeconomic variables (FDI, GRGDP, INFLR, TTRD, PLR, PCI and UNER), b0 and b1 are the parameters to be estimated, and u_t is the error term. Sources of Data Data for the study were sourced from the Central Bank of Nigeria Statistical Bulletin and the World Economic Outlook (WEO) database. The study covers a period of 46 years, from 1972 to 2017. Results and Discussions Trend analysis of the data was performed to trace the behaviour of the variables over time and to explain the nature of the data. Unit Root Test The presence of a unit root has significant implications for econometric modelling: it produces spurious regressions, in which a coefficient that appears significant is frequently not significant at all, and it can produce high R-squared values even when the data are uncorrelated. To avoid this, the unit root test was conducted to determine the order of integration of the variables. The results are presented in Table 2. From Table 2, the Augmented Dickey-Fuller test reveals that only two of the eight variables are stationary at level, while the others are stationary at first difference. Therefore, to run our regressions, the values of GRGDP and INFLR will be used at levels while the differenced values will be used for the remaining six variables. 
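A minimal sketch of this unit-root screening, written in Python with statsmodels and using a hypothetical stand-in series rather than the study's actual data, is shown below; a variable whose level series fails the test at the 5 percent level would be re-tested in first differences, mirroring the decision just described.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical stand-in series for one of the study's variables (e.g. PLR).
rng = np.random.default_rng(0)
plr = pd.Series(15 + np.cumsum(rng.normal(size=46)),
                index=range(1972, 2018), name="PLR")

def adf_report(series, name):
    """Run the Augmented Dickey-Fuller test and report the decision at 5%."""
    stat, pvalue, *_ = adfuller(series.dropna(), autolag="AIC")
    verdict = "stationary: use at level" if pvalue < 0.05 else "non-stationary: difference it"
    print(f"{name}: ADF stat = {stat:.3f}, p-value = {pvalue:.3f} -> {verdict}")

adf_report(plr, "PLR")            # test the level series
adf_report(plr.diff(), "D(PLR)")  # test the first difference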
Descriptive Statistics To describe the basic features of the data employed in the study, we performed descriptive statistics, presented in table 3. From table 3, since the values of the mean and median are not too far apart in any case, none of the data sets has obvious outliers. The difference between the minimum and maximum values reveals the variability or spread of the data, from which we can deduce that the data sets are widely spread out rather than clustered together. The descriptive statistics reveal that none of the series is perfectly symmetrical, which would require a skewness value of zero. However, DRDI is almost symmetrical, with a value of 0.78, and DLOGPCI and DPLR are fairly symmetrical with values between -0.5 and 0.5. DLOGFDI and GRGDP, on the other hand, are moderately skewed, INFLR and DLOGTTRD are highly and positively skewed, and DUNER is highly but negatively skewed. The kurtosis values show that all the variables have heavy tails and are therefore leptokurtic. Results of Regression Analysis After fitting the models, we checked the residual plots to be sure our estimates were unbiased (see the appendix for the residual plots). Since the plots were all randomly dispersed around the horizontal axis, the linear regression models were considered appropriate. The regression analyses were performed to determine how changes in the independent variable are associated with changes in each dependent variable. The estimated linear equations are shown below. To determine whether the coefficient of our independent variable (RDI) is really different from 0, and hence that RDI has a genuine effect on the dependent variables, we consider the individual standard errors and p-values from the regression output. If a coefficient is large compared to its standard error (SE), then it is probably different from 0, and if it is, the p-value will be 0.05 or less, indicating that the coefficient is statistically significant; lower values of the SE are therefore preferred. The goodness of fit of each model is measured using the R-squared, which indicates the percentage of the variance in the dependent variable that the independent variable (RDI) explains. Before relying on the value of the R-squared, however, we examined the residual plots to ensure that the models were not biased. The F-statistic is used to determine the overall significance of each model: if the p-value for the F-statistic is less than 0.05, we conclude that the regression model fits the data better than a model with no independent variables; otherwise, we conclude that the R-squared is not significantly different from zero. From Table 4, the coefficient of RDI in the FDI model is negative, meaning that as RDI increases, FDI falls. This was against our expectation. The result is, however, not statistically significant, as reflected by the p-value. The value of the R-squared reveals that only 2.81 percent of the variation in FDI is explained by changes in RDI. This result is of little use in any case, as the F-statistic reveals that the model for FDI is not statistically significant, and thus the R-squared generated is not different from zero. GRGDP, which was expected to respond positively to changes in RDI, also has a negative coefficient, and the values of the SE and p-value reveal that RDI is statistically insignificant in explaining changes in GRGDP. The explanatory power of the model (0.06 percent) is also very weak and statistically insignificant, with an F-statistic of 0.0264 and a p-value of 0.8718. Counterintuitively, the results revealed that an increase in RDI leads to a fall in both INFLR and TTRD. These results run contrary to findings by others. For trade, this means that if Nigeria became officially dollarized, the government might not achieve its goal of increasing, rather than lowering, the volume of trade in Nigeria. However, the result for INFLR is not statistically significant, so there is no evidence that the coefficient is statistically different from zero, though statistical significance is established for TTRD at 10 percent. Further evidence reveals that the two models are very weak and statistically insignificant in explaining changes in both INFLR and TTRD in Nigeria, as shown by the respective values of the R-squared and F-statistics and the accompanying p-values. PLR, PCI and UNER all had the expected signs: an increase in RDI would lead to a fall in the prime lending rate and in unemployment in Nigeria, and to an increase in PCI. However, RDI was not found to be statistically significant in explaining changes in these variables, nor did the models exhibit strong explanatory power or the statistical significance needed to show that they differ from models with no independent variables. The R-squared values for PLR, PCI and UNER were 0.05 percent, 0.17 percent and 0.07 percent respectively, and the p-values associated with the F-statistics were all greater than 0.05 and even 0.10. 
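The quantities relied on above (coefficient, standard error, p-value, R-squared and the p-value of the F-statistic) can be read directly from a fitted simple regression. The sketch below, in Python with statsmodels, uses hypothetical stand-in series for RDI and one dependent variable; it illustrates the workflow only and does not reproduce the study's estimates.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical stand-in series (not the study's actual data).
rng = np.random.default_rng(1)
rdi = pd.Series(rng.uniform(0.05, 0.30, size=45), name="RDI")
dlog_pci = pd.Series(0.02 + 0.1 * rdi + rng.normal(scale=0.05, size=45), name="DLOGPCI")

# Fit Y_t = b0 + b1*RDI_t + u_t and report the diagnostics discussed in the text.
res = sm.OLS(dlog_pci, sm.add_constant(rdi)).fit()
print("coefficient on RDI :", round(res.params["RDI"], 4))
print("standard error     :", round(res.bse["RDI"], 4))
print("p-value            :", round(res.pvalues["RDI"], 4))
print("R-squared          :", round(res.rsquared, 4))
print("F-statistic p-value:", round(res.f_pvalue, 4))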
Conclusion and Recommendation The basic objective of this study was to examine the impact of dollarization on selected macroeconomic variables in Nigeria. The basic motivation for the study is the suggestion in the literature that countries that dollarize tend to enjoy low inflation rates. Besides the consensus on the effect of dollarization on inflation, its effects on economic variables such as growth in real output, the unemployment rate and income levels are sometimes believed to be beneficial, as is the case in Panama (Hanke, 2017). Therefore, the a priori expectations were that RDI would have a negative influence on PLR, UNER and INFLR, and a positive influence on FDI, GRGDP, PCI and TTRD. The a priori expectation was met for three variables (PLR, PCI and UNER) but not for FDI, GRGDP, INFLR and TTRD, meaning that dollarization of the Nigerian economy would have mixed effects. However, there was no evidence from the study that the coefficients generated were statistically different from zero, based on the p-values obtained. In addition to the very weak R-squared values generated in all the models, the F-statistic associated with each model also revealed that none of the models was statistically significant. These results are an indication that the experience of Panama under dollarization may not be repeated in Nigeria. The study therefore recommends that the government should be intentional about discouraging dollarization in Nigeria, since it is not a major determinant of the performance of macroeconomic variables in the country. Measures must also be put in place to strengthen the Nigerian naira and to fight inflation, so that Nigerians will not see the need to hold their wealth in, or transact with, a foreign currency.
8,675.8
2019-07-26T00:00:00.000
[ "Economics" ]
Multiplicity and concentration results for some nonlinear Schr\"odinger equations with the fractional $p$-Laplacian We consider a class of parametric Schr\"odinger equations driven by the fractional $p$-Laplacian operator and involving continuous positive potentials and nonlinearities with subcritical or critical growth. By using variational methods and Ljusternik-Schnirelmann theory, we study the existence, multiplicity and concentration of positive solutions for small values of the parameter. Introduction In the first part of this paper we focus our attention on the existence, multiplicity and concentration of positive solutions for the following fractional p-Laplacian type problem where ε > 0 is a parameter, s ∈ (0, 1), 1 < p < ∞, N > sp, W s,p (R N ) is defined as the set of the functions u : R N → R belonging to L p (R N ) such that |u(x) − u(y)| p |x − y| N +sp dxdy < ∞. Here (−∆) s p is the fractional p-Laplacian operator which may be defined, up to a normalization constant, by setting |u(x) − u(y)| p−2 (u(x) − u(y)) |x − y| N +sp dy (x ∈ R N ) for any u ∈ C ∞ c (R N ); see [18,34] for more details and applications. Now we introduce the assumptions on the potential V and the nonlinearity f . We require that V : R N → R is a continuous function satisfying the following condition introduced by Rabinowitz [38]: and we consider both cases V ∞ < ∞ and V ∞ = ∞. Concerning the nonlinearity f : R → R we suppose that (f 1 ) f ∈ C 0 (R, R) and f (t) = 0 for all t < 0; (f 2 ) lim |t|→0 |f (t)| |t| p−1 = 0; (f 3 ) there exists q ∈ (p, p * s ), with p * s = N p N −sp , such that lim |t|→∞ |f (t)| |t| q−1 = 0; (f 4 ) there exists ϑ > p such that 0 < ϑF (t) = ϑ t 0 f (τ ) dτ ≤ tf (t) for all t > 0; (f 5 ) the map t → f (t) t p−1 is increasing in (0, +∞). Since we deal with the multiplicity of solutions to (1.1), we recall that if Y is a given closed set of a topological space X, we denote by cat Y (Y ) the Ljusternik-Schnirelmann category of Y in X, that is the least number of closed and contractible sets in X which cover Y ; see [32] for more details. Let us denote by Our first main result can be stated as follows: Theorem 1.1. Let N > sp, and suppose that V satisfies (V ) and f verifies (f 1 )-(f 5 ). Then, for any δ > 0 there exists ε δ > 0 such that problem (1.1) has at least cat M δ (M ) positive solutions, for any 0 < ε < ε δ . Moreover, if u ε denotes one of these solutions and x ε ∈ R N its global maximum, then lim Due to the variational nature of problem (1.1), we look for critical points of the functional defined on a suitable subspace of W s,p (R N ). Since f is only continuous, we can not apply the Nehari manifold arguments developed in [3] in which the authors considered the corresponding local problem to (1.1) under the assumptions f ∈ C 1 and (f 6 ) there exist C > 0 and σ ∈ (p, p * s ) such that f ′ (t)t 2 − (p − 1)f (t)t ≥ Ct σ for all t ≥ 0. To overcome this difficulty, we use some variants of critical point theorems due to Szulkin and Weth [41]. As usual, the presence of the fractional p-Laplacian operator makes our analysis more delicate and intriguing. In order to obtain multiple critical points, we employ a technique introduced by Benci and Cerami [12], which consists in making precise comparisons between the category of some sublevel sets of I ε and the category of the set M . 
Then, after proving that the levels of compactness are strongly related to the behavior of the potential V (x) at infinity (see Proposition 3.1), we can apply Ljusternik-Schnirelmann theory to deduce a multiplicity result for problem (1.1). Finally, we study the concentration of positive solutions u ε of (1.1). More precisely, we first adapt the Moser iteration technique [36] in the fractional setting (see Lemma 3.15 in Section 3) in order to obtain L ∞ -estimates (independent of ε) of u ε 's. Then, taking into account the Hölder estimates recently established in [27] for the fractional p-Laplacian, we can also deduce C 0,α -estimates of u ε uniformly in ε. These informations allow us to infer that u ε (x) decay at zero as |x| → ∞ uniformly in ε. Moreover, we prove that our solutions have a polynomial decay; see Remark 3.3. In the second part of this work we consider the following fractional problem involving the critical Sobolev exponent: with s ∈ (0, 1), 1 < p < ∞ and N ≥ sp 2 . In order to deal with the critical growth of the nonlinearity we assume that f verifies (f 1 ), (f 2 ), (f 3 ), (f 5 ), hypothesis (f 4 ) with ϑ ∈ (p, q), and the following technical condition: (f ′ 6 ) there exist q 1 ∈ (p, p * s ) and λ > 0 such that f (t) ≥ λt q 1 −1 for any t > 0. Then we are able to obtain our second result: Theorem 1.2. Let N ≥ sp 2 , and suppose that V satisfies (V ) and f satisfies (f 1 )-(f 5 ) and (f ′ 6 ). Then, for any δ > 0 there exists ε δ > 0 such that problem (1.2) has at least cat M δ (M ) positive solutions, for any 0 < ε < ε δ . Moreover, if u ε denotes one of these solutions and x ε ∈ R N its global maximum, then lim We note that Theorem 1.2 improves and extends, in the fractional setting, Theorem 1.1 in [23] in which the author assumed f ∈ C 1 . The approach developed in this case follows the arguments used to analyze the subcritical case. Anyway, this new problem presents an extra difficulty, due to the fact that the level of non-compactness is affected by the critical growth of the nonlinearity. To overcome this hitch, we adapt some calculations performed in [35] and using the optimal asymptotic behavior of p-minimizers established in [13] we are able to prove that the functional associated to (1.2) satisfies the Palais-Smale condition at every level , and D s,p (R N ) = u ∈ L p * s (R N ) : [u] W s,p (R N ) < ∞ . Let us point out that the restriction N ≥ sp 2 is crucial to apply Lemma 2.7 in [35] to estimate the L p -norm of p-minimizers; see Lemma 4.10 and Lemma 4.11 in Section 4. When p = 2, equations (1.1) and (1.2) become fractional Schrödinger equations of the type which has been widely investigated in the last decade: see [4,6,[8][9][10]20,22,24,28,39,40] and references therein. The study of (1.3) is strongly motivated by the seeking of standing waves solutions for the time-dependent fractional Schrödinger equation namely solutions of the form ψ(x, t) = u(x)e − ıEt ε , where E is a constant. Equation (1.4) is a fundamental equation of the fractional Quantum Mechanics in the study of particles on stochastic fields modelled by Lévy processes; see [29,30] for more physical background. In recent years there has been a surge of interest around nonlocal and fractional problems involving the fractional p-Laplacian operator, and several existence and regularity results have been established by many authors. For instance, Franzina and Palatucci [26] discussed some basic properties of the eigenfunctions of a class of nonlocal operators whose model is (−∆) s p (see also [31]). 
Mosconi et al. [35] used an abstract linking theorem based on the cohomological index to obtain nontrivial solutions to the Brezis-Nirenberg problem for the fractional p-Laplacian operator. Di Castro et al. [17] established interior Hölder regularity results for fractional p-minimizers (see also [27]). In [5] the first author obtained the existence of infinitely many solutions for a superlinear fractional p-Laplacian equation with sign-changing potential. Fiscella and Pucci [25] investigated the existence and asymptotic behavior of nontrivial solutions for some classes of stationary Kirchhoff problems driven by fractional integro-differential operators and involving a Hardy potential and different critical nonlinearities. Belchior et al. [11] studied the existence of ground state solutions for a fractional Choquard equation with the fractional p-Laplacian and involving subcritical nonlinearities. However, for what concerns the existence and multiplicity results for problems (1.1) and (1.2) with p = 2, the literature seems to be rather incomplete. The goal of this paper is to consider the question related to the existence and multiplicity of positive solutions for fractional Schrödinger equations with the fractional p-Laplacian when ε → 0. More precisely, we aim to extend the results obtained in [24] and [40] in which the authors dealt with equation (1.3) and involving nonlinearities with subcritical and critical growth respectively. Unfortunately, the operator (−∆) s p is not linear when p = 2, so more technical difficulties arise in the study of our problems. For instance, we can not make use of the s-harmonic extension by Caffarelli and Silvestre [15] commonly exploited in the recent literature to apply well-known variational techniques in the study of local degenerate elliptic problems. Moreover, the arguments used in the study of (1.3) (see for example [2,4,6,7,24,40]) seem not to be trivially adaptable in our context. Indeed, some appropriate technical lemmas (see Lemma 2.2, Lemma 2.6, and Lemma 4.8) will be needed to overcome the non-Hilbertian structure of the fractional Sobolev spaces W s,p (R N ) when p = 2. We would like to emphasize that, to our knowledge, this is the first time that the Ljusternik-Schnirelmann theory is applied to get multiple solutions for fractional Schrödinger equations in R N driven by the fractional p-Laplacian operator with p = 2, and involving nonlinearities with subcritical and critical growth. The paper is organized as follows: in Section 2 we collect some facts about the involved fractional Sobolev spaces and we provide some useful technical lemmas. In Section 3 we study the existence, multiplicity and concentration of solutions to (1.1) proving some convenient properties for the autonomous problem associated to (1.1). In Section 4 we consider critical problem (1.2) and the corresponding autonomous critical one. Preliminaries In this preliminary section we recall some facts about the fractional Sobolev spaces and we prove some technical lemmas which we will use later. Let 1 ≤ r ≤ ∞ and A ⊂ R N . We denote by |u| L r (A) the L r (A)-norm of a function u : R N → R belonging to L r (A). We define D s,p (R N ) as the closure of C ∞ c (R N ) with respect to . We begin recalling the following embeddings of the fractional Sobolev spaces into Lebesgue spaces. Theorem 2.1. [18] Let s ∈ (0, 1) and N > sp. Then there exists a sharp constant S * > 0 such that for any u ∈ D s,p (R N ) . 
Moreover W s,p (R N ) is continuously embedded in L q (R N ) for any q ∈ [p, p * s ] and compactly in L q loc (R N ) for any q ∈ [1, p * s ). Proceeding as in [22,39] we can prove the following compactness-Lions type result. Then, Hölder and Sobolev inequality yield, for every n ∈ N, that τ . Now, covering R N by balls of radius R, in such a way that each point of R N is contained in at most N + 1 balls, we find Exploiting (2.1) and the boundedness of {u n } in W s,p (R N ), we obtain that u n → 0 in L τ (R N ). By using an interpolation argument, we get the thesis. The lemma below provides a way to manipulate smooth truncations for the fractional p-Laplacian. Let us note that this result can be seen as a generalization of the second statement of Lemma 5 in [37] to the case of the space W s,p (R N ) with p = 2. Proof. Since φ r u → u a.e. in R N as r → ∞, 0 ≤ φ ≤ 1 and u ∈ L p (R N ), it follows by the Dominated Convergence Theorem that lim r→∞ |uφ r − u| L p (R N ) = 0. In what follows, we show the first relation of limit. Let us note that Taking into account that |φ r (x) − 1| ≤ 2, |φ r (x) − 1| → 0 a.e. in R N and u ∈ W s,p (R N ), the Dominated Convergence Theorem yields B r → 0 as r → ∞. Now, we aim to show that A r → 0 as r → ∞. Firstly, we observe that since R 2N can be written as We are going to estimate each integral in (2.2). By using 0 ≤ φ ≤ 1, |∇φ| ≤ 2 and by applying the Mean Value Theorem, we can see that Regarding the integral on X 3 r we obtain At this point, by using the Mean Value Theorem and noting that if (x, y) ∈ (R N \ B 2r (0)) × B 2r (0) and |x − y| ≤ r then |x| ≤ 3r, we get (2.6) Let us observe that for any K > 4 it holds Let us note that if (x, y) ∈ (R N \ B Kr (0)) × B r (0), then |x − y| ≥ |x| − |y| ≥ |x| 2 + K 2 r − 2r > |x| 2 , and by using Hölder inequality we can see that Therefore, taking into account (2.7) and (2.8), we get Putting together (2.2)-(2.6) and (2.9), we obtain Now, fixed ε > 0, we introduce the following fractional Sobolev space In view of assumption (V ) and Theorem 2.1, it is easy to check that the following result holds. Lemma 2.3. The space W ε is continuously embedded in W s,p (R N ). Therefore, W ε is continuously embedded in L r (R N ) for any r ∈ [p, p * s ] and compactly embedded in L r loc (R N ) for any r ∈ [1, p * s ). Moreover, when V is coercive, we get the following compactness lemma. Then W ε is compactly embedded in L r (R N ) for any r ∈ [p, p * s ). Proof. We argue as in [42]. Let r = p. By using Lemma 2.3 we know that W ε ⊂ L p (R N ). Let {u n } be a sequence such that u n ⇀ 0 in W ε . Then, u n ⇀ 0 in W s,p (R N ). Let us define M := sup n∈N u n ε < ∞. (2.10) Since V is coercive, for any η > 0 there exists R = R η > 0 such that Hence, for any n ≥ n 0 , by using (2.10)-(2.12), we have Therefore, u n → 0 in L p (R N ). For r > p, using the conclusion of r = p, interpolation inequality and Theorem 2.1, we can see that , which yields the conclusion as required. The next two results are technical lemmas which will be very useful in this work; their proofs are obtained following the arguments developed by Brezis and Lieb [14]. Proof. From the Brezis-Lieb Lemma [14] we know that if r ∈ (1, ∞) and {g n } ⊂ L r (R k ) is a bounded sequence such that g n → g a.e. in R k , then we have and taking in (2.13), we can see that Lemma 2.6. Let w ∈ D s,p (R N ) and {z n } ⊂ D s,p (R N ) be a sequence such that z n → 0 a.e. and [z n ] W s,p (R N ) ≤ C for any n ∈ N. Then we have and p ′ = p p−1 is the conjugate exponent of p. Proof. 
Firstly we consider the case p ≥ 2. We resemble some ideas in Lemma 3 in [1]. By using the Mean Value Theorem, Young inequality and p ≥ 2, we can see that for fixed ε > 0 there exists C ε > 0 such that (2.14) Taking N+sp p in (2.14), we obtain |(z n (x) + w(x)) − (z n (y) + w(y))| p−2 ((z n (x) + w(x)) − (z n (y) + w(y))) Let us define the function H ε,n : R 2N → R + by setting We can see that H ε,n → 0 a.e. in R 2N as n → ∞, and By using the Dominated Convergence Theorem, we have From the definition of H ε,n , we deduce that so we obtain and by the arbitrariness of ε we get the thesis. Now we deal with the case 1 < p < 2. By using Lemma 3.1 in [33], we know that we can conclude the proof in view of the Dominated Convergence Theorem. Lemma 2.7. Let {u n } be a sequence such that u n ⇀ u in W ε , and v n := u n − u. Then we have and sup Proof. We begin proving (2.15). Let us note that In view of (f 2 ) and (f 3 ), for any δ > 0 there exists C δ > 0 such that By using (2.17) with δ = 1 and (a + b) r ≤ C r (a r + b r ) for any a, b ≥ 0 and r ≥ 1, we can see that By applying the Young inequality ab ≤ ηa r + C η b r ′ with 1 r + 1 r ′ = 1 and η > 0 to the first and third term on the right hand side of (2.18), we can deduce that Then G η,n → 0 a.e. in R N as n → ∞, and 0 ≤ G η,n ≤ C ′ η (|u| p + |u| p * s ) ∈ L 1 (R N ). As a consequence of the Dominated Convergence Theorem, we get On the other hand, from the definition of G η,n , it follows that The arbitrariness of η ends the proof of (2.15). Now we prove (2.16). Arguing as in Lemma 5.7 in [19] we can find a subsequence {u n j } such that for all η > 0 there exists r η > 0 satisfying In view of Lemma 2.2 we can see that By using (2.20) we obtain On the other hand, from (2.19) and the definition ofũ j , it follows that Putting together (2.24) and (2.26), we can deduce that (2.23) holds true. Finally we verify that Noting that |Cã j ∩ W δ j | ≤ |W δ j | → 0 as j → ∞, we can argue as before to infer that there exists j 0 ∈ N such that In view of (2.20), we can find j 1 ∈ N such that j 1 ≥ j 0 and Taking into account (2.29), (2.31) and the boundedness of {v n j } we can see that for all j ≥ j 1 Now, we recall the following inequalities for all a, b ∈ R Assume p ∈ (1, 2]. By using |g(t)| ≤ C(1 + |t| q−p ), Hölder inequality and (2.31) we have When p > 2, we can deduce that From the above estimates, and using (2.30) and uniformly in w ε ≤ 1, which together with (2.28) yields (2.27). Subcritical case 3.1. Functional setting in the subcritical case. After a change of variable, we are led to consider the following problem Weak solutions of (P ε ) can be obtained as critical points of the functional On the other hand, hypothesis (f 5 ) implies that By using Lemma 2.3, it is easy to see that I ε ∈ C 1 (W ε , R) and its differential I ′ ε is given by for any u, ϕ ∈ W ε . Now, let us introduce the Nehari manifold associated to I ε , that is Let us note that I ε possesses a mountain pass geometry. Lemma 3.1. The functional I ε satisfies the following conditions: Choosing ξ ∈ (0, V 0 ), there exist α, ρ > 0 such that Since f is only continuous, the next results are very important because they allow us to overcome the non-differentiability of N ε . We begin proving some properties for the functional I ε . (iii) Without loss of generality, we may assume that u ε = 1 for each u ∈ K. For u n ∈ K, after passing to a subsequence, we obtain that u n → u ∈ S ε . 
Then, by using (f 4 ) and Fatou's Lemma, we can see that Under the assumptions of Lemma 3.2, for ε > 0 we have: By using (f 5 ), it is easy to verify the uniqueness of a such t u . (ii) By using (3.1) and Lemma 2.3 we can see that for any u ∈ N ε From the proof of (ii), we can see that Since W is compact, we can find u ∈ W such that u n → u in W ε and u n → u a.e. in R N . By using Lemma 3.2-(iii), we can deduce that I ε (t un u n ) → −∞ as n → ∞, which gives a contradiction because (f 4 ) implies that (iv) Let us define the mapsm ε : W ε \ {0} → N ε and m ε : S ε → N ε by settinĝ In view of (i)-(iii) and Proposition 3.1 in [41] we can deduce that m ε is a homeomorphism between S ε and N ε and the inverse of m ε is given by m −1 ε (u) = u u ε . Therefore N ε is a regular manifold diffeomorphic to S ε . (v) For ε > 0, t > 0 and u ∈ W ε \ {0}, we can see that (3.2) yields so we can find ρ > 0 such that I ε (tu) ≥ ρ > 0 for t > 0 small enough. On the other hand, by using (i)-(iii), we know (see [41]) that Now we introduce the following functionalsΨ ε : wherem ε (u) = t u u is given in (3.4). As in [41] we have the following result: Under the assumptions of Lemma 3.2, we have that for ε > 0: Moreover the corresponding critical values coincide and We conclude this section proving the following useful result. Proof. By using assumption (f 4 ) we have and being ϑ > p we get the thesis. 3.2. Autonomous subcritical problem. Let us consider the autonomous problem associated to The corresponding functional is given by Clearly, J µ ∈ C 1 (X µ , R) and its differential J ′ µ is given by for any u, ϕ ∈ X µ . Let us define the Nehari manifold associated to J µ , that is Arguing as in the previous section and using (3.6), it is easy to prove the following lemma. Lemma 3.6. Under the assumptions of Lemma 3.2, for µ > 0 we have: Now we define the following functionalsΨ µ : X µ \ {0} → R and Ψ µ : S µ → R by settinĝ Then we have the following result: Lemma 3.7. Under the assumptions of Lemma 3.2, we have that for µ > 0: Moreover the corresponding critical values coincide and Lemma 3.8. Let {u n } ⊂ N µ be a minimizing sequence for J µ . Then, {u n } is bounded and there exist a sequence {y n } ⊂ R N and constants R, β > 0 such that Proof. Arguing as in the proof of Lemma 3.5, we can see that {u n } is bounded in X µ . Now, in order to prove the latter conclusion of this lemma, we argue by contradiction. Assume that for any R > 0 it holds lim Since {u n } is bounded in X µ , from Lemma 2.1 it follows that Fix ξ ∈ (0, µ). By using J ′ µ (u n ), u n = 0, (3.1) and the fact that {u n } is bounded in X µ , we have . In view of (3.8), we can conclude that u n → 0 in X µ . Now, we prove the following useful compactness result for the autonomous problem. Proof. From (v) of Lemma 3.6, we know that c µ > 0 for each µ > 0. Moreover, if u ∈ N µ verifies J µ (u) = c µ , then m −1 µ (u) is a minimizer of Ψ µ and it is a critical point of Ψ µ . In view of Lemma 3.7, we can see that u is a critical point of J µ . Now we show that there exists a minimizer of J µ | Nµ . By applying Ekeland's variational principle [21] there exists a sequence {ν n } ⊂ S µ such that Ψ µ (ν n ) → c µ and Ψ ′ µ (ν n ) → 0 as n → ∞. Let u n = m µ (ν n ) ∈ N µ . Then, thanks to Lemma 3.7, J µ (u n ) → c µ and J ′ µ (u n ) → 0 as n → ∞. Therefore, arguing as in Lemma 3.5, {u n } is bounded in X µ and u n ⇀ u in W s,p (R N ). It is easy to check that J ′ µ (u) = 0. Now, we prove that J µ (u) = c µ . 
Assume u ≡ 0 and we aim to show that In fact, once proved the previous limit, we can use Lemma 2.5 to deduce that u n → u in X µ , and recalling that J µ (u n ) → c µ , we obtain the thesis. Now, we prove (3.9). Let us observe that Fatou's Lemma yields Let us note that Recalling that lim sup n→∞ (a n + b n ) ≥ lim sup n→∞ a n + lim inf n→∞ b n and ϑ > p, we can see that Fatou's Lemma, (3.11), (3.12) and J ′ µ (u) = 0 produce which gives a contradiction. Finally, we consider the case u = 0. Arguing as in the proof of Lemma 3.8, we can find a sequence {y n } ⊂ R N and constants R, β > 0 such that Set v n := u n (· + y n ). Then, by using the invariance by translations of R N , it is clear that {v n } is a (P S) cµ for J µ , {v n } ⊂ N µ and v n ⇀ v = 0 in W s,p (R N ). Thus, we can proceed as above to deduce that {v n } converges strongly in W s,p (R N ). Remark 3.2. Let us observe that the ground state obtained in Lemma 3.9 is positive. Indeed, which implies that u − = 0, that is u ≥ 0. Arguing as in Lemma 3.15 below, we can see that u ∈ L ∞ (R N ) ∩ C 0 (R N ), and by applying the maximum principle [16] we deduce that u > 0 in R N . 3.3. Existence result for (1.1). In this section we focus on the existence of a solution to (1.1) provided that ε is sufficiently small. Let us begin proving the following useful lemma. Lemma 3.10. Let {u n } ⊂ N ε be a sequence such that I ε (u n ) → c and u n ⇀ 0 in W ε . Then, one of the following alternatives occurs (a) u n → 0 in W ε ; (b) there are a sequence {y n } ⊂ R N and constants R, β > 0 such that Proof. Assume that (b) does not hold true. Then, for any R > 0 it holds Since {u n } is bounded in W ε , from Lemma 2.1 it follows that u n → 0 in L t (R N ) for any t ∈ (p, p * s ). Now, we can argue as in the proof of Lemma 3.8 to deduce that u n ε → 0 as n → ∞. In order to get a compactness result for I ε , we need to prove the following auxiliary lemma. Lemma 3.11. Assume that V ∞ < ∞ and let {v n } ⊂ N ε be a sequence such that Proof. Let {t n } ⊂ (0, +∞) be such that {t n v n } ⊂ N V∞ . Claim 1: We aim to prove that lim sup n→∞ t n ≤ 1. Assume by contradiction that there exist δ > 0 and a subsequence, still denoted by {t n }, such that t n ≥ 1 + δ for any n ∈ N. (3.14) Since In view of t n v n ∈ N V∞ , we also have Putting together (3.15) and (3.16) we obtain By hypothesis (V ) we can see that, given ζ > 0 there exists R = R(ζ) > 0 such that Now, taking into account v n → 0 in L p (B R (0)) and the boundedness of {v n } in W ε , we can infer that Thus, Since v n → 0 in W ε , we can apply Lemma 3.10 to deduce the existence of a sequence {y n } ⊂ R N , and two positive numbersR, β such that Let us considerv n = v n (x + y n ). From condition (V ) and the boundedness of {v n } in W ε , we can see that v n Taking into account that W s,p (R N ) is a reflexive Banach space, we may assume thatv n ⇀v in W s,p (R N ). By (3.19) there exists Ω ⊂ R N with positive measure and such thatv > 0 in Ω. By using (3.14), assumption (f 5 ) and (3.18) we can infer Letting the limit as n → ∞ and by applying Fatou's Lemma we obtain ζC for any ζ > 0, and this is a contradiction. Now, we distinguish the following cases: Case 1: Assume that lim sup n→∞ t n = 1. Thus there exists {t n } such that t n → 1. 
Recalling that Now, by using condition (V ), v n → 0 in L p (B R (0)), t n → 1, (3.17), and On the other hand, since {v n } is bounded in W ε , we can see that Hence, putting together (3.21), (3.22) and (3.23), we obtain At this point, we show that Indeed, by using the Mean Value Theorem and (3.1) we have and taking into account the boundedness of {v n } in W ε we get the thesis. Now, putting together (3.20), (3.24) and (3.25) we can infer that and passing to the limit as ζ → 0 we get d ≥ c V∞ . Case 2: Assume that lim sup n→∞ t n = t 0 < 1. Then there is a subsequence, still denoted by {t n }, such that t n → t 0 (< 1) and t n < 1 for any n ∈ N. Let us observe that Exploiting the facts that t n v n ∈ N V∞ , (3.3) and (3.26), we obtain Taking the limit as n → ∞ we get d ≥ c V∞ . At this point we are able to prove the following compactness result. Proof. It is easy to see that {u n } is bounded in W ε . Then, up to a subsequence, we may assume that u n ⇀ u in W ε , u n → u in L q loc (R N ) for any q ∈ [p, p * s ), u n → u a.e. in R N . Now, we prove that I ′ ε (v n ) = o n (1). By using Lemma 2.6 with z n = v n and w = u we get Arguing as in the proof of Lemma 3.3 in [33], we can see that Hence, by using Hölder inequality, for any ϕ ∈ W ε such that ϕ ε ≤ 1, it holds and in view of (2.16) of Lemma 2.7, (3.29), (3.30), I ′ ε (u n ) = 0 and I ′ ε (u) = 0 we obtain the thesis. Now, we note that by using (f 4 ) we can see that Since I ′ ε (v n ), v n = o n (1) and applying (3.32) we can infer that v n p ε = o n (1), which yields u n → u in W ε . We end this section giving the proof of the existence of a positive solution to (P ε ) whenever ε > 0 is small enough. Proof. From (v) of Lemma 3.3, we know that c ε ≥ ρ > 0 for each ε > 0. Moreover, if u ε ∈ N ε verifies I ε (u) = c ε , then m −1 ε (u) is a minimizer of Ψ ε and it is a critical point of Ψ ε . In view of Lemma 3.4 we can see that u is a critical point of I ε . Now we show that there exists a minimizer of I ε | Nε . By applying Ekeland's variational principle [21] there exists a sequence {v n } ⊂ S ε such that Ψ ε (v n ) → c ε and Ψ ′ ε (v n ) → 0 as n → ∞. Let u n = m ε (v n ) ∈ N ε . Then, from Lemma 3.4 we deduce that I ε (u n ) → c ε , I ′ ε (u n ), u n = 0 and I ′ ε (u n ) → 0 as n → ∞. Therefore, {u n } is a Palais-Smale sequence for I ε at level c ε . It is standard to check that {u n } is bounded in W ε and we denote by u its weak limit. It is easy to verify that I ′ ε (u) = 0. Let us consider V ∞ = ∞. By using Lemma 2.3 we have I ε (u) = c ε and I ′ ε (u) = 0. Now, we deal with the case V ∞ < ∞. In view of Proposition 3.1 it is enough to show that c ε < c V∞ for small ε. Without loss of generality, we may suppose that Let µ ∈ R such that µ ∈ (V 0 , V ∞ ). Clearly c V 0 < c µ < c V∞ . By Lemma 3.9, it follows that there exists a positive ground state w ∈ W s,p (R N ) to the autonomous problem (P µ ). Let η r ∈ C ∞ c (R N ) be a cut-off function such that η r = 1 in B r (0) and η r = 0 in B c 2r (0). Let us define w r (x) := η r (x)w(x), and take t r > 0 such that J µ (t r w r ) = max t≥0 J µ (tw r ). Now we prove that there exists r sufficiently large for which J µ (t r w r ) < c V∞ . Assume by contradiction J µ (t r w r ) ≥ c V∞ for any r > 0. Taking into account w r → w in W s,p (R N ) as r → ∞ in view of Lemma 2.2, t r w r and w belong to N µ and by using assumption (f 5 ), we have that t r → 1. Therefore, which leads to a contradiction being c V∞ > c µ . 
Hence, there exists r > 0 such that J µ (τ (t r w r )) and J µ (t r w r ) < c V∞ . (3.33) Now, condition (V ) implies that there exists ε 0 > 0 such that Therefore, by using (3.33) and (3.34), we deduce that for all ε ∈ (0, ε 0 ) which implies that c ε < c V∞ for any ε > 0 sufficiently small. 3.4. Multiple solutions for (1.1). This section is devoted to the study of the multiplicity of solutions to (1.1). We begin proving the following result which will be needed to implement the barycenter machinery. Proof. Since I ′ εn (u n ), u n = 0 and I εn (u n ) → c V 0 , we know that {u n } is bounded in W ε . From c V 0 > 0, we can infer that u n εn → 0. Therefore, as in the proof of Lemma 3.10, we can find a sequence {ỹ n } ⊂ R N and constants R, β > 0 such that Let us define v n (x) := u n (x +ỹ n ). In view of the boundedness of {u n } and (3.35) we may assume that v n ⇀ v in W s,p (R N ) for some v = 0. Let {t n } ⊂ (0, +∞) be such that w n = t n v n ∈ N V 0 , and we set y n := ε nỹn . Thus, by using the change of variables z → x +ỹ n , V (x) ≥ V 0 and the invariance by translation, we can see that Then we can infer J V 0 (w n ) → c V 0 . This fact and {w n } ⊂ N V 0 imply that there exists K > 0 such that w n V 0 ≤ K for all n ∈ N. Moreover, we can prove that the sequence {t n } is bounded. In fact, v n → 0 in W s,p (R N ), so there exists α > 0 such that v n V 0 ≥ α. Consequently, for all n ∈ N, we have |t n |α ≤ t n v n V 0 = w n V 0 ≤ K, which yields |t n | ≤ K α for all n ∈ N. Therefore, up to a subsequence, we may suppose that t n → t 0 ≥ 0. Let us show that t 0 > 0. Otherwise, if t 0 = 0, from the boundedness of {v n }, we get w n = t n v n → 0 in W s,p (R N ), that is J V 0 (w n ) → 0 in contrast with the fact c V 0 > 0. Thus t 0 > 0, and up to a subsequence, we may assume that w n ⇀ w : From Lemma 3.9, we deduce that w n → w in W s,p (R N ), that is v n → v in W s,p (R N ). Now, we show that {y n } has a subsequence such that y n → y ∈ M . Assume by contradiction that {y n } is not bounded, that is there exists a subsequence, still denoted by {y n }, such that |y n | → +∞. Firstly, we deal with the case V ∞ = ∞. By using {u n } ⊂ N εn and a change of variable, we can see that By applying Fatou's Lemma and v n → v in W s,p (R N ), we deduce that which gives a contradiction. Let us consider the case V ∞ < ∞. Taking into account w n → w strongly in W s,p (R N ), condition (V ) and using the change of variable z = x +ỹ n , we have which is an absurd. Thus {y n } is bounded and, up to a subsequence, we may assume that y n → y. If y / ∈ M , then V 0 < V (y) and we can argue as in (3.36) to get a contradiction. Therefore, we can conclude that y ∈ M . At this point, we introduce a subset N ε of N ε by taking a function h : R + → R + such that h(ε) → 0 as ε → 0, and setting Fixed y ∈ M , from Lemma 3.12 we deduce that h(ε) = |I ε (Φ ε (y)) − c V 0 | → 0 as ε → 0. Hence Φ ε (y) ∈ N ε , and N ε = ∅ for any ε > 0. Moreover, we have the following lemma. Thus, recalling that {u n } ⊂ N εn ⊂ N εn , we deduce that which implies that I εn (u n ) → c V 0 . By using Proposition 3.2, there exists {ỹ n } ⊂ R N such that y n = ε nỹn ∈ M δ for n sufficiently large. Thus Since u n (·+ỹ n ) converges strongly in W s,p (R N ) and ε n z + y n → y ∈ M , we can infer that β εn (u n ) = y n + o n (1), that is (3.45) holds. Now we show that (P ε ) admits at least cat M δ (M ) positive solutions. 
In order to achieve our aim, we recall the following result for critical points involving Ljusternik-Schnirelmann category. For more details one can see [32]. Theorem 3.2. Let U be a C 1,1 complete Riemannian manifold (modelled on a Hilbert space). Assume that h ∈ C 1 (U, R) bounded from below and satisfies −∞ < inf U h < d < k < ∞. Moreover, suppose that h satisfies Palais-Smale condition on the sublevel {u ∈ U : h(u) ≤ k} and that d is not a critical level for h. Then Since N ε is not a C 1 submanifold of W ε , we can not directly apply Theorem 3.2. Fortunately, from Lemma 3.3, we know that the mapping m ε is a homeomorphism between N ε and S ε , and S ε is a C 1 submanifold of W ε . So we can apply Theorem 3.2 to Ψ ε (u) = I ε (m ε (u))| Sε = I ε (m ε (u)), where Ψ ε is given in Lemma 3.4. 3.5. Concentration of solutions to (1.1). Let us prove the following result which will play a fundamental role to study the behavior of maximum points of solutions to (1.1). Then v n ∈ L ∞ (R N ) and there exists C > 0 such that |v n | L ∞ (R N ) ≤ C for all n ∈ N. Moreover, lim |x|→∞ v n (x) = 0 uniformly in n ∈ N. Proof. For any L > 0 and β > 1, let us consider the function where v L,n = min{v n , L}. Let us observe that, since γ is an increasing function, it holds Define the functions Fix a, b ∈ R such that a > b. Then, from the above definitions and applying Jensen inequality we get In similar fashion, we can prove that the above inequality is true for any a ≤ b. Thus we can infer that In particular, by (3.47) it follows that as test-function in (3.46), in view of (3.48) we have Since Γ(v n ) ≥ 1 β v n v β−1 L,n , from the Sobolev inequality we can deduce that On the other hand, from assumptions (f 2 )-(f 3 ), we know that for any ξ > 0 there exists C ξ > 0 such that Choosing ξ ∈ (0, V 0 ), and using (3.50) and (3.51), we can see that (3.49) yields where w L,n := v n v β−1 L,n . Now, we take β = p * s p and fix R > 0. Observing that 0 ≤ v L,n ≤ v n , we can deduce that Since v n → v in W s,p (R N ), we can see that for any R sufficiently large Putting together (3.52), (3.53) and (3.55) we get and taking the limit as L → ∞, we obtain v n ∈ L (p * s ) 2 p (R N ). Now, using 0 ≤ v L,n ≤ v n and by passing to the limit as L → ∞ in (3.52), we have from which we deduce that . Let us define . By using an iteration argument, we can find C 0 > 0 independent of m such that Taking the limit as m → ∞ we get |v n | L ∞ (R N ) ≤ K for all n ∈ N. Moreover, by using Corollary 5.5 in [27], we can deduce that v n ∈ C 0,α (R N ) for some α > 0 (independent of n) and [v n ] C 0,α (R N ) ≤ C, with C independent of n. Since v n → v in W s,p (R N ), we can infer that lim |x|→∞ v n (x) = 0 uniformly in n ∈ N. Remark 3.3. We can also provide a more precise estimate on the decay of v n at infinity. Indeed, by using (f 2 ) and lim |x|→∞ v n (x) = 0, we can see that there exists By using Theorem A.4 in [13], we know that Γ(x) = |x| − N−sp p−1 is a weak solution to for all r > 0. In view of the continuity of v n and Γ, there exists C 1 > 0 such that w n (x) = v n (x) − C 1 Γ(x) ≤ 0 for all |x| = R (with R larger if necessary). Taking φ = max{w n , 0} ∈ W s,p 0 (B c R (0)) as test function in (3.56) and using (3.57) withΓ = C 1 Γ, we can deduce that Therefore, if we prove that To achieve our purpose, we first note that for all a, b ∈ R it holds Taking b = v n (x) − v n (y) and a = Γ(x) − Γ(y) we can see that where I(x, y) ≥ 0 stands for the integral. 
Since we can infer that (|b| p−2 b − |a| p−2 a)(φ(x) − φ(y)) ≥ 0, that is (3.59) holds true. As a consequence, we can conclude that v n (x) ≤ C|x| Proof. Assume by contradiction that |v n | L ∞ (R N ) → 0 as n → ∞. By using (f 2 ), there exists n 0 ∈ N such that < V 0 2 for all n ≥ n 0 . Therefore, in view of (f 5 ) we can see that which is impossible. Now, we end this section studying the behavior of maximum points of solutions to (1.1). If u εn is a solution to (P εn ), then v n (x) = u εn (x +ỹ n ) is a solution to (3.46). Moreover, up to subsequence, v n → v in W s,p (R N ) and y n = ε nỹn → y ∈ M in view of Proposition 3.2. If p n denotes a global maximum point of v n , we can use Lemma 3.15 and Lemma 3.16 to see that p n ∈ B R (0) for some R > 0. As a consequence, the point of maximum of u εn is of the type z εn = p n +ỹ n and then ε n z εn = ε n p n + ε nỹn → y because {p n } is bounded. This fact and the continuity of V yield V (ε n z εn ) → V (y) = V 0 as n → ∞. Critical case 4.1. Functional setting in the critical case. In this section we deal with critical problem (1.2). Since many calculations are adaptations to that presented in the early sections, we will emphasize only the differences between the subcritical and the critical case. By using a change of variable we consider the following problem The functional associated to (P * ε ) is given by which is well defined on W ε . Let us introduce the Nehari manifold associated to I ε , that is Arguing as in Section 3.1 we can prove that the following lemmas hold true. Lemma 4.1. The functional I ε satisfies the following conditions: (i) there exist α, ρ > 0 such that I ε (u) ≥ α with u ε = ρ; (ii) there exists e ∈ W ε with e ε > ρ such that I ε (e) < 0. (i) for all u ∈ S ε , there exists a unique t u > 0 such that t u u ∈ N ε . Moreover, m ε (u) = t u u is the unique maximum of I ε on W ε , where S ε = {u ∈ W ε : u ε = 1}. (ii) The set N ε is bounded away from 0. Furthermore N ε is closed in W ε . (iii) There exists α > 0 such that t u ≥ α for each u ∈ S ε and, for each compact subset W ⊂ S ε , there exists C W > 0 such that t u ≤ C W for all u ∈ W . (iv) For each u ∈ N ε , m −1 ε (u) = u u ε ∈ N ε . In particular, N ε is a regular manifold diffeomorphic to the sphere in W ε . Then we have the following result: Lemma 4.4. Under the assumptions of Lemma 4.2, we have that for ε > 0: (i) Ψ ε ∈ C 1 (S ε , R), and Finally, it is easy to prove that Autonomous critical problem. Let us consider the following autonomous critical problem with N ≥ sp 2 . The functional associated to the above problem is defined as , and the Nehari manifold associated to J µ is given by It is standard to check that J µ has a mountain pass geometry. Moreover we have the following useful results: Lemma 4.6. Under the assumptions of Lemma 4.2, for µ > 0 we have: (i) for all u ∈ S µ , there exists a unique t u > 0 such that t u u ∈ N µ . Moreover, m µ (u) = t u u is the unique maximum of J µ on W ε , where S µ = {u ∈ X µ : u µ = 1}. (ii) The set N µ is bounded away from 0. Furthermore N µ is closed in X µ . (iii) There exists α > 0 such that t u ≥ α for each u ∈ S µ and, for each compact subset W ⊂ S µ , there exists C W > 0 such that t u ≤ C W for all u ∈ W . (iv) N µ is a regular manifold diffeomorphic to the sphere in X µ . (v) c µ = inf Nµ J µ > 0 and J µ is bounded below on N µ by some positive constant. 
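Two displays introducing the critical Sobolev data used in the next paragraphs were also lost in extraction. A hedged reconstruction, consistent with the usual notation for the critical fractional $p$-Laplacian problem, is that $S_*$ denotes the best Sobolev constant and $U_\varepsilon$ the rescaled extremal,
\[ S_* := \inf_{u \in D^{s,p}(\mathbb{R}^N)\setminus\{0\}} \frac{[u]_{s,p}^p}{|u|_{L^{p^*_s}(\mathbb{R}^N)}^p}, \qquad U_\varepsilon(x) := \varepsilon^{-\frac{N-sp}{p}}\, U\!\left(\frac{x}{\varepsilon}\right), \]
where $U \in D^{s,p}(\mathbb{R}^N)$ solves $(-\Delta)^s_p U = U^{p^*_s-1}$ in $\mathbb{R}^N$. With this reading, the mountain pass threshold appearing below is $c_\mu < \frac{s}{N}\, S_*^{\frac{N}{sp}}$.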
Then we obtain the following result: (i) Ψ µ ∈ C 1 (S µ , R), and In order to obtain the existence of a nontrivial solution to the autonomous critical problem, we need to prove the following fundamental result. In particular c µ < s N S N sp * . Before giving the proof of the above lemma, we recall some facts which will be crucial to estimate the mountain pass level c µ . For any ε > 0, let us define where U ∈ D s,p (R N ) is a solution to As showed in [13], we know that U ∈ L ∞ (R N ) ∩ C 0 (R N ) is a positive, radially symmetric and decreasing function with We also have the following interesting estimates: [13] There exist constants c 1 , c 2 > 0 and θ > 1 such that for all r ≥ 1, and Let θ be the universal constant in Lemma 4.9 that depends only on N, p and s. For ε, δ > 0, set Let us observe that g ε,δ and G ε,δ are nondecreasing and absolutely continuous functions. Now, we consider the radially symmetric nonincreasing function which, in view of the definition of G ε,δ , satisfies We recall the following useful estimates established in Lemma 2.7 in [35]: Lemma 4.10. There exists C = C(N, p, s) > 0 such that for any ε ≤ δ 2 the following estimates hold In what follows, we prove an upper bound for the L p -norm of u ε,δ : Lemma 4.11. There exists a constant C = C(N, p, s) > 0 such that for any ε ≤ δ Proof. Firstly, we consider the case N > sp 2 . Let us observe that from the definition of u ε,δ it follows that u p ε,δ dx =: I + II. (4.4) Now we estimate the two integrals on the right hand side of (4.4). By using a change of variable, Lemma 4.9 and the fact that ε ≤ δ 2 , we can infer that where C is a positive constant. Since U ε is radially nonincreasing, for any δ ≤ r ≤ θδ, we have By using the definition of U ε , δ ε ≥ 2 and Lemma 4.9 we obtain (4.6) Putting together (4.4)-(4.6) we get the thesis. Let us consider the case N = sp 2 . Then, we can see that Therefore, being log( δ ε ) ≥ log(2) if ε ≤ δ 2 , we can conclude that Thus, recalling that for C, D > 0 it holds for all t ≥ 0, and using (4.8), we can see that Now, in view of the following elementary inequality and gathering the estimates in Lemma 4.10 and Lemma 4.11, we get Hence, if N > sp 2 , we deduce that q 1 > p > N (p−1) N −sp and by using Lemma 4.12, we have provided that ε > 0 is sufficiently small. When N = sp 2 , we get q 1 > p = N (p−1) N −sp , and in view of Lemma 4.12 we obtain Observing that q 1 > p yields lim ε→0 ε sp 2 −s(p−1)q 1 ε sp (1 + log(1/ ε)) = ∞, we again get the conclusion for ε small enough. Now, we prove the following lemma. Proof. It is easy to check that {u n } is bounded in X µ . Now, we assume that for any R > 0 it holds From the boundedness of {u n } and Lemma 2.1 it follows that u n → 0 in L r (R N ) for any r ∈ (p, p * s ). (4.11) By using (3.1), (3.2) and (4.11) we deduce that Since V (x) ≥ V 0 and {u n } is bounded in X µ , we can pass to the limit as ξ → 0 in (4.12) and (4.13) to see that R N f (u n )u n dx = o n (1) and R N Moreover, we use Lemma 4.13 instead of Lemma 3.8. 4.3. Existence result for the critical case. Arguing as in Lemma 4.13 we can prove the "critical" version of Lemma 3.10. Lemma 4.15. Let d < s N S N sp * and let {u n } ⊂ N ε be a sequence such that I ε (u n ) → d and u n ⇀ 0 in W ε . Then, one of the following alternatives occurs (a) u n → 0 in W ε ; (b) there are a sequence {y n } ⊂ R N and constants R, β > 0 such that The next result can be obtained following the lines of the proof of Lemma 3.11. Proof. 
Since $I_\varepsilon(u_n) \to c$ and $I'_\varepsilon(u_n) = 0$, we can see that $\{u_n\}$ is bounded in $W_\varepsilon$ and, up to a subsequence, we may assume that $u_n \rightharpoonup u$ in $W_\varepsilon$. Clearly, $I'_\varepsilon(u) = 0$. Now, let $v_n = u_n - u$. By using the Brezis-Lieb Lemma [14] and Lemma 3.3 in [33], we know that Since $\{v_n\}$ is bounded in $W_\varepsilon$, we may assume that $\|v_n\|_\varepsilon^p \to \ell$ and $|v_n|_{L^{p^*_s}(\mathbb{R}^N)}^{p^*_s} \to \ell$ for some $\ell \ge 0$. Let us show that $\ell = 0$. If, by contradiction, $\ell > 0$, then by using the fact that $I_\varepsilon(v_n) = d + o_n(1)$ we get Taking the limit as $n \to \infty$ we have that $\frac{s}{N}\,\ell = d$, that is, $\ell = \frac{N}{s}\, d$. Therefore we get a contradiction. Hence $\ell = 0$ and $u_n \to u$ in $W_\varepsilon$. Finally, we have the existence result for problem (1.2) for $\varepsilon > 0$ small enough. has a subsequence which converges in $W^{s,p}(\mathbb{R}^N)$. Moreover, up to a subsequence, $\{y_n\} := \{\varepsilon_n \tilde{y}_n\}$ is such that $y_n \to y \in M$. For any $\delta > 0$, let $\rho > 0$ be such that $M_\delta \subset B_\rho(0)$. Let $\chi : \mathbb{R}^N \to \mathbb{R}^N$ be defined as Let us consider the barycenter map $\beta_\varepsilon : \mathcal{N}_\varepsilon \to \mathbb{R}^N$ given by Arguing as in the proof of Lemma 3.13 we can prove the following result.
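The displays defining $\chi$ and the barycenter map were also lost. A plausible reconstruction, assuming the standard truncation-and-barycenter construction used in Ljusternik-Schnirelmann arguments of this type (with $\rho$ as fixed above), is
\[ \chi(x) := \begin{cases} x, & |x| \le \rho,\\[2pt] \dfrac{\rho\, x}{|x|}, & |x| > \rho, \end{cases} \qquad\quad \beta_\varepsilon(u) := \frac{\displaystyle\int_{\mathbb{R}^N} \chi(\varepsilon x)\, |u(x)|^{p}\, dx}{\displaystyle\int_{\mathbb{R}^N} |u(x)|^{p}\, dx}. \]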
13,896.4
2017-09-12T00:00:00.000
[ "Mathematics" ]
FORMATION OF CUBE TEXTURE IN NOMINALLY PURE ALUMINUM WITH FINE PARTICLE DISPERSION An X-ray method to observe in-situ cube grain growth during recrystallization has been devised using a hot stage to measure the growth kinetics. Complementary studies, using electron channelling contrast, electron backscattered pattern and X-ray textural analysis, revealed that a specific thermal-mechanical history can precipitate out Fe solutes such that the matrix is sufficiently pure to undergo continuous recrystallization, even though a fine distribution of precipitates is initially formed. These precipitates control, via Zener drag, the grain size upon complete recrystallization. On the other hand, 2 to 3 ppm (atomic) of Fe is sufficient to impose discontinuous recrystallization, where the final grain size is controlled by grain impingement. The apparent activation energies for grain growth during recrystallization can be separated into two categories: the one for continuous recrystallization is smaller than that for Fe diffusion in Al, and the one for the discontinuous process is larger. Examination of the volume fraction of each textural component reveals that during recrystallization the amount of cube texture growth correlates best with the decrease of the brass component. The increase in volume fraction of cube texture during recrystallization corresponds very well to the hot stage observations of cube grain growth. INTRODUCTION The contributions of Hsun Hu to the field of texture and recrystallization in research and practice can only be partially recognized by allusions to his papers by all the authors in this commemorative volume. Hu's (1952) early pole figure determinations are found in standard texts of X-ray methods (Cullity, 1978), and the caveat that the surface texture can differ from that of the mid-plane is attributed to his early study. The recent text by Humphreys and Hatherly (1995) points out that many contradicting conclusions can be drawn from specific experiments where the role of the alloy chemistry is not determined or determinable. In the present paper we wish to examine a new quantitative technique to reveal the role of composition and the deformed microstructure on recrystallization and cube grain growth. This important topic was initially reviewed by Hu (1967) in the first ICOTOM meeting. Design of Experiment This preliminary report is based on a unique method to monitor in-situ cube grain growth while the specimen is annealed in a programmable hot stage mounted in a Huber diffractometer incorporating the Euler cradle with a gap. The precision of this system is excellent, such that residual stresses during temperature cycling can be determined (Clarke, Saimoto and Ho, 1994; Clarke and Saimoto, 1995). The dynamic data were supplemented by prior and post observations using orientation distribution function (ODF) texture analysis from X-rays, electron channelling contrast (ECC) in the scanning electron microscope (SEM) and electron back-scattered pattern analysis (EBSP). These latter techniques and their usefulness in recrystallization studies have been recently discussed by Woldt and Juul-Jensen (1995). Moreover, a novel technique to determine the solvus for Al-Cr (Diak, Whitehead and Saimoto, 1994) was used to determine the amount of Fe in solution for the starting material.
This measurement permits an estimate of the volume fraction (fv) of the intermetallic precipitates present, and the values are found in Table 1. For the current purpose, Al3Fe is assumed although Al6Fe is a possibility. Furthermore, an approximate precipitate size can be determined from the critical Zener drag condition for a given grain size after complete recrystallization. For the study of recrystallization of these sheets, the role of tool contact on the rolled sheet and its effect on texture evolution is important (Li, Saimoto and Sang, 1994). However, it is impractical to measure the reactions at just the mid-plane since the resultant cube texture is highly dependent upon regions nearer to the surface (Saimoto, 1986). To quantitatively embody the role of the variation in through-thickness texture, a phenomenon first appreciated by Hsun Hu (1952), pole figures were analyzed using monoclinic symmetry, as described earlier (Saimoto et al., 1993). This method retains during the ODF analysis the asymmetry of the X-ray intensities attributable to the unequal occurrence of the S1, S2 and Cu1, Cu2 texture components. EXPERIMENTAL PROCEDURES A nominally pure aluminum ingot 435 mm thick was prepared with impurity composition in ppm (wt) of 19Si-16Fe-24Cu-3Mn-2Cr-13Zn. The outer 10 mm surface layer was scalped from each side of the ingot and then it was homogenized at 610 °C for 10 hours. The ingot was then hot rolled down to 8 mm thickness in the temperature range of 863 to 523 K (590 to 250 °C). The 8 mm thick plate was divided into two parts, A and B, and each was given a different annealing schedule. Annealing of sample A consisted of 10 hours at 608 K (335 °C), whereas annealing of sample B consisted of 1 hour at 608 K (335 °C) followed by 1 hour at 673 K (400 °C). The procedure described by Diak et al. (1994) was used to measure the solute Fe content (Diak and Saimoto), which is listed in Table 1. As expected, sample A has a considerably smaller amount of Fe in solution than B. This difference is reflected in the recrystallized grain size measurements following the post hot-roll treatment (Table 1). Samples A and B were then both cold rolled from 8 mm down to a final thickness of 0.35 mm. Pieces approximately 22 × 14 mm were cut from continuous strips parallel to the rolling direction such that the surface embossing effects were as identical as possible from specimen to specimen. To permit ECC observation and to remove the major surface damage (Li, Saimoto and Sang, 1994), 3 to 4 µm was electropolished from the surface using a dilute perchloric acid-alcohol solution at about 263 K (−10 °C). After subsequent anneals in air in the in-situ hot stage, SEM ECC image resolution was not detectably different. This condition is expected from the fact that aluminum oxide coatings thicken by only a factor of 2 upon heating to temperatures of 613 K (340 °C) (Olefjord and Karlson, 1986). The JEOL 840 SEM was used for all the ECC observations with a working distance of 10 mm and an acceleration voltage of 10 kV. The EBSP were obtained using a LINK camera, and the patterns were analyzed using the software purchased from N-H. Schmidt (Randers, Denmark). The texture and hot-stage goniometer utilized an Euler cradle with a gap, and its performance is described by Clarke and co-workers (1994, 1995). Four incomplete X-ray pole figures were taken using Cr radiation, and the ODF with monoclinic symmetry was analyzed using the modified Van Houtte software (Saimoto et al., 1993).
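As a rough illustration of the Zener-drag estimate mentioned at the start of this section, the sketch below inverts the classical Zener relation, D_limit = 4r/(3 fv), to back out an approximate pinning-particle radius from a limiting grain size and a precipitate volume fraction. The numerical values are placeholders rather than the Table 1 data, and the simple 4/(3 fv) prefactor is only one of several forms used in the literature.

# Hedged sketch (not from the paper): invert the classical Zener relation
# D_limit = 4*r / (3*f_v) to estimate the pinning-particle radius r from a
# measured limiting (fully recrystallized) grain diameter and the precipitate
# volume fraction. All numbers below are illustrative placeholders.

def zener_particle_radius(grain_diameter_um, volume_fraction):
    """Particle radius (micrometres) implied by Zener pinning for a limiting grain diameter."""
    return 3.0 * volume_fraction * grain_diameter_um / 4.0

f_v = 2.0e-4        # assumed Al3Fe volume fraction (placeholder, not Table 1)
d_limit = 20.0      # assumed recrystallized grain diameter in micrometres (placeholder)
r_um = zener_particle_radius(d_limit, f_v)
print("Estimated particle radius: %.1f nm" % (r_um * 1.0e3))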
All the anneals were performed in-situ in the programmable hot stage such that the rates of heating and isothermal holding times were recorded simultaneously with the (111) or (200) peak scans, which took about 40-50 seconds. For the as-received cold-rolled sheets, the primary beam cross-section at the specimen was about 1 x 4 mm2. Using this spot size and the (111) reflection, the maximum intensities were located using the + Z angle drive of the diffractometer. After heating to beyond 523 K (250C) where some cube grains were detectable, the width of the primary beam was increased so that an area 4 x 8 mm was irradiated for the (200) reflection. Since the Z angle is near zero for maximum intensity, this widening of the slit improved the counting statistics of the partially recrystallized structure and the large grains which resulted after grain growth. All the quoted kinetic studies were measured in this way except for Figure 1, which used the narrower slit. RESULTS To initially examine the practicality of studying the kinetics of cube texture growth in the hot stage, the as-rolled sheets without electropolishing were annealed with heating programs shown in Figure 1. The dramatic difference between samples A and B is obvious. The A sample appear to depict Zener drag characteristics, whereas B samples show grain impingement. Electropolishing was essential to remove the damaged layer and permit ECC observation, but an unexpected feature manifested itself. The position of maximum cube texture appears to vary in both Z and (rotation about specimen normal) angles. In fact some peaks decrease with annealing time, whereas others increase. Thus some searching is necessary to locate the stable orientations which grow in order to measure the growth kinetics. The measured solute content and grain sizes of the starting 8 mm plate are given in ). Observations of identical regions using ECC did not show any indication of grain boundary migration in A samples till 673 K (400C) when some preferential grain growth was observed in keeping with discontinuous growth. Thus the annealing characteristics of the starting A material is consistent with the cold-worked and recrystallized grain growth pattern depicted in Figure 1. Metallographic Examination In Figure 2, the micrographs of the identical area which were relocated by fiducial markers are shown for the sequences; as-rolled, annealed at 523 K (250C) for 40 minutes, and annealed at 543 K (270C) for 20 minutes. The heating programs for all specimens are listed in Table 2. The banded effect is due to the elongation of original grains parallel to the RD. EBSP examination showed that some of the small grains ( Figure 3a) less than 5 pm were recrystallized whereas the corrugated structure appear to be recovered with undulating orientation about a mean parent (deformed) orientation ( Figure 3b). These structures were found within the different bands in Figure 2b and appear to be highly dependent on the initial orientation of each grain. However, in A samples these recrystallization resistant grains were in the minority. An initial fast recrystallization stage with grain sizes below 10 pan was recently noted by Furu and Nes (1993). Annealing a different A specimen at 563 K (290C) for 20 minutes ( Figure 4) shows almost complete recrystallization with the largest grains not more than twice of that after anneals at 543 K (270C) (Figure 2c). 
Thus the characteristics of recrystallization of the A samples appear to be: numerous recrystallization sites within each grain, the growth of which is retarded by the fine particle distribution; and, although quantitative statistical analysis has not been completed, grain growth in the 523 K (250 °C) to 563 K (290 °C) range that is primarily driven by larger grains devouring the finer ones formed during the initial fast recrystallization stage, that is, the driving force is grain boundary energy. On the other hand, the B samples after 40 minutes at 533 K (260 °C) (Figure 5a) show very little recrystallization, but recrystallization is almost complete after 40 minutes at 568 K (295 °C) (Figure 5b). A different B specimen showed that the growing grains, which originate near the grain boundaries of the prestrained structure, are much larger during the partially recrystallized stage, and the corrugated structure is much more prevalent than in A samples. Moreover, in Figure 6, from an area adjoining that in Figure 5b, EBSP analysis shows that even these remnant portions retain the recovered structure rather than a fine-grain recrystallized one. Thus the characteristics of recrystallization of the B sample appear to be: preferential nucleation around prior grain boundaries, and grain growth in the 533 to 568 K (260 to 295 °C) range that occurs by transforming the recovered (deformed) grains, that is, the driving force is the stored work in the recovered original grains. The above clearly indicates that the purer A matrix undergoes continuous recrystallization whereas the less pure B one undergoes discontinuous recrystallization. Evidence for continuous recrystallization in 99.996% Al was also found by Rosen et al. (1993) after cold rolling to 50% reduction and annealing at 473 and 523 K (200 °C and 250 °C). ODF Analysis For the sake of space conservation, ODF diagrams are not presented, but the analytical results are presented in the form of volume fractions of texture components, as described by Hirsch and Lücke (1985) and listed in Table 2, using a Gaussian spread angle of 11°. The remarkable feature of the as-rolled texture (near the surface) in both A and B samples is that the copper texture components, Cu1 and Cu2, are very low compared to the deformed texture S2 {123}<634> and the brass component, Bs. On the other hand, Hirsch and Lücke (1985) found that the volume fractions of texture components from the mid-plane of pure Al, calculated using orthorhombic symmetry, were S, Cu, Bs in decreasing order. This difference may be due to the fact that the present measurements were purposely taken near the surface, where the aforementioned asymmetry of deformation increases as the plane of examination moves away from the mid-plane of the sheet. The large divergence in the S and S2 components is attributed to this fact. For the anneal of 40 min at 523 K (250 °C), continuous recrystallization was almost complete, in agreement with Rosen et al. (1993), but cube texture did not start to grow until 543 K (270 °C). Similar observations have been made in high-purity Al at lower temperatures (Heller, Slakhorst and Verbraak, 1977). Figure 7 is a collective plot of the volume fraction of the cube texture component for both A and B samples. Although this graph is not a true isochronal plot, it depicts the primary recrystallization process and supplements Table 2. The S components appear to decrease continuously, but a large portion is retained even upon heating to 693 K (420 °C).
If comparison is made between specimens A4 and A6 at 653 and 663 K (380 and 390C), respectively (Table 2), the cube component does not correlate to the large differences in $2. The large amount of S component retained after recrystallization has been discussed by Hirsch and Lticke (1985), but any clarification requires further in-depth study which is beyond the scope of this current study. In contrast, B sample which has higher solute content (Table 1) results in much higher Bs component in the as-rolled condition than the case for A, but the amount of cube texture formation is much less. This is consistent with the general findings that cube texture formation decreases as the amount of Fe in solution increases (Ito, Musick and Lticke, 1983). Kinetic Study of Cube Texture Growth Figure 8 shows that the intensity changes with temperature using the (111) is evidence of the asymmetry previously noted (Saimoto et al., 1993). It is clearly seen that the intensities do not change very much with increase in temperature at a temperature ramp rate of 3/minute up to 523 K (250C). However, upon holding at 523 K (250C), the intensity gradually decreases and correlates to the metallographic evidence of incipient recrystallization if an A specimen is held at 523 K (250C) for 40 minutes (Figure 2b). In some cases a measurable increase is observed during the heating range of 398 to 523 K (125 to 250C), which could correlate to the aforementioned increase in Bs component since the S_ monotonically decreases. Figure 9 shows that the (200) intensity increases during-isothermal holding times and that the change of slope occurs upon rapid temperature change of 20/minute. From such temperature change tests, an apparent activation energy, Q, can be determined. Although there are difficulties experimentally in achieving instantaneous temperature increases, this determination is model independent unlike the conventional method. The value for A sample is depicted in Figure 9 and is about 1.9 + 0.2 eV (182 kJ/mol). The large scatter in the data of A sample is attributed to the occurrence of the aforementioned continuous recrystallization resistant deformed grains and to the ambiguity as to whether the Z, position are located for only grains which grow. Thus the Q determinations could vary depending on the selection of data points for the regression analysis. The designated values in the figures used the following criterion. The data points taken during the transition and one after the change were dropped since each scan took about 40 seconds. Thus the true Q value for cube grain growth after the rapid initial recrystallization stage may be lower but above 1.27eV (122 kJ/mol) for self diffusion of A1, QA (Hood, 1986). Obviously at temperatures beyond 563K (280C) the kinetics is being interfered by the particle drag mechanism which retards grain boundary migration as the grain size approaches the critical Zener drag condition as revealed metallographically. Figure 10 shows a similar plot for a B sample. The measured Q values are larger than those for A. Since the scatter in the data points are smaller in this series, except for the inserted figure data where the number of counts is small, the measured values near 4.0eV (386 kJ/mol) should be more reliable. Moreover, the observed value is higher than that of 2.46eV (237.4 kJ/mol) (Rummel et al., 1993) for Fe migration in A1, QFe-A. Such high values for random grain boundary migration have been previously reported ' (Fridman et al., 1973). 
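A minimal sketch of how an apparent activation energy can be extracted from such a change-of-slope (temperature-jump) test, assuming Arrhenius behaviour of the growth rate so that Q = k ln(r2/r1)/(1/T1 − 1/T2). The rates and temperatures below are illustrative placeholders, not the measured slopes from Figures 9 and 10.

import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def apparent_activation_energy(rate_before, T_before, rate_after, T_after):
    """Arrhenius estimate of Q (eV) from growth-rate slopes measured just before
    and just after a rapid temperature jump: Q = k*ln(r2/r1) / (1/T1 - 1/T2)."""
    return K_B_EV * math.log(rate_after / rate_before) / (1.0 / T_before - 1.0 / T_after)

# Placeholder slopes of the (200) intensity vs. time record (arbitrary units per minute)
r1, T1 = 1.0, 543.0   # before the jump
r2, T2 = 4.1, 563.0   # after the jump
q = apparent_activation_energy(r1, T1, r2, T2)
print("Apparent activation energy: %.2f eV (%.0f kJ/mol)" % (q, q * 96.485))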
Cube texture growth reached a plateau at 568 K (295 °C), that is, the growth rate was reduced. This is in keeping with the discontinuous recrystallization phenomenon. The isothermal portion for specimen B4 at 563 K (290 °C) (Figure 10) suggests a slow monotonic decrease in slope. Although further systematic study is required for validation, it may be that this curvature is indicative of a growth rate which varies inversely with time (Humphreys and Hatherly, 1995). DISCUSSION The results listed in Tables 1 and 2 can be summarized as follows: i) Primary recrystallization in the purer A sample occurs by continuous recrystallization, whereas in the less pure B one it occurs by discontinuous recrystallization. ii) Although the volume fraction of Al3Fe is only 25% larger for A versus B, the grain refinement in A is more than twice as effective, indicating that the particle sizes in these cases are very different. iii) Cube texture development under continuous recrystallization conditions is more effective than under the discontinuous case. iv) The apparent activation energy for the purer matrix A is above that for Al self diffusion but less than that for Fe in Al; for the B matrix, the apparent activation energy is higher than that for Fe in Al. Detailed EBSP examination of specific grains before and after heat treatment and limited population counts of cube grains in partially recrystallized microstructures are consistent with the above gross metallographic and textural analysis. However, such studies are more cost effective for large grain growth above 608 K (335 °C), which will be reported later. For the present discussion, we will focus on the difference between the apparent activation energies and on the possible mechanistic reasons. Earlier work by Gordon and El-Bassyouni (1965) suggests that Fe is the most important element affecting grain growth. Thus the present dramatic difference must be due to the precipitation of Fe during the 10 hour anneal at 608 K (335 °C) versus the 1 hour anneal at 673 K (400 °C). The measured Fe impurity levels in the matrix stabilized at the above temperatures are 0.83 ppm for A and 2.26 ppm for B. Fridman et al. (1975), who investigated the motion of <100> tilt boundaries in Al bicrystals, reported that the migration activation energy for random high angle boundaries increases dramatically from 0.7 eV (67 kJ/mol) to a maximum of 2.8 eV (269 kJ/mol) in the total solute range of 2 to 20 ppm (atomic). Unfortunately, the Fe composition was not separated from that of the other elements, but it ranged from 0.5 to 5 ppm. Remarkably, the matrix compositions of the A and B samples reside within this range. Although their experiments were carried out at 667, 769 and 909 K (394, 496 and 636 °C), considerably higher than in the current studies, their argument that the increasing activation energy values are due to adsorption of foreign atoms at the mismatch positions in the interface appears reasonable. As shown earlier by Lücke and Stüwe (1971), the apparent activation energy for the migration of grain boundaries with adsorbed foreign atoms can be expressed as Q = Qsolute + U0 − 2kT, where Qsolute = QFe-Al in the present discussion and U0 is the binding energy between the grain boundary and the solute. This derivation assumes that, as the grain boundary moves, the segregated solutes which become displaced from the moving interface will resegregate, and this action will cause a drag force on the moving boundary. The refinement by Fridman et al. (1975) is to quantitatively incorporate the effect of the degree of segregation.
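To make the magnitude concrete, a back-of-envelope evaluation of this expression using the values measured here (Q ≈ 4.0 eV for the B sample, Qsolute = QFe-Al ≈ 2.46 eV, and T ≈ 563 K, so that 2kT ≈ 0.10 eV) gives

U0 = Q − QFe-Al + 2kT ≈ 4.0 − 2.46 + 0.10 ≈ 1.6 eV,

consistent with the estimate quoted in the following paragraph. The inputs are taken from the surrounding text, not from an independent determination.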
For the present purpose it can be concluded that the B sample must have complete segregation, since it was thermally stabilized at 673 K (400 °C) and the matrix is supersaturated at the recrystallization temperature. Thus the maximum estimated U0 is about 1.6 eV (155.4 kJ/mol). A value of 0.78 eV (75 kJ/mol) was estimated by Fridman et al. (1975) using the total composition of 10 ppm rather than just the Fe content. However, Fe is a transition element, which gives rise to a large QFe-Al/QAl ratio. Since the effect of Fe solute on the recrystallization temperature is so large (Humphreys and Hatherly, 1995), the present finding of a large binding energy should not be surprising. On the other hand, the A sample manifested apparent activation energies lower than QFe-Al. In the model of Fridman et al. (1975), for an incompletely segregated grain boundary, the interface may not be planar but has many protrusions with a ledge structure. This geometry may permit the solute atoms which give rise to boundary drag to migrate by grain boundary diffusion as well as bulk diffusion, which will lower the measured Q values, as observed. Thus, qualitatively, the present results can be interpreted by the grain boundary segregation and drag model. The present results are complicated since the observed discontinuous recrystallization must be due to the stabilizing effect of Fe in solution. A previous study (Legace and Saimoto, 1986) of Cr in Al has shown that transition elements do heterogeneously segregate to dislocations and coherently precipitate. Since the degree of segregation in the current study seems to be very sensitive in the 1 to 2 ppm Fe range, the heating rate and intermediate anneals should greatly affect recrystallization. This precipitation during recrystallization has been confirmed by resistivity measurements (Ito, Musick and Lücke, 1983), but its effect on the kinetics has not been investigated. To improve the design of experiments or processing, a more precise Fe-Al phase diagram at low temperatures than is currently available (Ito, Musick and Lücke, 1983) is necessary and will be discussed elsewhere (Diak and Saimoto). Another question which arises is the role of the preexisting precipitates due to the initial thermal stabilization treatment. These incoherent particles, which become attached to the grain boundary, can coarsen and act as sinks for the solute. Thus the change in particle size distribution before and after recrystallization may be measurable. Furthermore, the boundary drag mechanism may be validated if the solute composition profile behind the moving boundary can be measured; Fe in Al presents an optimum system for such a study. To carry out such experiments, the preparation of ideal specimens is necessary. This study has delineated the thermal-mechanical processing history which could result in such specimens for observation in the high resolution transmission electron microscope. CONCLUSIONS The role of thermal-mechanical processing history on continuous and discontinuous recrystallization of nominally pure Al has been illustrated. The observable microstructural differences can be correlated to the kinetic data, but the difference in the driving forces cannot be quantitatively measured. The mobility of the grain boundaries is highly dependent on the matrix solute composition, which in turn affects the recovery of the stored energy. Thus subsequent studies on recrystallization should ascertain the solute composition directly or indirectly, as was the case in this study.
The apparent activation energy is highly variable because of scatter in the data arising from the inherent nature of the deformed microstructure. Nevertheless, the activation energy for continuous recrystallization is below that of Fe diffusion in Al, whereas that of discontinuous recrystallization is above it. Hence, in the latter case, segregated solutes near the moving grain interface must cause a drag, indicating that a large binding energy exists between Fe solutes and the grain boundary.
5,581.8
1996-01-01T00:00:00.000
[ "Materials Science" ]
Changes in the functional diversity of birds due to habitat loss in the Brazil Atlantic Forest Landscape changes due to habitat loss and fragmentation can result in complex changes in biodiversity and functional diversity. On the other hand, the functional diversity changes also reflect the modifications in the ecosystem functions, patterns of resources use by the species, and species interactions. In the present work, we evaluated how habitat loss at a landscape scale influences the functional diversity of different bird communities (total community, frugivorous, and insectivorous birds) in landscapes of 5–60% of forest cover in the Bahia Atlantic Forest. In a sample design that aimed to minimize the effects of some landscape-scale possible bias, we randomly selected twelve 6 km × 6 km landscapes, and we surveyed eight plots randomly located in forested areas within each landscape. We focused on the species classified as forest-dependent. We calculated the total richness and each species’ relative abundance in each landscape. To evaluate functional diversity, 19 functional traits were chosen for the total community, 11 for the frugivore birds, and 12 for the insectivore birds. The choice of traits represents how species use their resources and the use of these in other studies of functional diversity. As biodiversity changes to habitat loss could be non-linear, we evaluated the response pattern of bird functional diversity to habitat loss using three different metrics (FRic, FEve, and FDiv) for all communities (total community, frugivorous and insectivorous birds). Model selection was used to evaluate the response models (null, linear, and logistic). Our results indicated that as forest amount decreases, we found a sharp decrease in FRic, significantly below 30% forest cover. That suggests a reduction in resource use by species in those landscapes. FEve also showed a sharp decline in landscapes below 15% of habitat, indicating a possible reduction in the structural complexity. FDiv also decreases dramatically in landscapes below 15% of forest amount, which suggests a decrease in functional dissimilarity between species, probably due to environmental filtering, which can lead to taxonomic homogenization. Therefore, we assessed the importance of forests for providing the resources for the permanence of species and their functions, and as a population source. Our study provides quantitative indicators of the relationship between functional diversity and habitat loss, which can be crucial in implementing more robust conservation actions to preserve the Atlantic Forest and its ecosystem services.
Introduction Landscape changes due to habitat loss and fragmentation can result in complex changes in biodiversity, particularly in functional diversity (Zambrano et al., 2019). Functional diversity is a component of biodiversity that evaluates the diversity and distribution of functional traits in communities (Flynn et al., 2009; Meynard et al., 2011). These traits are related to the species’ functional roles (ecological functions) within a community (De Coster et al., 2015; Dehling et al., 2016). Thus, habitat loss can drive an erosion or a turnover of functional traits within communities (De Coster et al., 2015; Farneda et al., 2015; Almeida-Gomes et al., 2019), causing changes in the functions provided by these communities. There is a need for research beyond taxonomic diversity (species richness), which can be achieved through functional approaches (Cadotte et al., 2011; Mouillot et al., 2013). Linking the consequences of the habitat changes that shape communities to ecosystem functioning can be essential to maintain a greater diversity of ecological functions (Cadotte et al., 2011; Mouillot et al., 2013; Chapman et al., 2017). Functional diversity can be considered a measure of diversity that better assesses the functioning of the ecosystem than species richness does (Cadotte et al., 2011), because it does not treat species as equivalent, as is the case with species richness. Species with different characteristics play different roles in ecosystem functioning (Carmona et al., 2017), and they can reveal effects of habitat change that species richness does not. For example, functional diversity can capture how the extinction of functionally different species can have a more significant impact on the functioning of the ecosystem than that of species with similar traits (Díaz et al., 2007; Cadotte et al., 2011; Mouillot et al., 2013).
Therefore, understanding how community patterns influence changes in ecosystem functions through responses to functional diversity can be of great value (McKinney, 2008;Marzluff, 2017). In addition, different from taxonomic diversity, there has yet to be a consensus on how functional diversity can persist in landscapes altered by human actions (Riemann et al., 2017). It is known that factors such as intraspecific variation, species substitution, and niche overlap can influence functional diversity to behave differently from species richness (Díaz and Cabido, 2001). Besides, the sensitivity of surviving species to changes in habitat structure will be influenced by their functional traits (Burivalova et al., 2015). These characteristics can affect the dispersion capacity of individuals or influence the establishment of new habitats (fragments) or the permanence of the existing ones (Tscharntke et al., 2012;Zambrano et al., 2019). So, functional diversity can remain constant or decline regardless of how species richness changes (Cadotte et al., 2011). Additionally, communities of redundant species for the same function can be functionally nested in impacted landscapes. In this way, their functions may be prone to disappear faster than others as the impact increases (Almeida-Gomes et al., 2019). Thus, it is still being determined whether the remaining habitats derived from habitat loss can maintain functional diversity comparable to before of impact and consequently maintain functions (Riemann et al., 2017). Some evidence has already found reduced functional diversity in some taxa after modifications and intensity of habitat use (Riemann et al., 2017). For example, De Coster et al. (2015) and Boesing et al. (2018) observed a reduction in functional integrity and functional diversity, respectively, with an increase in habitat loss in tropical forests. However, studies that tested how environmental impacts and landscape changes affect functional diversity are still scarce and limited to small spatial gradients (Barbaro and Van Halder, 2009;Lohbeck et al., 2012;Magioli et al., 2015). Consequently, our knowledge remains limited to changes in species composition; therefore, we lack knowledge about state of ecosystem functions (De Coster et al., 2015). Assessing the relationship between functional diversity and habitat loss on a landscape scale can be an excellent way to elucidate the mechanisms that drive changes in functional diversity and infer how these can influence ecosystem processes. Soon, as birds have a wide variety of functional traits and are impacted by different aspects of environmental change (Alexander et al., 2019), they become a valuable model to evaluate changes in habitat structure or functioning of the ecosystem and for functional diversity studies (Bregman et al., 2016;Prescott et al., 2016). In addition, they are widely known for performing essential ecological functions such as seed dispersal, pollination, pest control, nutrient cycling, and soil formation (Sekercioglu, 2006). Therefore, this study evaluated how habitat loss at the landscape scale influences the functional diversity of birds for different groups (bird community, frugivores, and insectivorous birds) in a gradient of forest cover from 5 to 60% at the landscape scale in Bahia Atlantic Forest. 
This study is expected to elucidate aspects of the relationship between functional diversity and habitat loss, which can also help build more robust knowledge, especially for threatened biomes, such as the Brazilian Atlantic Forest. Materials and methods This study was part of a larger multi-taxa project on Extinction thresholds due to habitat loss at the landscape scale, developed by a research team of the Federal University of Bahia. We aimed to investigate the effects of the habitat amount at the landscape scale over different groups (CNPq/FAPESB research founding PNX0016_2009). The conceptual basis for this study was built in simulated landscapes (Andrén, 1994;Fahrig, 2003), and predicted several landscapes and patch features essential to population dynamics, such as the mean distance between patches, edge density metrics, and mean patch size, to be dependent and correlated (linearly or non-linearly) to habitat amount at the landscape scale (Gustafson and Parker, 1992). Also predicted that biodiversity persistence in the landscape will depend on the habitat amount at the landscape scale. Theoretical models were built for populations but furthermore were tested with communities' responses in real landscapes (e.g., Rigueira et al., 2013;Lima and Mariano-Neto, 2014;Morante-Filho et al., 2015). Pardini et al. (2010) also pointed out that the correlation between biodiversity metrics and local landscape metrics will also depend on the habitat amount: in landscapes with large amounts of habitat or very scarce habitat cover, biodiversity metrics will have a poor correlation with the patch size. In the first case, rescue effect from large fragments could maintain biodiversity even in small patches. And in the former, the scarcity of habitat increases the distance between patches and makes recolonization unfeasible after local extinctions, and all fragments will tend to suffer biodiversity erosion with time. Study area The study area includes areas of the Bahia Atlantic Forest, which is currently very fragmented (Ribeiro et al., 2009). This region is formed by several forest formations that extend throughout Brazil, such as ombrophilic forest, mountainous forest, seasonal semideciduous forest, sandbank, and mangrove (Tonhasca, 2005), with an annual average temperature of 25 • C. Landscape selection Atlantic Forest landscapes were sampled in Bahia, Brazil, in a wide region of 600 km × 150 km along the Atlantic coast, approximately 93.500 km 2 . Between latitudes 11 • 80 and 18 • 49 S and longitudes 21 • 24 and 40 • 08. We used forest cover maps (SOS Mata Atlântica and Instituto Nacional de Pesquisas Espaciais, 2008) in a Geographic Information System (GIS), and in the entire region, we allocate 1,500 non-overlapping cells of 6 km × 6 km (36 km 2 ). We consider these cells as landscapes wide enough to test the effects of the landscape at community levels. In this universe we calculate percentages of forest habitat coverage in each landscape, and we randomly choose 12 landscapes in a range of forest percental cover from 5 to 60% with a 5% step. We allowed a variation of ±2% in each desired percentual. 
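A minimal sketch of this cover-stratified random draw, assuming a table of candidate 6 km × 6 km cells with a precomputed forest-cover percentage (the column names, candidate values, and random seed are illustrative, not from the original GIS workflow):

import random

def pick_landscapes(cells, targets=range(5, 61, 5), tolerance=2.0, seed=1):
    """For each target forest cover (5..60% in 5% steps), draw one candidate
    cell whose cover lies within target +/- tolerance (percent)."""
    rng = random.Random(seed)
    chosen = {}
    for target in targets:
        candidates = [c for c in cells if abs(c["forest_pct"] - target) <= tolerance]
        if candidates:
            chosen[target] = rng.choice(candidates)
    return chosen

# Made-up candidate cells standing in for the 1,500 non-overlapping grid cells
rng = random.Random(0)
cells = [{"id": i, "forest_pct": round(rng.uniform(3.0, 62.0), 1)} for i in range(1500)]
for target, cell in sorted(pick_landscapes(cells).items()):
    print(target, cell["id"], cell["forest_pct"])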
To reduce undesirable variability and ensure more homogeneous landscape contexts, the following criteria were also considered for landscapes validation: (1) matrix composition: 80% of the 6 km × 6 km landscape matrix must be composed of nonforest physiognomies, that prevented the matrix from acting as an alternative habitat for the species (Dixo and Martins, 2008); (2) external source areas: we aimed to reduce the likelihood of large forest remnants in the vicinity of the landscape acts as source areas. We considered a larger 18 km × 18 km landscape surrounding the 6 km × 6 km target landscapes (the eight neighbor landscapes), these larger landscapes could not have an LPI (LPI -Larger Patch Iindex, McGarigal and Marks, 1995) greater than the 6 km × 6 km target landscape LPI. LPI is a metric that assesses whether the adjacent forest remnants could act as areas of origin around the 6 km × 6 km landscape; and (3) the 18 km × 18 km landscape must have a native vegetation cover like the 6 km × 6 km landscape. After checked these criteria, a landscape of each forest cover percentual was selected randomly from the universe of possible (Figure 1). Each landscape comprised areas of the Atlantic Forest in intermediate, or advanced stages of regeneration and a predominantly non-forested and non-urban matrix (fields, pastures, agriculture). After selecting the landscapes, they were validated in the field to verify that all criteria were accomplished. The validation occurred before sampling through visits in landscapes verifying all spatial criteria. Bird sampling In each landscape, the bird community was sampled in eight plots of 0.6 km × 0.6 km randomly distributed only in forests, as this is a landscape scale survey, plots could be in different forest patches. One plot was sampled each day. Our sampling strategy consisted of defining in each plot four sampling points 100 m away and at least 50 m from the forest edge (Bibby et al., 1992). We recorded all birds seen and heard at each sampling point for 20 min. Sampling was conducted from 5:30 am to 9:00 am, an interval that included the period of most significant activity of the birds (5:30 am-10:00 am). After identifying the species, the classification system proposed by Parker et al. (1996) lists species dependent on forest, classified as those that only had forest habitats. The species richness of each landscape was calculated by the sum of the species of each plot, and the relative abundance was calculated using the occurrence of each species in each plot divided by the total plots (eight plots). Bird functional diversity Only species dependent on forest habitats (n = 210) were used to evaluate the functional diversity because they are more sensitive to habitat loss. To evaluate whether there would be differences in the responses of species communities to habitat loss we assessed the functional diversity of the total community, frugivores, and insectivorous birds. For this, 19 functional traits were chosen for the total community, 11 for the frugivore birds, and 12 for the insectivore birds ( Table 1). The choice of traits in the evaluation of this study represents how species use their resources and the use of these in other studies of functional diversity. The functional traits used in this study were obtained from Parker et al. (1996), Dunning (2008), Sigrist (2013) and the websites WikiAves (2017) 1 and Del Hoyo et al. (2014) but also from personal knowledge (foraging period). 
Due to the inherent difficulty in finding traces for all species, some standardization has been made. For species that did not have data from the "clutch size" category (n = 37) we used the value of "2 eggs." Because according to Jetz et al. (2008) this is the general average of egg laying. In the category "nest location, " for the species that we did not find information (n = 11), we used the data from phylogenetically close species, following the classification by Clements et al. (2017). Functional diversity metrics evaluate different aspects of species functionality in the community, such as uniformity and dispersion of functional traits, and none single metric can evaluate all these aspects simultaneously (Villéger et al., 2008). Therefore, three independent metrics were used to assess functional diversity (FRic, FEve, and FDiv). According to Villéger et al. (2008), these three metrics complement and constitute a suitable combination to assess functional diversity. Fric is a metric that considers the total area occupied in the functional space (Villéger et al., 2008) and represents the amount of functional space filled by the community (niche) (Villéger et al., 2008). A low functional richness (FRic) could indicate that the available resources are not used (Mason et al., 2005). FEve measures whether the species' traits are evenly distributed in the occupied space. In other words, low values of FEve may demonstrate the existence of overuse of resources by some species in the community (Schleuter et al., 2010), associated with less efficiency in the resources use. On the other hand, FDiv measures the relative abundance of species with unique traits, which can indicate a niche differentiation. Therefore, a low FDiv represents a low abundance of species with unique characteristics, with possibly increased competition between species (Schleuter et al., 2010). Each metric (FRic, FEve, and FDiv) was calculated for each community in the sampled landscapes (three metrics × three communities × twelve landscapes). As we had continuous and categorical variables (traits), Gower's distance was used to estimate functional diversity in all communities (Podani and Schmera, 2006;De Bello et al., 2010). Data analysis The estimated values of each element of the functional diversity were used to model the responses of the communities (total bird community, frugivorous, and insectivorous birds) to changes in forest cover at the landscape scale. To evaluate the response type, we used a model selection approach, with three possible responses of the bird functional diversity to forest cover, which was: No effect of forest cover on the metrics of functional diversity (null model), with linear effect on (f (x) = ax + b) (gradual and linear change of functional diversity) and, with effect non-linear and abrupt change of functional diversity, modeled with a four-parameter logistic function (f (x) = d + (a/(1 + exp ((bx)/c))). We selected the best model using the Akaike Information Criterion corrected for small sample sizes (AICc), which gives the probability of a model being the best model and is calculated based on its likelihood. The model with the highest AICc weight (ranging from 0 to 1), which considers AICc values and parameter amount, was accepted as the most plausible. After the model selection, we also analyzed the best models' residual distributions and the parameters' confidence intervals. Models with AICc weights values with a difference of less than two decimals was considered equally probable. 
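The analysis itself was carried out in R (FD, stats, bbmle); the sketch below only illustrates the model-comparison logic in Python, with a guessed four-parameter logistic parameterization (the exact form in the original formula did not survive extraction) and AICc weights computed from least-squares fits under a Gaussian error assumption.

import numpy as np
from scipy.optimize import curve_fit

def aicc(rss, n, k):
    # Gaussian least-squares AICc; k counts fitted parameters plus the error variance
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

def logistic4(x, a, b, c, d):
    # assumed four-parameter logistic; the paper's exact parameterization is unclear
    return d + a / (1.0 + np.exp((b - x) / c))

def model_weights(cover, metric):
    """AICc weights for null, linear and logistic responses of one FD metric to forest cover."""
    n = len(metric)
    scores = {}
    scores["null"] = aicc(np.sum((metric - metric.mean()) ** 2), n, 2)
    slope, intercept = np.polyfit(cover, metric, 1)
    scores["linear"] = aicc(np.sum((metric - (slope * cover + intercept)) ** 2), n, 3)
    try:
        p0 = [np.ptp(metric), np.median(cover), 5.0, metric.min()]
        p, _ = curve_fit(logistic4, cover, metric, p0=p0, maxfev=10000)
        scores["logistic"] = aicc(np.sum((metric - logistic4(cover, *p)) ** 2), n, 5)
    except RuntimeError:
        pass  # logistic fit failed to converge
    best = min(scores.values())
    rel = {m: np.exp(-0.5 * (v - best)) for m, v in scores.items()}
    total = sum(rel.values())
    return {m: r / total for m, r in rel.items()}

# Example call: model_weights(np.array(cover_pct), np.array(fric_values))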
Further analysis of the residuals (overdispersion and heteroscedasticity) and of the parameters' confidence intervals was used to decide which model to retain. In addition, a Pearson correlation between the estimated functional diversity values (FRic, FEve, and FDiv) and the species richness of each community was used to evaluate the influence of species richness on functional diversity. All analyses were performed in R 2.15 (R Development Core Team, 2012), using the packages FD, stats, and bbmle. Habitat loss effects on functional diversity We recorded a total of 273 bird species belonging to 48 families in the landscapes, 210 of which were classified as forest-dependent species. All 210 species were used in the functional diversity analysis of the total community; 104 were insectivorous, 32 frugivorous, 38 omnivorous, 20 granivorous, 9 nectarivorous, and 8 carnivorous species. The linear model was selected as the best description of the relationship between forest amount at the landscape scale and the FRic of the total community (Wi = 0.570). For FEve and FDiv, the logistic model was selected (Wi = 0.35 and Wi = 0.691, respectively). For the frugivorous birds, the logistic model was selected for the FRic metric (Wi = 0.526), and null models were selected for the FEve (Wi = 0.811) and FDiv (Wi = 0.770) metrics. For the insectivorous birds, null models were selected for the FRic (Wi = 0.728) and FDiv (Wi = 0.837) metrics, and the logistic model was selected for the FEve metric (Wi = 0.040) (Figure 2 and Table 2). Functional diversity metrics and habitat loss The three metrics used in this study measure different aspects of functional diversity in a community, and it is therefore difficult for a single metric to capture changes in functional space associated with specific ecological issues (Boersma et al., 2016; Kuebbing et al., 2018). Our study found different effects of habitat loss for each community depending on the functional diversity metric used. FRic declined linearly with forest reduction for the total community and non-linearly for the frugivorous species. It is a metric that assesses the volume of functional space occupied by species; thus, communities with high FRic values may have a greater variety of functional traits, which potentially corresponds to greater use of resources by species (Cannon et al., 2019). It is known that FRic can be correlated with species richness (Cadotte et al., 2011), and we also found this in all communities. This means that the larger and more diverse the community in landscapes with larger amounts of forest, the larger the volume of the functional space (Pakerman, 2011). As forest cover decreases, there is a consequent change in landscape structure (e.g., inter-patch distance and patch size), with a decrease in the number of species followed by a decrease in the diversity of functional traits, reducing the volume of the functional space (FRic) of the total and frugivorous bird communities. Other studies have found evidence of a decrease in FRic with specific changes in landscape structure due to habitat loss. For example, Bovo et al. (2018) found a decrease in the traits of frugivorous birds when patch size decreased, Santillán et al. (2019) found a decrease in FRic in fragmented forests, and Cannon et al. (2019) found a decrease in FRic with increasing distance between continuous forests.
Additionally, these communities responded differently to the decrease in forest amount, with a linear decrease in FRic for the total community (Figure 2D) and a non-linear decrease for the frugivorous community, with a sharp drop below 30% forest cover (Figure 2E). Possibly, for the total community, if conditions are favorable to an increase in species richness at larger amounts of habitat, the assemblages will be characterized by redundant species (Pakerman, 2011). Therefore, despite the non-linear decline in species richness in the total community following forest reduction, the decrease in functional traits occurred more slowly (Figures 2A, B). For the frugivorous birds, however, the decline in FRic may have been fast (non-linear) precisely because of the smaller number of redundant species in that community. According to Ibarra and Martin (2015), assemblages with few species are expected to show low functional richness due to the absence of functional redundancy. We found a marked decrease in frugivore richness from approximately 40% forest cover (Figure 2B), which may have been reflected in a smaller variety of functional traits and a decline in FRic in this community. Since FRic is also associated with the amount of resource use and with the functions performed by species in the community, it is possible that the ecosystem functions performed by frugivorous birds are already compromised in landscapes below 30% of habitat. In contrast, there was no effect of forest cover on the FRic of the insectivore community, although the species richness of this community decreased with forest loss (Figure 2C). According to Murray et al. (2017), changes in species richness without an effect on FRic would only occur if all species added to or removed from the communities were functionally redundant, and this may have occurred in our study for the insectivorous bird community. Insectivorous birds seem to have high levels of redundancy due to their high number of species, allowing a greater capacity to adapt to changes in the landscape (Luck et al., 2013). In addition, at least 50% of birds are essentially insectivorous (49% in our study), which may be why FRic showed no effect of habitat loss in this community. Figure 2. Responses of species richness and of the FRic, FEve, and FDiv metrics (rows) to habitat loss for the different bird communities (columns): total community (A,D,G,J), frugivores (B,E,H,K), and insectivores (C,F,I,L). Atlantic Forest of Bahia, Northeast Brazil. We found a smooth non-linear decrease in FEve as forest cover increased above 15% for the total community and the insectivorous birds, with no effect for the frugivorous birds. High FEve values may indicate that communities use resources efficiently, as abundances are evenly distributed (Cannon et al., 2019), while low FEve values indicate that some parts of the functional space are empty while others are densely populated. The slight decrease in FEve in landscapes with larger amounts of forest could reflect increased competition caused by the higher species richness in those landscapes, with greater competition for resources lowering FEve. Likewise, our result corroborates the studies of Pakerman (2011) and Ding et al. (2013), who found an increase in FEve with increased habitat disturbance and fragmentation.
Therefore, low levels of FEve were indicative of sites with little disturbance and, thus, of habitats in which competition has great importance in structuring the communities (areas with low disturbance). FEve can be high in more disturbed habitats, where competition should be less important in structuring the community (Pakerman, 2011). Table 2. Model selection for the relationships between the functional diversity metrics (FRic, FEve, and FDiv) and forest cover (FC) for the different bird communities (total community, frugivores, and insectivores). dAICc is the difference between the AICc of a model and that of the best model; K is the number of model parameters; Wi is the Akaike weight. Selected models are in bold. N = 12 for all analyses. Atlantic Forest of Bahia, Northeast Brazil. In addition, the FEve of the frugivorous bird community may not have responded to habitat loss precisely because there are fewer redundant species and consequently low competition, and this community was, in theory, able to maintain efficient resource use along the gradient of habitat loss. Moreover, some empirical and simulation studies (Villéger et al., 2008; Mouchet et al., 2010; Ibarra and Martin, 2015) also did not find a relationship between the FEve of bird communities and increasing richness, while other studies found lower FEve in pasture areas compared with forest remnants (Prescott et al., 2016), or equivalent FEve between forest remnants and monocultures (oil palm) (Edwards et al., 2013). Thus, it is not yet clear how the FEve of bird communities responds to forest loss and, as suggested by Sayer et al. (2017), further studies are needed to better understand this relationship. For the FDiv metric, there was a smooth non-linear decrease below 15% forest cover for the total community, while for the frugivorous and insectivorous birds there was no effect of forest cover on this metric. FDiv assesses levels of niche differentiation (Cannon et al., 2019), and a decline in FDiv can be associated with low dissimilarity between the most abundant species and the other species, and with taxonomic homogenization (Ibarra and Martin, 2015). This relationship between FDiv and forest cover for the total community possibly arose because, when forest cover decreases, so does the habitat area and the number of niches and resources available to be exploited by the species (Tews et al., 2004). This may also have been reflected in the decrease in functionally unique species when forest cover declined below 15%. Such a decline can indicate that the use of resources in these landscapes is less efficient, compromising the role of birds in ecosystem functioning (Mason et al., 2005). Additionally, habitat loss did not affect the FDiv of the frugivorous and insectivorous birds. This probably occurred because, in these communities, the most abundant species did not have very distinct functional traits (they belong to the same functional group) and are not grouped around the average trait values (Ding et al., 2013) when compared with the total community. That is, species in the frugivorous and insectivorous bird communities are more similar to one another, and these communities probably did not contain functionally unique species, which likely influenced FDiv. Habitat loss and functional diversity According to our results, it was possible to identify three patterns in the influence of habitat loss on functional diversity.
First, we observed a reduction in the amount of resource use by species as habitat loss increased (FRic of the total community and of the frugivorous birds). According to Tilman et al. (1997), a greater diversity of traits increases the likelihood of less niche overlap (complementarity), greater resource use, and a larger number of functions in the ecosystem. Thus, as species respond differently to disturbance (Henle et al., 2004), when forest cover decreases below 30% frugivorous birds show a dramatic decrease in the amount of resource use. Bovo et al. (2018) found that seed dispersal by large frugivorous birds, a group largely responsible for this dispersal, can be reduced in small fragments. A decrease in fragment size as habitat area decreases in the landscapes is expected (Gustafson and Parker, 1992), and we also found this result in our real landscapes (Table 3). This means that functions performed by birds are likely to be compromised below 30% forest cover. Approximately 90% of woody species are dispersed by animals (Jordano, 2016), with birds being an important group in the absence of other vertebrates (Holbrook et al., 2002). Considering the importance of the birds' role in forest regeneration through seed dispersal (Silva and Tabarelli, 2000), our results are worrisome. Moreover, in the long run, the decline in seed dispersal could decrease the recruitment of plants important to birds (Galetti et al., 2013) and increase the vulnerability of this group. It is possible that landscapes below this percentage of forest (<30%) have a reduced capacity for regeneration and resilience to forest disturbances. Second, the decrease in forest cover may have caused a decrease in the structural complexity of the habitat in landscapes below 15% cover and influenced how species consume their resources. According to García-Morales et al. (2016) and Schleuter et al. (2010), high values of FEve (e.g., for the total community and insectivores in landscapes with <15% cover) may suggest that the habitat is not structurally complex; moreover, this metric is associated with highly disturbed environments (Pakerman, 2011; Ding et al., 2013). Indeed, increased habitat loss can increase the amount of edge habitat, causing changes in communities through abiotic changes and changes in biotic interactions (Murcia, 1995). These can lead to a decrease in the availability of resources and changes in how they are consumed by the species, especially in landscapes below 15% forest cover. Third, if habitat loss implies a decrease in the habitat's structural complexity, it is possible that increased habitat loss is leading to environmental filtering; in other words, the environment acts as a filter, selecting only species with functional traits that can tolerate the habitat changes (Knapp and Kühn, 2012). Our results indicate that, below 15% cover, there was a smooth decline in species dissimilarity, a possible taxonomic homogenization, as forests become habitable only for species with functional traits that allow them to exploit resources in these environments. Our results show that increasing habitat loss can lead to lower consumption and lower availability of resources for species (FRic and FEve); in addition, competition between species may have increased, as already highlighted by Schleuter et al. (2010) (FDiv). Together, all these factors will act as environmental filters on the persistence of species in less forested landscapes, decreasing their resistance to disturbance and their resilience.
Implications for conservation Our results corroborate other studies on the importance of forested landscapes as population sources and as providers of the resources necessary for species' persistence (Gilroy and Edwards, 2017; Cannon et al., 2019). Thus, conservation efforts aimed at maintaining and increasing forested habitats above extinction thresholds would be of great value because, above this percentage, natural landscapes can keep their ecosystem functions and provide important ecosystem services for humanity. Our results also indicate that landscapes around the extinction threshold (∼30%) should receive attention and be indicated as priorities for restoration efforts, so that their functioning is not compromised, as suggested by Pardini et al. (2010). On the other hand, in landscapes with less than 15% habitat, bird taxonomic homogenization may occur due to structural changes in the landscapes, leading to severe changes in the functions and services performed by this group. The situation becomes more worrying in the Brazilian scenario because the Brazilian Atlantic Forest is extremely fragmented (more than 240,000 patches), with only 16% of its original cover remaining, 42% of which consists of fragments smaller than 250 ha (Ribeiro et al., 2009). A functional decline in birds risks compromising ecosystem services such as the restoration of disturbed ecosystems, tree reproduction, insect control, rodent regulation, nutrient cycling, and economic and cultural uses such as birdwatching tourism (Sekercioglu et al., 2004; Barbaro et al., 2017). Data availability statement The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
7,785.4
2023-02-16T00:00:00.000
[ "Environmental Science", "Biology" ]
Some Possible Particles Decays from pp Collisions at LHC Experiment Some of the possible decays from pp collisions at the LHC experiment are considered. The vector bosons mediating the electroweak interactions and the right-handed leptons, except neutrinos, are assumed to be the main resultant particles of the pp collisions. Neutrinos and anti-neutrinos will be observed when a doubly electron-charged vector boson is among the resultant particles. Charge conservation is thought to be the dominant factor in these decays. The transition amplitudes for the Feynman diagrams of these decays are written down. Introduction The Standard Model (SM) has difficulties with the following problems: 1) the Higgs particle mass; 2) including the gravitational interactions in the SM; 3) the representation of dark matter in the model; 4) the masses of the neutrinos; 5) the large energy gap between the electroweak scale and the Planck scale; 6) the unification of the gauge couplings of the electroweak interactions at some energy scale, if the four fundamental forces are the result of a local gauge theory with a higher fundamental symmetry. The recent experiments, mainly the LHC, and the future colliders are expected to address these problems. The electromagnetic and weak interactions are combined in a single representation by the Glashow-Weinberg-Salam model of the electroweak interactions, based on the gauge group SU(2) × U(1), which is in good agreement with the experimental results. Later the strong interactions were included in the theory and the group is written as SU(3)_C × SU(2)_L × U(1)_Y. This group defines the SM particles. Since the four fundamental forces (weak, electromagnetic, strong, and gravitational) become equal at the Grand Unified Theory (GUT) energy scale, some additional neutral gauge bosons might be discovered at the CERN LHC experiment, as well as the lightest stable supersymmetric particle of dark matter and particles which have at the same time the properties of leptons and quarks, or states that change into each other, named lepto-quarks or exotic fermions. The fermionic particles of the SM are grouped in the representation of three generations as (ν_e, e; u, d), (ν_μ, μ; c, s), and (ν_τ, τ; t, b). The first two particles of each group are the leptons, with charges 0 and -1, respectively, in units of the electron charge. The last two are quarks, with electric charge +2/3 and -1/3 for the u- and d-type quarks. Each particle described has its anti-particle. These particles all together build up the matter around us. Some Particles Decaying from pp Collisions at LHC The breaking of E6 gives other gauge groups. Concentrating on the SU(2)_L × U(1)_Y × U(1) effective theory, the U(1)_η are the possible U(1)'s in broken E6 GUTs, and the extra U(1) is an Abelian symmetry with its associated "hypercharge" Y [1]. The symmetry will be spontaneously broken by a Higgs sector consisting of one doublet and one singlet [1].
The decays pp → W′ → ll have the same validity as the decays given by pp → Z′ → ll, where l refers to anti-leptons except right-handed neutrinos. This is true because in [2] [3] the mass of the Z′ is estimated by calculating the Z′ decay width following the same procedure used for the Z⁰ boson decay width. Using the relation [4] M_W = M_Z cos θ_W, there is no doubt in writing the analogous relation for the Z′ and W′ as M_{W′} = M_{Z′} cos θ_W. In [2] [3] the Z′ mass is estimated to be around 630 GeV and the Weinberg angle is taken at its GUT-scale value. Therefore, M_{W′} ≈ 498 GeV. Using the conservation of the protons' charges, some of the possible pp collision decays will be as shown in Figure 1 and Figure 2. When it is assumed that the anti-leptons are the resultant particles together with the vector bosons, the right-handed neutrinos cannot be seen as resultant particles. In these decays, the neutral and positively charged Higgs bosons could take the place of the intermediate vector bosons. The Higgs bosons would be the positively charged ones appearing among the resultant particles, such that the final state is again ll. The neutral Higgs boson could be the one replacing the neutral mediating vector bosons Z′ and W′. The scattering amplitude for the diagrams in Figure 1 and Figure 2 can be written down by using the amplitude evaluated for K⁺π⁺ scattering in [5], where p₃ and p₄ are the four-momenta of the resultant particles. The 4-current for the proton is: Since the protons have the same 4-momentum it is true to write down: Therefore, the exponential term, not written in Eqn. (10), is equal to 1. The scattering amplitude can be written as in Equation (12). Again, the wave functions of the two protons must be the same, so a factor of 1/2 is taken in Equation (12). In Equation (12), N, N₃, and N₄ are the normalization factors; −ig_{μν}/q² is the propagator for the mediating bosons; q refers to the momentum difference between the final and the initial states; and four-momentum conservation is given by the delta-function term for the incoming and outgoing particles. Conclusion Although the SM has some difficulties, it remains a sound model to be extended to the GUT scale. By using the previous works, the W′ gauge boson mass is estimated to be around 498 GeV. The resultant particles from pp collisions are assumed to be a doubly positive electron-charged vector boson with a neutrino or anti-neutrino, or two vector bosons each with one positive electron charge; instead of the latter, two anti-leptons (excluding neutrinos) are also accepted. The pp collisions at the GUT scale will give the scattering amplitudes as worked out here. Figure 2. The diagram for the decay pp → V V → ll.
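As a consistency check, the two mass values quoted in the text (a Z′ mass of about 630 GeV and a W′ mass of about 498 GeV) agree with the tree-level relation above if the Weinberg angle is taken at the commonly quoted GUT-scale value sin²θ_W = 3/8; that specific value is an assumption here, since the number itself is not stated in the text.

```latex
M_{W'} = M_{Z'}\cos\theta_W, \qquad
\cos\theta_W = \sqrt{1-\sin^2\theta_W} = \sqrt{1-\tfrac{3}{8}} \approx 0.79,
\qquad
M_{W'} \approx 630\ \mathrm{GeV}\times 0.79 \approx 498\ \mathrm{GeV}.
```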
1,311.8
2014-06-30T00:00:00.000
[ "Physics" ]
BOS at LSCDiscovery: Lexical Substitution for Interpretable Lexical Semantic Change Detection We propose a solution for the LSCDiscovery shared task on Lexical Semantic Change Detection in Spanish. Our approach is based on generating lexical substitutes that describe old and new senses of a given word. This approach achieves the second best result in sense loss and sense gain detection subtasks. By observing those substitutes that are specific for only one time period, one can understand which senses were obtained or lost. This allows providing more detailed information about semantic change to the user and makes our method interpretable. Introduction LSCDiscovery is a shared task on Lexical Semantic Change Detection (LSCD) in Spanish (D. Zamora-Reina et al., 2022). The participants were provided with two corpora in Spanish, corresponding to 1810-1906 and 1994-2020 respectively, and were asked to solve two subtasks. In the first subtask the participants were asked to rank the given list of about 4K words according to the degree of their semantic change. The second subtask required to determine for each given word if its senses occurring in two corpora are different (and optionally, if it has acquired some new senses, and if it has lost any old ones). Background Our approach is based on the bag-of-substitutes (BOS) representation of word meaning in context (Başkaya et al., 2013;Arefyev and Zhikov, 2020). Lexical substitutes are those words that can replace a given target word in a given text fragment without making this fragment ungrammatical or substantially changing the meaning of the target word. For ambiguous words, lexical substitutes depend on their meaning expressed in a particular context. For instance, some reasonable substitutes for the word fly in the sentence A noisy fly sat on my shoulder are bug, beetle, butterfly, firefly, insect, etc. But in the sentence We will fly to London they are different: walk, run, bike, etc. In order to generate lexical substitutes, we employ the XLM-R 1 masked language model (Conneau et al., 2020). This model was pre-trained on 2.5T of data in 100 languages as a masked language model, i.e. it received text fragments with some tokens hidden (replaced with the special <mask> token) and was trained to guess those hidden tokens by their context. This kind of pre-training is partially aligned with the lexical substitution task because the model can predict words compatible with the given context. However, there are no guarantees that these words are similar or related by meaning to the target word. Suitable types of lexical substitutes (e.g., synonyms, hypernyms, cohyponyms) and suitable degree of their similarity to the target word depend on the target task and can be controlled with various techniques explored in . In our solution, we employ the dynamic patterns proposed by Amrami and Goldberg (2018) and explained in 3.2. Unlike the traditional bag-of-words representation, which contains those words that occur in a text fragment, the BOS representation is built from lexical substitutes. Thus, it better represents the meaning of some specific target word in a given text fragment rather than the whole fragment in general. Clustering of the BOS vectors is a successful approach to solve the Word Sense Induction (WSI) task, i.e. to discover senses of ambiguous words. This approach was explored in many papers, including (Başkaya et al., 2013;Goldberg, 2018, 2019;Arefyev et al., 2019 among others. 
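As a concrete illustration of how a masked language model proposes context-dependent substitutes, the sketch below uses the Hugging Face transformers fill-mask pipeline with the publicly available xlm-roberta-base checkpoint on the fly example above; the checkpoint choice and the code are illustrative assumptions, not the exact setup used in the shared-task system.

```python
from transformers import pipeline

# Fill-mask pipeline with a public XLM-R checkpoint (illustrative choice).
fill = pipeline("fill-mask", model="xlm-roberta-base")

# Masking the target word lets the MLM propose words compatible with the context.
for pred in fill("A noisy <mask> sat on my shoulder.", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```

Masking "fly" in the first example sentence should mostly yield insect-like nouns, while masking it in "We will fly to London" should yield motion verbs, which is exactly the context dependence exploited by the BOS representation.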
Also, a substitution-based WSI model was employed to solve the LSCD task in (Arefyev and Zhikov, 2020; Arefyev and Bykov, 2021). However, in our solution we avoid solving the more general and probably more difficult WSI task that requires clustering. Table 1: For the word actas (reports) in ayer recibimos dos actas literales (yesterday we received two verbatim reports), the 5 most probable substitutes with 1 or 2 subwords are shown, for the patterns with y (and) and incluso (including). Instead, we propose methods to directly obtain LSCD predictions from the BOS vectors. Following (Giulianelli et al., 2020; Laicher et al., 2021), we will denote this average as the Average Pairwise Distance (APD). Notice that our vector representation is very different from those works. Model description For the second subtask, if the APD is greater than a certain threshold, we predict that the word has changed its meaning. To determine whether it has acquired new senses and whether any old senses were lost, we propose three different methods based on pairwise distances. Collected data For each target word w_i, we lemmatize both corpora and retrieve all examples with w_i in its different grammatical forms. Then we take the same number N_i of examples from the old and the new corpus. Substitute generation For each example we generate several types of substitutes with different dynamic patterns, post-process them and combine them together to get a single vector representation. Table 2: In LS_m1_7, we employ 7 single-subword patterns with y (and), incluso (including) and por ejemplo (for example) with the specified weights. Dynamic patterns are similar in nature to the Hearst patterns (Hearst, 1992). They were proposed in (Amrami and Goldberg, 2018) to obtain from masked language models those substitutes that not only fit the given context, but are also similar or related to the target word by meaning. For instance, using patterns with the Spanish conjunction y (English: and) we hope to obtain mostly co-hyponyms of the target word, while patterns with the adverb incluso (English: including) should bias the model towards generating hypernyms or hyponyms, depending on the position of the target word. Table 1 shows some examples. Table 2 lists all dynamic patterns we use. All patterns contain the special token <mask> that XLM-R is asked to recover, and some of them contain the variable T representing the target word. Given a pattern and an example for some target word, first we replace the target word with this pattern, and then replace the variable T (if any) back with the target word. For simplicity, let us consider an example in English. Given the sentence We can fly to London and using the pattern <mask> (and T), we first obtain We can <mask> (and T) to London, and finally have We can <mask> (and fly) to London. The vocabulary of XLM-R consists of 250K subwords in 100 different languages, which are sometimes whole frequent words, but most often pieces of words. To better describe word meaning, we generate substitutes consisting of different numbers of subwords. To achieve this, we apply patterns with several <mask> tokens, for instance, <mask><mask> (y T). To find probable sequences of subwords that could fill the <mask> tokens, we apply a slightly modified greedy decoding strategy. For the leftmost <mask> token, the topK = 150 most probable subwords are predicted first. Then for each of those subwords we generate one continuation using greedy decoding.
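A minimal sketch of the pattern-application step just described; it uses naive string replacement, so real text would need proper tokenization and handling of repeated words, and it is only an illustration of the idea.

```python
def apply_pattern(sentence: str, target: str, pattern: str) -> str:
    """Replace the target word with a dynamic pattern, then put the target back
    in place of the variable T, as in the 'We can fly to London' example above."""
    masked = sentence.replace(target, pattern, 1)   # We can <mask> (and T) to London
    return masked.replace("T", target, 1)           # We can <mask> (and fly) to London

print(apply_pattern("We can fly to London", "fly", "<mask> (and T)"))
# -> We can <mask> (and fly) to London
```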
Below we will say that a substitute is not generated for a particular pattern in a particular example if it was not among the topK substitutes generated this way. For computational reasons, we generated only substitutes with one or two subwords and did not apply beam search for decoding. Examples of two-subword substitutes are given in Table 1. Substitute post-processing and combination Next, we post-process all substitutes for each example: we convert them to lower case, remove all words except the last one from multi-word substitutes, and apply stemming. After post-processing, we sum the probabilities of duplicated substitutes. For each example, we combine the substitutes generated for different patterns by calculating the weighted average of the corresponding probability distributions. In LS_m1 and LS_m2 (Lexical Substitution with one-subword and two-subword substitutes, respectively), for the combination we use the patterns and weights presented in Tables 2 and 3. The weights were selected based on a few experiments on the development set consisting of 20 words, so these weights are likely suboptimal. It is possible that one of the substitutes is not generated by XLM-R for a certain pattern. In this case, during combination we assume that the corresponding probability is equal to the minimal probability among all substitutes generated for this pattern. Table 4 (excerpt): our results — LS_m1_7+APD: -0.125 (9), -0.129 (8) — and our post-evaluation results. BOS vectors For each target word w_i we build 2N_i BOS vectors for the old and new examples. These vectors are basically bag-of-words vectors built from the topK most probable substitutes for each example. Only substitutes that were generated for more than 3% and less than 90% of the examples of the target word are taken into account (we used CountVectorizer from scikit-learn, where min_df = 0.03 was selected in the range from 0 to 0.05 with a 0.01 step and max_df = 0.9 was selected in the range from 0.85 to 1 with a 0.01 step). Graded Change Discovery APD (Average Pairwise Distance). After building the BOS vectors, we calculate the cosine distance from each old to each new example, resulting in a matrix of size N_i × N_i. The APD is calculated by averaging all cells in this matrix. Finally, we sort the test words according to their APDs and submit their ranks as the predicted change scores (there was a mistake in the original implementation of the ranking procedure; after the competition we fixed it, which significantly improved the results of this method, see Table 4 for comparison). Binary Change Detection For the main Binary Change Detection subtask, if the calculated APD is greater than a certain threshold (threshold = 0.8, selected on the development set in the range from 0.7 to 0.9 with a 0.05 step), then we predict that this word has changed its meaning. In this case we also try to determine whether it has acquired new senses and whether it has lost some old ones (the sense gain and sense loss detection subtasks). We try three methods to determine that. Table 5 (excerpt): CH F1 / GAIN F1 / LOSS F1 — baselines: baseline1 0.537 (9), NaN (8), NaN (6); baseline2 0.222 (10), 0.211 (7), 0.000 (6); best results of other teams: myrachins 0.716 (1), 0.491 (3), 0.688 (1); dteodore 0.709 (2), 0.000 (8), 0.000 (6).
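Before turning to those three methods, the sketch below illustrates the BOS/APD computation described above with scikit-learn; the substitute strings are invented, and min_df is relaxed because only four toy examples are used (the system itself used min_df = 0.03 and max_df = 0.9 over hundreds of examples per word).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_distances

# Hypothetical substitute lists: one space-joined string of substitutes per example,
# for the old corpus (1810-1906) and the new corpus (1994-2020).
old_subs = ["documento informe acta papel", "documento informe acta memoria"]
new_subs = ["informe documento registro archivo", "registro archivo documento dato"]

# Bag-of-substitutes vectors; the paper's min_df = 0.03 is relaxed for this toy example.
vec = CountVectorizer(min_df=1, max_df=0.9)
bos = vec.fit_transform(old_subs + new_subs).toarray()
old, new = bos[:len(old_subs)], bos[len(old_subs):]

# APD: mean cosine distance from every old example to every new example.
apd = cosine_distances(old, new).mean()
print(round(apd, 3))

# Binary change: compare the APD with a threshold (0.8 in the submitted system).
print("changed" if apd > 0.8 else "stable")
```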
AID (Average Inner Distance). We calculate APDs between only the new examples (AID_1) and between only the old examples (AID_2). If AID_1 > (AID_2 − b_1), we predict that a new sense appeared; if AID_2 > (AID_1 − b_2), we predict that an old sense was lost. Thus, we assume that a difference in the average inner distances of the two sets of examples indicates a difference in the underlying sets of senses. min. We calculate an N_i × N_i matrix of pairwise distances from the old to the new examples and assume that, if some new sense appeared, then there exists a new example that is far from all old examples. Thus, if there is at least one new example whose minimal distance to the old examples is greater than some threshold, we predict that a new sense appeared. Sense loss is determined symmetrically. perc. (percentile). This is similar to the previous method, but we calculate the 5th percentile instead of the minimum, i.e. we allow at most 5% of the examples from the old corpus to be closer to an example of a new sense from the new corpus than the specified threshold. We assume that this should make the model less sensitive to noisy examples and more stable. Phase 1: Graded Change Discovery In this subtask, participants were required to rank about 4K target words according to their degree of semantic change (the higher the rank, the stronger the change). The final quality of the ranking was evaluated on 60 hidden words only, by Spearman's correlation with the gold ranks (Bolboaca and Jäntschi, 2006). Table 4 provides the results for the first phase. Our original implementation of the ranking procedure had a mistake, so the results are poor. After the competition, we fixed the mistake and obtained the correct results, which are comparable to the 3rd best participant on the leaderboard. LS_m1_2 and LS_m2_2 differ only in the number of masks in the patterns used, so by comparing their scores we can say that using two-subword substitutes is preferable to one-subword substitutes. In LS_m2_7 seven patterns are combined, compared to two patterns in LS_m1_2; this gives a significant improvement despite somewhat arbitrarily selected weights. LS_m1_7 has a slightly higher JSD,SPR score, but its COMPARE,SPR score is lower and it uses a more complex pattern combination than LS_m2_2. A more detailed investigation is presented in Appendix A. Phase 2: Binary Change Detection In this subtask the participants were asked to determine whether the target words have changed their meanings and, if so, how exactly (whether they have acquired and/or lost senses). Three F1-scores are calculated: Binary Change Detection (CH), Sense Gain Detection (GAIN), and Sense Loss Detection (LOSS). The results are presented in Table 5, where we have the 2nd best submission for the optional GAIN and LOSS subtasks. LS_m1_2 + APD and LS_m2_2 + APD have CH,F1 scores of 0.628 and 0.636, respectively, which means that using two-subword substitutes is slightly better than one-subword substitutes. In the case of LS_m1_7 + APD we already get 0.658 CH,F1, resulting in the 4th rank. Using the AID method results in good GAIN,F1 and LOSS,F1 scores (Table 6). At the same time, min and percentile show better results, but they depend strongly on the LS patterns used, i.e., in some cases these methods improve only the GAIN,F1 or only the LOSS,F1 score, but not both. Discriminative substitutes The main advantage of LS-based models is their interpretability. We can roughly understand word meanings by looking at the discriminative substitutes, i.e. the substitutes specific to a particular subset of examples. Table 7: Discriminative substitutes generated for the <mask> (y T) pattern.
The probabilities P (w|M ) and P (w|O) are shown for each substitute. Documentos is 'documents', señal is 'signal', memoria is 'memory' and canal is 'channel'. From the Table 7 we can see that disco (disc) and satélite (satellite) have acquired new senses as a data storage device and satellite television respectively. Efficiency The set of the target words proposed in Phase 1 was supposed to be a challenge for participants due to its size. For 4385 words given we have collected about 777K examples. Generation of substitutes for all examples took 13 GPU-hours and 310 GPUhours for each one-mask and two-mask pattern respectively on V100 GPUs. All other steps took incomparably less time. Conclusion We have proposed an interpretable approach to lexical semantic change detection. This approach shows the 2nd best result for sense loss and sense gain detection subtasks. It provides techniques to understand which senses were obtained or lost by a word. A Substitute analysis Our models mostly depend on the used LS patterns and ways of their combination. So it is important to make some investigations about them. In this section we study the following questions. • Which single-subword pattern gives the best results and how these results depend on the number of substitutes generate (topk)? • Is it better to use single-subword or multisubword substitutes? • Do brackets and dashes affect the results? For brevity, we will use M instead of <mask> in the pattern descriptions. In the follow-ing figures mask position describes the position of the <mask> token. For example, if the pattern is M (y T) / T (y M), mask position=left refers to the pattern M (y T), and mask position=right refers to T (y M). Finally, mask position=combination denotes the combination of these patterns with equal weights. A.1 Single-subword patterns In LS_m1_7 we use 7 patterns with different weights, which were selected after only a few experiments on the development set. In this section we study how the results depend on the patterns and try to find simpler and more intuitive ways of the substitute combination. Figures 1 and 2 show JSD,SPR and COMPARE,SPR for different patterns. It is interesting that in all cases the left patterns give better results than the right ones, except for the incluso-based patterns. Also in all cases the combination averages the results of both patterns, again except for the combination of inclusobased patterns which on the contrary improves the results. A.2 One-subword substitutes vs. two-subword substitutes We assume that using more masks should improves results because this allows to generate more diverse substitutes. Figure 3 provides comparison of patterns with different number of masks. As we suspect, using T (y MM) pattern gives a much better results than T (y M). However combination of twomask patterns results in just slightly higher score and one-mask pattern M (y T) even outperforms MM (y T). A.3 Patterns without brackets and dashes In the patterns discussed above we have extra dashes which were added by mistake and potentially could affect the results, so firstly we remove them from patterns. Also we have assumption that using brackets is not common thing in Spanish so such patterns could spoil generated substitutes and final results. To prove it we decide to compare y-based patterns with and without brackets and dashes. 
In Figures 4 and 5 we can see that, in all cases, removing the brackets and dashes improves our results considerably; in particular, the right pattern gains around 0.1 in the JSD,SPR and COMPARE,SPR scores.
4,009.6
2022-06-07T00:00:00.000
[ "Computer Science" ]
Effect of Transmission Range on Ad Hoc on Demand Distance Vector Routing Protocol The necessary background as well as the details of the simulation are presented to simulate and evaluate the performance of the ad hoc on-demand distance vector routing protocol in a mobile ad hoc network with the help of the network simulator NS2, using a common transmission range to deliver the data packets to the destination node. The number of participating nodes played an important role in predicting the conditions for the best performance of the protocol with respect to throughput, delay, packet delivery ratio, dropped packets, and the consumed and residual energy of the network. Further, efforts can be made to control the transmission range dynamically and to reduce overheads, so as to lower the energy consumption in the network and improve the lifetime of the nodes and the lifespan of the network. Introduction A mobile ad hoc network (MANET) is a collection of zero-configuration mobile nodes without any physical infrastructure or centralized computing. The participating mobile nodes are free to move and may join, leave and rejoin the network at any time without any prior information or permission [1]. The topology of a MANET is highly dynamic and unpredictable. The frequent movement of the participating mobile nodes leads to route failures, calling for route maintenance and frequent activation of the route discovery process. Besides, due consideration should be given in a MANET to reducing the transmission power and the loss of energy. This is because, on one hand, higher transmission powers cause an increase in the overheads during the transmission of data from one node to another, and, on the other hand, lower transmission powers adversely affect the participating mobile nodes by not allowing them to keep the network alive for a longer duration, thereby causing a loss of energy [1]. Over the last few years various energy management schemes employing energy-efficient routing protocols have been proposed for MANETs to minimize the utilization of the battery power of the participating mobile nodes and extend the network lifetime [1] [2]. In this paper, we have analyzed one such protocol, namely the Ad hoc On-Demand Distance Vector (AODV) routing protocol, to study the effect of a variable transmission range on various parameters, namely throughput, delay, packet delivery ratio (PDR), dropped packets, the energy consumed in transmitting data packets, and the residual energy of the participating mobile nodes of the network. Our work is based on simulation using the Random Waypoint mobility model [2]. We have investigated the role of the transmission range required from one node to another in minimizing the energy consumption, a study which, to the best of the authors' knowledge, has not been done much and reported in the literature. The paper is divided into seven sections, including the present introductory section. In Section 2, we revisit earlier work related to the present study. Section 3 is an overview of AODV. In Section 4, we present the various concepts related to network simulators, including NS2, while Section 5 is devoted to describing the simulation setup, simulation environment, and mobility model, which are subsequently used in our study of the evaluation of the performance of AODV reported in Section 6. In Section 7 the work is concluded.
Related Work In [3] is introduced the Minimum Energy Dynamic Source Routing (MEDSR) protocol for MANET in which the route discovery has been suggested both in low and high power levels.In this protocol, a higher power level is sought if three attempts of route request from one node to the next for the route discovery fail at a lower power level.However, in MEDSR protocol, the energy is conserved and the overall lifetime of the network is increased at the cost of the delay per data packet since the travel of data packets to the destination node involves a large number of hops.Thus, there is a scope for the improvement in the delay in this protocol. Narayanaswamy et al. [4] proposed Common Power (COMPOW) control in MANET.It is based on the following observation.Excessively high powers cannot be used to transmit the data packets from the source node to the destination node because of the shared medium, which also causes lot of interferences.This affects the traffic carrying capacity of the network and reduces the battery life.On the contrary if the network chooses low powers for establishing the routes then it leads to the route failure calling for the route maintenance and route discovery process to activate very frequently, which causes a loss of significant amount of energy.Therefore, the network power level must be chosen neither too high to cause excessive interference which results in a reduced ability to carry traffic, nor too low to result in a disconnected network.The technique of COMPOW control has been designed and tested only for table driven routing protocols and apart from this the technique is viable only for very dense network where the number of participating mobile nodes is very high and the covering area is small.Hiremath and Joshi [5] proposed a fuzzy adaptive transmission range and fuzzy based threshold energy for the location aided routing protocol, namely Fuzzy Adaptive Transmission Range Based Power Aware Location Aided Routing (FTRPALAR).In this protocol proposed by them, the energy of a mobile node is conserved by employing a fuzzy adaptive transmission power control depending on the minimum number of neighboring nodes to maintain the network connectivity and power aware routing based on fuzzy threshold energy.Further, the experimental results on FTRPALAR obtained by them performs better in terms of the average energy consumption and network lifetime as compared to the conventional location aided routing (LAR) protocol and the variable transmission range power aware location aided routing (VTRPALAR) protocols.The proposed FTRPALAR is able to achieve 18% more lifetimes than VTRPALAR. 
Tarique and Tape [6] proposed the Minimum Energy Dynamic Source Routing (MEDSR) and Hierarchical Minimum Energy Dynamic Source Routing (HMEDSR) protocols. The MEDSR protocol uses two different power levels during the route discovery process to identify low-energy paths. After finding the path, the transmission power levels of the nodes along the routes are adjusted link by link to the minimum required level. However, the MEDSR protocol uses flooding during the route discovery process, resulting in increased overhead in large networks and thereby affecting the routing performance severely. Although the overhead packets are not large in number, they consume a significant amount of energy. This drawback of the MEDSR protocol is alleviated in the HMEDSR protocol, which is basically a combination of the MEDSR and Hierarchical Dynamic Source Routing (HDSR) protocols, the latter reducing the overhead while the former saves energy in the transmission of data packets [6]. Overview of AODV Routing Protocol The AODV protocol, which comes under the purview of reactive routing protocols, is of the on-demand type in the sense that the route between two nodes is discovered only when it is needed. Such protocols are designed to be minimally overburdened, since they maintain information only for those routes which are active [7]. It means that, in the process of route discovery and route maintenance, routes are discovered and maintained only for the nodes that send a request to a specific destination. The various issues related to the AODV protocol are discussed in this section, except for the simulation parameters for analyzing the performance of AODV, which are taken up separately in Section 5. Route Discovery in AODV The basic approach of the route discovery process in an on-demand routing protocol is to establish the route by broadcasting a route request message in the network. The destination node, on receiving a route request message, replies by sending a route reply message back to the source. The route reply message carries back to the source node the route traversed by the route request message received at the destination node [7]. In this process, when a participating node of a network wishes to send a data packet to some destination node, the source node checks its routing table to determine whether it has a current route to that destination node [7]. If a route to the destination node is available, then it forwards the packet to the appropriate next hop towards the destination node. However, if the participating mobile node does not have a valid route to the destination node, then the node must initiate the route discovery mechanism. To begin this route discovery process, the node creates a RREQ packet and broadcasts the route request packet at a low power level. Such a packet contains the source node's IP address and current sequence number as well as the destination's IP address and its last known sequence number. The RREQ packet also contains a broadcast ID, which is incremented each time the source node initiates a RREQ. In this way, the broadcast ID and the IP address of the source node form a unique identifier for the RREQ. After creating the RREQ, the source node broadcasts the packet and then sets a timer to wait for a reply. When a node receives a RREQ, it first checks whether it has seen it before by noting the source IP address and broadcast ID pair. Each node maintains a record of the source IP address/broadcast ID pair for each RREQ it receives, for a specified length of
time. If it has already seen a RREQ with the same IP address/broadcast ID pair, it silently discards the packet. Otherwise, it records this information and then processes the packet [7]. Further, in order to process the RREQ, the node sets up a reverse route entry for the source node in its route table. This reverse route entry contains the source node's IP address and sequence number as well as the number of hops to the source node and the IP address of the neighbor from which the RREQ was received. In this way, the node knows how to forward a RREP to the source if one is received later [7]. Figure 1 indicates the propagation of RREQs across the network as well as the formation of the reverse route entries at each of the network nodes. Moreover, a lifetime is associated with the reverse route. If this route entry is not used within the specified lifetime, the route information is deleted to prevent old routing information from lingering in the route table [7]. Propagation of Route Request To respond to the RREQ, a node must have an unexpired entry for the destination in its route table. Furthermore, the sequence number associated with the destination must be at least as great as that indicated in the RREQ. This prevents the formation of routing loops by ensuring that the route returned is never old enough to point to a previous intermediate node, since otherwise the previous node would have responded to the RREQ [7] [8]. If the node is able to satisfy these two requirements, it responds by unicasting a RREP back to the source, as described in the next section. If it is unable to satisfy the RREQ, it increments the RREQ hop count and then broadcasts the packet to its neighbors. Naturally, the destination node is always able to respond to the RREQ. If the route request packet is lost, the source node is allowed to retry the broadcast route discovery mechanism. After RREQ-retries additional attempts, it is required to notify the application that the destination is unreachable [7] [8]. Forward Path Setup in AODV When a node determines that it has a sufficiently current route to respond to the RREQ, it creates a RREP [7]. For the purpose of replying to a RREQ, any route with a sequence number not smaller than that indicated in the RREQ is deemed sufficiently current. The RREP sent in response to the RREQ contains the IP addresses of both the source and the destination. If the destination node itself is responding, it places its current sequence number in the packet, initializes the hop count to zero, and places the length of time for which this route is valid in the RREP's lifetime field. However, if an intermediate node is responding, it places its record of the destination's sequence number in the packet, sets the hop count equal to its distance from the destination, and calculates the amount of time for which its route table entry for the destination will still be valid. It then unicasts the RREP towards the source node, using the node from which it received the RREQ as the next hop [7] [8].
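A compact, non-normative sketch of the RREQ handling just described: duplicate suppression via the source IP/broadcast ID pair, reverse-route creation, and the freshness test that decides between answering with a RREP and rebroadcasting. The data structures, field names, and lifetime value are simplified assumptions, not the actual AODV message formats.

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: str
    hop_count: int
    seq_no: int
    lifetime: float   # entry is purged when the lifetime expires (not modelled here)

class Node:
    def __init__(self, addr: str):
        self.addr = addr
        self.seen = set()   # (source IP, broadcast ID) pairs already processed
        self.routes = {}    # destination IP -> RouteEntry

    def handle_rreq(self, src, src_seq, bcast_id, dst, dst_seq, hop_count, prev_hop):
        # Duplicate RREQs are silently discarded.
        if (src, bcast_id) in self.seen:
            return "discard duplicate"
        self.seen.add((src, bcast_id))

        # Reverse route back to the source, via the neighbour the RREQ arrived from.
        self.routes[src] = RouteEntry(prev_hop, hop_count + 1, src_seq, lifetime=10.0)

        route = self.routes.get(dst)
        if self.addr == dst or (route is not None and route.seq_no >= dst_seq):
            # Destination itself, or a route at least as fresh as the sequence
            # number in the RREQ: answer with a RREP along the reverse path.
            return f"unicast RREP to {src} via {prev_hop}"
        # Otherwise increment the hop count and keep flooding the RREQ.
        return f"rebroadcast RREQ (hop_count={hop_count + 1})"
```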
When the intermediate node receives the RREP, it sets a forward path entry to the destination in its route table.This forward path entry contains the IP address of the destination, the IP address of the neighbor from which the RREP had arrived, and the hop count, or the distance, to the destination.To obtain its distance to the destiantion, the node increaments the value in the hop count field by 1.Also associated with this entry is a lifetime, which is set to the lifetime contained in the RREP.Each time the route is used, its associated lifetime is updated.If the route is not used with in the specified lifetime, it is deleted.After processing the RREP, the node forwards it towards the source [7] [8]. Figure 2 indicates the path of the RREP from the destination to the source node. Route Discovery from Source to Destination It is likely that a node will receive a RREP for a given destination from more than one neighbor [7] [8].In this case, it forwards the first RREP it receives and forwards a later RREP only if the RREP contains a greater destination sequence number or a smaller hop count.Otherwise, the node discards the packet.This decreases the number of RREPs propogating towards the source while ensuring the most up-to-date and quickest routing information.The source node can begin data transmission as soon as the first RREP is received and can later updates its routing information if it dicovers a better route [7] [8]. Simulation Details In this section we have decribed the network simulator NS-2, the execution process, the process to genrate the movement, the process of trafic genration, the mobility model as well as the simulation paramenters used in our simualtion. The simulation technology as applied to the networking areas like network traffic simulation is relatively new [9].The computer assisted simulation technologies can be applied in the simulation of networking algorithms or systems by using software engineering.The application field is smaller than that in general field of simulation and it could be natural that more specific and desirable requirements will be placed on network simulations in future [9].For example, the network simulations can have more emphasis on the validity and performance of a distributed protocol or algorithm than on the visual or real-time visibility features of simulations.Moreover, one has to keep pace with the rapidly developing network technologies running on different software over the Internet with the involvement of many different organizations contributing to the whole process.That is why the network simulation always requires open platforms or software which should be scalable and enough to include different packages in the simulations of the whole network.Internet has also a main characteristic that it is structured with a uniformed network stack (TCP/IP) that has all the different layers of technologies which can be implemented in different ways while having uniform interfaces with their neighbored hops and layers [9] [10].Thus, the network simulation tools must be able to incorporate these features and allow different future aspects and new packages to be included and run transparently without any harm to and with either no impact or at least no negative impact on the existing components or packages [9] [10].Network simulators are mainly used by people from different backgrounds and areas like industrial developers, academic researchers and quality assurance (QA) for designing, simulating, verifying, and analyzing the performance of 
different networks protocols.Network simulators can also be used to evaluate and analyze the effect of the different parameters on the protocols being studied for network scenarios.Generally a network simulator will contain a wide range of networking protocols and technologies that help users to build complex networks from vary basic building blocks for example clusters of nodes and links.With the help of network simulators, one can design and propose different network topologies with the help of various types of nodes like end-hosts, hops, network bridges, routers, and mobile units [10].The present section is thus of relevance to the simulation based study taken up in this paper on AODV to know effect of the variable transmission range on it as mentioned in Section 1. Network Simulator NS-2 The network simulator NS-2, which provides the discrete event simulation with its implementation initiated as early as 1989 with the development of the real network simulator [9], is very flexible and capable of supporting the simulation of different types of routing algorithms of MANET, TCP and also multicast protocols over wired and wireless networks [9].Initially, it was designed and developed for the simulation of wired technology only but later the Monarch Group of the department of computer science at the University of Rice developed the necessary tools and applications to include in the simulator for the wireless and mobile hosts [9].In NS-2, the simulations are written in C++ with an OTcl API (see Figure 3). The user creates a text file in OTcl which describes the layout of the whole network as well as the events to be occurred such as transferring data or node movement application.This OTcl file (.tcl) is executed and a detailed trace file (.tr) is generated which can be filtered with a pattern matching program (such as "grep" or "awk") and inspected by hand, or fed into a visualization tool [9].Some visualization tools are also available with NS-2, one of which is the Network Animator (NAM).NAM is an animation tool for viewing network simulation traces in graphical form.It supports topology layout and has various data inspection tools.NS-2 is suitable for simulating MANET because it has accurate implementations of the IEEE 802.11 standard, a TCP/IP stack and a wide range of routing protocols implemented for NS-2 [9]. Node Movement Generation for Wireless Scenarios in NS-2 We can define the node movement in separate files called as scenario file in NS with the help of node movement generation tool available in NS-2.This scenario file is generated with help of this tool which is available on the "ns-2/indep-utils/cmu-scen-gen/setdest/" location.This scenario file is used to store the information about the initial position of the nodes with their movement details, speed, etc. at various points of time.Generally, since it is very difficult to provide the initial position of the participating mobile nodes manually, movement of the nodes and their speed for each movement at different times we use a random file generator.We can run this tool with following command [9]: . 
Random Traffic Generation for Wireless Scenarios in NS-2

In NS-2 we can set up random TCP and CBR traffic connections between mobile nodes using a traffic-scenario generator script. This script, called cbrgen.tcl, is available at the ~ns/indep-utils/cmu-scen-gen location and is used to generate CBR and TCP traffic connections between mobile nodes. For this purpose we create a traffic-connection file, in which we define parameters such as the type of traffic connection (CBR or TCP), the maximum number of connections to be set up between the nodes, the number of nodes, a random seed and, for CBR connections, a rate. The inverse of the rate is used to compute the interval time between packets [9]. We can generate this file by running ns cbrgen.tcl; the start times for the connections are generated randomly with a maximum value of 180.0 s. For example, to create a CBR connection file for 10 nodes, with a maximum of 8 connections, a seed value of 1.0 and a rate of 4.0, we use the following command: ns cbrgen.tcl -type cbr -nn 10 -seed 1.0 -mc 8 -rate 4.0 > cbr-10-test. This generates a random traffic pattern with the specified values.

Tool Command Language (Tcl)

Two languages are used in NS-2: C++ and OTcl (an object-oriented extension of Tcl) [9]. The compiled C++ hierarchy makes the simulation efficient and execution times faster. The simulation results produced after running the scripts can be used either for simulation analysis or as input to NAM. Tcl (Tool Command Language) is a powerful interpreted programming language developed by John Ousterhout at the University of California, Berkeley [9]. Tcl is dynamic, truly cross-platform, easily deployed and highly extensible. Its most significant advantage is that it is fully compatible with the C programming language, and Tcl libraries can be incorporated directly into C programs.

NAM

NAM, which provides a graphical representation of the simulation, was designed and developed in 1990 as a simple tool for animating packet trace data [9]. This trace data is typically derived as the output of a network simulator like NS or from real network measurements, e.g., using tcpdump. Steven McCanne wrote the original version as a member of the Network Research Group at the Lawrence Berkeley National Laboratory, and has occasionally improved the design [9]. Marylou Orayani improved it further and used it for her Master's research over summer 1995 and into spring 1996 [9]. The NAM development effort was an ongoing collaboration with the Virtual InterNetwork Testbed (VINT) project. Currently, it is being developed at ISI by the Simulation Augmented by Measurement and Analysis for Networks (SAMAN) and CONSER projects [9].

Trace File

The trace file is an ASCII file in which each trace line is organized into 12 fields, as shown in Figure 4.
The first field is the event type, given by one of four symbols, r, +, − and d, which correspond respectively to received, enqueued, dequeued and dropped. The second field gives the time at which the event occurs. The third and fourth fields are the input and output nodes of the link at which the event takes place. The fifth is the packet type, such as Constant Bit Rate (CBR) or Transmission Control Protocol (TCP). The sixth is the size of the packet and the seventh contains the flags. The eighth field is the flow identifier, as in IPv6, which can specify the stream color in the NAM display and can be used for further analysis purposes. The ninth and tenth fields are respectively the source and destination addresses in the form "node.port". The eleventh is the network layer protocol's packet sequence number; NS keeps track of UDP packet sequence numbers for analysis purposes. The twelfth, and last, field is the unique identity of the packet. The results of the simulation are stored in a trace file (*.tr). Trace Graph was used to analyze the trace file.

Simulation Setup

Let us now consider the simulation setup as relevant to the ultimate goal of our research work, which is to evaluate the dependence of AODV on various parameters and then develop a new technique for the optimum utilization of transmission power when transmitting data packets from the source to the destination node successfully and in a timely manner, so as to increase the lifetime of the batteries of the participating mobile nodes and that of the network as a whole. The four parameters which we have chosen for our study are: 1) the density of the network; 2) the speed of the participating mobile nodes; 3) the transmission power of the participating mobile nodes; and 4) the data traffic pattern. In the simulation, the participating mobile nodes move according to the Random Waypoint model, which was proposed by Johnson and Maltz [11]. This mobility model is now widely used because of its simplicity and wide availability. To generate the node trace of the Random Waypoint model, the setdest tool from the CMU Monarch group may be used; this tool is included in the widely used network simulator NS-2 [11]. In the NS-2 distribution, every participating mobile node randomly selects one location in the simulation field as its destination. The node then moves towards the destination with a constant velocity chosen uniformly at random from (0, Vmax), where the parameter Vmax is the maximum allowable velocity for every participating mobile node of the network [12]. After reaching the destination, the node stops for a duration defined by the pause time. If the pause time Tpause = 0, the participating mobile node does not stop and keeps moving continuously. After this pause time, it once again chooses another random destination in the simulation field and moves towards it. The same process is repeated again and again until the simulation ends.
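As a quick numerical check of the average-speed statement discussed in the next paragraph (the snippet is illustrative and not part of the original study): with zero pause time and speeds drawn uniformly from (0, Vmax), the mean of the chosen speeds approaches Vmax/2.

import random

def mean_chosen_speed(v_max, samples=100000):
    # Mean of speeds drawn uniformly from (0, v_max); approaches v_max / 2.
    return sum(random.uniform(0.0, v_max) for _ in range(samples)) / samples

print(mean_chosen_speed(10.0))   # approximately 5.0, i.e. 0.5 * v_max

Note that this is the ensemble average of the chosen speeds, not a time average over a node trajectory; the next paragraph discusses why the average speed alone is in any case not a fully satisfactory mobility metric.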
In the Random Waypoint mobility model, Vmax and Tpause are the two key parameters that determine the mobility behavior of nodes. If Vmax is small and Tpause is long, the topology of the ad hoc network becomes relatively stable. On the other hand, if the nodes move fast (i.e., Vmax is large) and Tpause is small, the topology is expected to be highly dynamic. By varying these two parameters, especially Vmax, the Random Waypoint model can generate mobility scenarios with different levels of nodal speed. Therefore, it seems necessary to quantify the nodal speed. Intuitively, one such notion is the average node speed. If we assume that the pause time Tpause = 0 and consider that the speed is chosen uniformly at random from (0, Vmax), we can easily find that the average nodal speed is 0.5 Vmax [13] [14]. However, in general, the pause time parameter should not be ignored. In addition, it is the relative speed of two nodes, rather than their individual speeds, that determines whether the link between them breaks or forms. Thus, the average node speed does not seem to be the appropriate metric to represent the notion of nodal speed.

Mobility Model

During the simulation each node starts moving from its initial position in a random direction with a random speed. The speed is uniformly distributed between 0 and the maximum speed. When a moving node reaches the boundary of the given area, it waits for the pause time (which is 0 in our case) and then once again starts to move in a random direction and with a random speed. All of the traffic sources used in our simulation generated Constant Bit Rate (CBR) data traffic. The traffic structure was defined by varying two factors: 1) the sending rate and 2) the packet size.

Simulation Environment

As the basic scenario we considered a MANET with 10, 20, 30, 40 and 50 mobile nodes spread randomly over an area of 1000 m by 1000 m. The nodes moved with a maximum speed of 2 m/s and a pause time of 0 s. A total of 1 traffic source generated CBR data traffic with a sending rate of 4 packets/s, using a packet size of 512 bytes. Each simulation had a duration of 500 simulated seconds. Because the performance of the simulations is highly related to the mobility models, the results shown in Section 6 represent an average of three different executions of the simulation using the same traffic models but with different randomly generated mobility scenarios (Table 1). We evaluate the following performance indexes: 1) total energy consumed (in Joules); 2) energy consumed depending on the operation (transmissions (Tx) and receptions (Rx)); and 3) energy consumed depending on the packet type (MAC, CBR and routing).

Result and Discussion

To evaluate the performance of conventional AODV, we created a simulation scenario in which we placed 10, 20, 30, 40 and 50 nodes randomly in an area of 1000 m × 1000 m. All the participating mobile nodes of the network were assigned 100 J of energy and the simulation time was set to 500 s. Initially the simulation was started with 10 nodes, and a transmission range of 100 m was assigned to each node for transmitting data packets to the destination node. After the completion of the simulation, the throughput was calculated for this scenario. Different scenarios were then created by increasing the density of the participating mobile nodes in the network in order to analyze the performance of the AODV routing protocol in a MANET.
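The throughput, delay, packet delivery ratio and dropped-packet statistics reported in the following subsections are obtained by post-processing the NS-2 trace files. A minimal Python sketch of such post-processing is given below. It assumes the 12-field trace layout described earlier (event, time, input node, output node, packet type, size, flags, flow id, source, destination, sequence number, packet id), treats the first enqueue of a packet as its send time and the last receive as its delivery, and is illustrative only; field positions may need adjusting for other trace formats, and this is not the Trace Graph tool used in the study.

def analyse_trace(path, packet_type="cbr"):
    # Field indices below follow the 12-field layout described above (0-indexed).
    sent, recv_time, size_of, dropped = {}, {}, {}, 0
    with open(path) as trace:
        for line in trace:
            f = line.split()
            if len(f) < 12 or f[4] != packet_type:
                continue
            event, time, size, pkt_id = f[0], float(f[1]), int(f[5]), f[11]
            if event == "+" and pkt_id not in sent:
                sent[pkt_id] = time              # first enqueue taken as send time
            elif event == "r":
                recv_time[pkt_id] = time         # last receive taken as delivery
                size_of[pkt_id] = size
            elif event == "d":
                dropped += 1                     # drop events
    delivered = [p for p in recv_time if p in sent]
    pdr = len(delivered) / len(sent) if sent else 0.0
    avg_delay = (sum(recv_time[p] - sent[p] for p in delivered) / len(delivered)
                 if delivered else 0.0)
    duration = (max(recv_time.values()) - min(sent.values())) if delivered else 1.0
    throughput_bps = 8.0 * sum(size_of[p] for p in delivered) / duration
    return pdr, avg_delay, throughput_bps, dropped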
Throughput versus Transmission Range

The throughput for 10 nodes is found to be the lowest in the throughput versus transmission range characteristics (Figure 5). Further, as we increase the density of the participating mobile nodes in the network without changing the area of the network, the throughput increases with the increase of the transmission range (Figure 5) owing to the high density of the participating mobile nodes in the network. Because of the dense network, most of the participating mobile nodes are near and within the transmission range of each other, so route failures and route maintenance are much less frequent and the throughput is very high.

Delay versus Transmission Range

Various network scenarios were generated to evaluate the performance of the participating mobile nodes and networks in terms of the delay with respect to the transmission range. For 10 mobile nodes the delay is much higher compared to the other scenarios, and it decreases gradually as the node density and the transmission range are increased within the same network area (Figure 6). Further, the delay is the least for 30 nodes, showing consistency (Figure 6). However, in the case of 50 nodes in the same geographical area, at some points the delay is found to be higher than for node densities of 30 and 40, while it is lower than for node densities of 10 and 20 (Figure 6). Thus, one has to choose an optimum value of the number of nodes to minimize the delay. As we increase the node density as well as the transmission power of the participating mobile nodes, the distance between the participating nodes decreases; they become closer to each other and fall within the transmission range of each other (Figure 6). Due to the high density of the network, overhearing among the participating mobile nodes increases, and the overheads and congestion increase many-fold, which adversely affects the bandwidth utilization as well as the availability of congestion-free routes for transmitting data from one node to another in the network. Because of this, the delay increases during the transmission of data among the participating mobile nodes of the network.

Packet Delivery Ratio versus Transmission Range

Various network scenarios were generated to evaluate the performance of the participating mobile nodes and networks in terms of the packet delivery ratio (PDR), defined as the ratio of the number of data packets delivered at the destination to the number generated by the source node, with respect to the transmission range. In Figure 7 the increase in PDR with the increase in the number of nodes can be seen.
However, beyond a threshold number of nodes, no further increase in PDR is realized (Figure 7). Further, the PDR is lowest when the density of the participating mobile nodes is lowest, and the packet delivery ratio increases with the density of the participating mobile nodes in the network (Figure 7). The overheads in the network increase with the density of the network because the nodes come closer to each other; this adversely affects the network in terms of throughput and delay. However, the PDR is affected to a lesser extent, increasing only slightly, unless the density of nodes is increased to a much larger extent, the terrain remaining the same, at which point the PDR also gets adversely affected. Here, in the given scenario, the speed of the participating mobile nodes is constant and very low; because of this low speed the PDR increases with the increase in density and transmission range of the participating mobile nodes of the network.

Drop Packet versus Transmission Range

The number of dropped packets is found to increase with the decrease in the density of the participating mobile nodes with respect to the transmission range (Figure 8). Further, with the increase in the transmission range of the participating mobile nodes, the number of dropped packets increases (Figure 8). Furthermore, the number of dropped packets is found to be the least when the density of the participating mobile nodes is 30. However, in the same 30-node scenario, the dropped packets increase with the increase in the transmission range (Figure 8), which is attributable to the congestion caused by overhearing. Further, the dropped packets increase slightly for 40 and 50 nodes, since a high density of participating nodes increases the overheads and also the congestion on the routes.

Consumed Energy versus Transmission Range

The energy consumption increases with the increase in the number of nodes (Figure 9). Thus, we can increase the packet delivery ratio and decrease the number of dropped packets by increasing the transmission power of the participating mobile nodes, but at the cost of high energy consumption. Because of the very high energy consumption, the lifespan of the network is very short, and it may happen that the participating mobile nodes cannot actively participate in the transmission of data packets from the source node to the destination node. Due consideration needs to be given to choosing the number of participating mobile nodes in the network and the transmission power since, when these quantities are very small, the energy consumed to transmit a data packet from the source node to the destination node is also very small (Figure 9).
Residual Energy versus Transmission Range

When the transmission range of the participating mobile nodes is increased, the lifetime of the less dense network increases: we found that the amount of residual energy of the network is higher, i.e., the lifetime of the whole network increases and more data can be transferred from one node to another (Figure 10). However, if we increase the number of nodes in the network as well as the transmission range of the participating mobile nodes, then the lifetime of the participating mobile nodes is adversely affected, remembering that the overheads increase because of the high density and transmission range of the participating mobile nodes (Figure 10). This also means that a very large amount of energy is consumed by the participating mobile nodes and less residual energy is left for transmitting data packets in the network, which adversely affects the lifetime of the participating mobile nodes and hence also that of the network as a whole.

Conclusion

We have investigated, by simulation, the effect of varying the transmission range on the throughput, delay, packet delivery ratio, dropped packets, and consumed and residual energy of the AODV routing protocol in a MANET, taking the number of participating nodes as the parameter. All the necessary background and details of the simulation have been provided. The study shows that one can achieve higher values of throughput by increasing the number of participating nodes. However, due care should be taken to optimize the number of participating nodes in order to minimize the delay, while also noting that a larger delay is caused by overhearing and congestion. Further, the PDR can be increased, and the dropped packets, which increase with the increase in the transmission range, can be decreased, by increasing the number of nodes. One has to compromise on the energy consumption, which increases with the increase in the number of nodes, to obtain the best performance in terms of throughput, delay, PDR and dropped packets. This is because a considerable amount of energy is consumed, and less residual energy is left for the participating mobile nodes to transmit data packets from the source to the destination node successfully, thereby adversely affecting the lifetime of the participating mobile nodes and also the lifespan of the whole network. Further, there is scope to control the transmission range dynamically and reduce the overheads in order to reduce the energy consumption.

Figure 4. Fields of the trace file.
8,244.6
2016-02-15T00:00:00.000
[ "Computer Science", "Engineering" ]
Probing the top-Higgs coupling through the secondary lepton distributions in the associated production of the top-quark pair and Higgs boson at the LHC We complement the analysis of the anomalous top-Higgs coupling effects on the secondary lepton distributions in the associated production of the top-quark pair and Higgs boson in proton-proton collisions at the LHC of the former work by one of the present authors by taking into account the quark-antiquark production mechanism. We also present simple arguments which explain why the effects of the scalar and pseudoscalar anomalous couplings on the unpolarized cross section of the process are completely insensitive to the sign of either of them. Introduction Determination of the coupling of the recently discovered Higgs boson [1] to the top quark currently belongs to one of the most challenging tasks of the high energy experimental physics. Measurement of the associated production of the top quark pair and Higgs boson in the clean experimental environment of e + e − collisions was considered in this context already more than two decades ago [2], [3], but different projects of the high energy e + e − collider [4]- [11], despite some of them being more or less intensively discussed for years, are still at a rather early stage of TDR. However, if the LHC performance in next runs is as excellent as it was in run 1 we may expect that the process the search for which, based on run 1 data, were already reported by both the CMS [12] and ATLAS [13] collaborations, will be measured quite precisely. This is why in the past few years the associated production of the top quark pair and Higgs boson has invoked quite some interest also from a theoretical side, see, e.g., [14]- [21]. It was shown in Ref. [15] that the distributions in rapidity and angles of the secondary lepton that can be produced in the decay oft-quark of process (1) are quite sensitive to modifications of the top-Higgs coupling. Actually, only the gluon fusion mechanism of ttH production, which is dominant at the LHC energies, and one specific decay channel: t → bW + → bud, t →bW − →bµ −ν µ and h → bb, were taken into account in Ref. [15], i.e., the following hard parton scattering processes was considered. There are 67 300 Feynman diagrams of process (2) already in the leading order (LO) of the standard model (SM) in the unitary gauge, if the Cabibbo-Kobayashi-Maskawa mixing and masses smaller than the b-quark mass m b are neglected. At the same time there are only 32 Feynman diagrams which contribute to the signal cross section of ttH production, two of which are shown in Fig. 1. The remaining 30 signal diagrams are obtained from those depicted by attaching the Higgs boson line of Hbb-vertex to the other t-ort-quark line, or interchanging the b andb quarks in Figs. 1(a) and 1(b) and interchanging the two gluons in Fig. 1(b). The diagrams with the Higgs boson line of Hbb-vertex attached to either the b-orb-quark line are not counted here, as their contribution to the ttH production signal is suppressed by the mass ratio m b /m t . The effects caused by modifications of the scalar and pseudoscalar couplings of the Higgs boson to top quark were clearly visible in the ttH production signal cross section, but they were to large degree obscured by the interference of the ttH production signal diagrams with the diagrams of irreducible off resonance background. 
In the present work, we complement the analysis of the influence of the anomalous Higgs boson coupling to top quark on the secondary lepton distributions in the process of associated production of the top quark pair and Higgs boson in proton-proton collisions at the LHC of Ref. [15] by taking into account the quark-antiquark annihilation hard scattering processes with the same final state as that of process (2): Figure 1: Feynman diagrams of ttH production in process (2). Blobs indicate the Higgs-top coupling. with q = u, d. To be more specific, we take into account uū-,ūu-, dd-anddd-scattering processes. Under the same assumptions as those made above for process (2), there are 78 068 Feynman diagrams in the LO of SM for each of the qq-scattering processes considered. However, only 24 of them contribute to the signal of the ttH production. Examples of the signal diagrams of the process of uū-scattering to the final state of process (3) are shown in Fig. 2. The other signal diagrams can be obtained by attaching the Higgs boson line of the Hbb-vertex ū u gb tν to the other t-ort-quark line or interchanging the b-andb-quark lines in the diagrams of Fig. 2. Let us note that another 24 diagrams which contain the Feynman propagators of the t-, t-quark and the Higgs boson at a time can be obtained from the signal diagrams just described by the exchange of the u-quark lines between the initial and final state. However, they are not treated as the signal diagrams here, because they contain the gluon, Z 0 or photon propagator in the t-or u-channel and their contribution to the signal cross section is negligible anyway, which has been checked by direct computation. The rest of the article is organized in the following way. The possible effect of the anomalous top-Higgs coupling on the unpolarized cross section of the process of ttH production at the LHC are analyzed in Section 2, our results are presented in Section 3 and, finally, some concluding remarks are contained in Section 4. Effects of the anomalous top-Higgs coupling The most general top-Higgs coupling is given by the following Lagrangian [22]: where GeV, is the top-Higgs Yukawa coupling and the real couplings f and f ′ describe, respectively, the scalar and pseudoscalar departures from the purely scalar top-Higgs Yukawa coupling of SM, which is reproduced for f = 1 and f ′ = 0. The allowed regions of the (f, f ′ ) plane, according to the analysis of Ref. [16] performed at the 68 and 95% confidence level, are plotted in Fig. 1 of Ref. [17]. They are derived from the constraints on the Hgg and Hγγ couplings from the Higgs boson production and its decay into γγ, which among others involve assumptions on the Higgs boson couplings to other fermions and bosons, and hence are model dependent. Therefore, we will not stick to them in the next section, where we will illustrate the effects of f ′ on the process of associated production of the top quark pair and Higgs boson from which the direct constraints on f and f ′ can be derived. 
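For reference, in the parametrization commonly used for this coupling, Lagrangian (4) can be written as (a reconstruction consistent with the SM limit f = 1, f' = 0 quoted above; the exact normalization convention should be taken from Ref. [22])

$$\mathcal{L}_{t\bar{t}H} \;=\; -\,\frac{m_t}{v}\,\bar{t}\,\bigl(f + i f'\gamma_5\bigr)\,t\,h, \qquad v \simeq 246\ \mathrm{GeV},$$

so that the scalar part of the coupling is rescaled by f and the pseudoscalar part by f'. The signal amplitudes of the two diagrams of Fig. 2, referred to as Eqs. (5) and (6) below, are built from this vertex together with the gluon exchange, the t- and t̄-quark propagators and the Hbb̄ vertex.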
where h is a scalar representing a product of the Higgs boson propagator carrying the four momentum q with the Hbb-vertex, u (v) is the Dirac spinor representing the off-shell t-quark (t-quark) of the four momentum p t (pt) that decays into the b-quark (b-quark) and off-shell W + (W − )-boson, ε is a polarization four vector representing the gluon propagator contracted with the uūg-vertex, g s is the strong coupling constant and is a complex mass parameter that replaces the mass m t in the top quark propagator in order to regularize the pole arising if its denominator approaches zero. After some simple algebra Eqs. (5) and (6) can be written in the following form: Now, let us note that, as in the process of ttH production in e + e − collisions that was considered in Ref. [14], the dominant contribution to the cross section comes from the phase space region, where both the t-quark andt-quark are close to their mass shells and hence the off-shell spinors u and v should satisfy the following approximate equations: Using Eqs. (9) in (7) and (10) in (8), and neglecting terms ∼ Γ t in the numerators, we get the following approximate expressions for the amplitudes: and for a sum of the two: In order to calculate the sum over polarizations of the squared module of the matrix element pol. |M a | 2 , we take into account the approximate completeness relations for the spinors u and v: pol. and note that the off-shell polarization four vectors ε are real, as they are defined in the following way: where the helicity spinors v( p 1 , λ 1 ) and u( p 2 , λ 2 ) of, respectively, theū-and u-quark in initial state, which are calculated according to Eqs. (5) and (6) of Ref. [25], are real if the momenta p 1 and p 2 are antiparallel. Thus pol. More simplified analytic form of Eq. (16) is irrelevant, as the calculation of the cross section will be performed numerically anyway, but let us note that only the terms on the r.h.s. of Eq. (16) that contain γ 5 may be proportional to the product f f ′ . However, if we use the relation ε /p t / ε / = −ε 2 p t / + 2(p t · ε)ε / in the second and third term and the relation q /p t / q / = −q 2 p t / + 2(p t · q)q / in the second term, and then use the relation in the last term on the r.h.s. of Eq. (16), we see that the dependence on f f ′ , and thus a sensitivity to the sign of either f or f ′ , disappears in the unpolarized cross section of the hard scattering process uū → budbµ −ν µ bb. Let us note, that the same arguments can be easily repeated for the amplitudes of the Feynman diagrams of Fig. 1, which dominate the ttH production through the gluon fusion process (2). We would like to stress here that all the above approximations are used for the sake of the argument in this section only and are not used to obtain the full results presented in Section 3. Results The calculation is performed in the framework of the SM, supplemented with the top-Higgs coupling derived from Lagrangian (4), with the use of carlomat [23], a general purpose program for the MC computation of the lowest order cross sections. The differential cross section of the process pp → budbµ −ν µ bb (18) is calculated with the use of the following factorization formula in the sum. We use MSTW LO parton distribution functions [24] at the factorization scale Q = m 2 t + j p 2 T j , where p T j is the transverse momentum of the final state quark or antiquark of process (18). The calculation is performed separately for the gluon fusion (2) and each of the quark-antiquark hard scattering processes (3). 
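The factorization formula referred to above has the standard leading-order form (written here generically; the precise notation and the list of parton flavours should be taken from the original)

$$\mathrm{d}\sigma_{pp} \;=\; \sum_{a,b}\int_0^1\!\mathrm{d}x_1\int_0^1\!\mathrm{d}x_2\; f_a(x_1,Q)\,f_b(x_2,Q)\;\mathrm{d}\hat{\sigma}_{ab}(x_1,x_2),$$

with the factorization scale, as quoted above, $Q=\sqrt{m_t^2+\sum_j p_{Tj}^2}$, where the sum runs over the final-state quarks and antiquarks of process (18), and the partonic cross sections $\mathrm{d}\hat{\sigma}_{ab}$ are those of the gluon fusion process (2) and the quark-antiquark processes (3).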
We use the same physical input parameters and cuts (3.2)-(3.7), with the cut value m_{bb}^{cut} = 20 GeV in (3.7), as in Ref. [15], and three different combinations of the scalar and pseudoscalar couplings of Lagrangian (4). Let us note that, in order to calculate the total cross section of process (18), a 20-fold phase space integral and a 2-fold integral over parton density functions must be performed, not to mention the additional 9-fold Monte Carlo (MC) integral that replaces the sum over particle helicities, without which the computation would not have been feasible in practice. The differential cross sections of process (18) at the proton-proton center of mass energy of

Conclusions

We have complemented the analysis of Ref. [15] of the influence of the anomalous Higgs boson coupling to the top quark on the secondary lepton distributions in the associated production of the top quark pair and Higgs boson in proton-proton collisions at the LHC by taking into account the contributions of the quark-antiquark annihilation hard scattering processes. Although the gluon fusion mechanism dominates ttH production through process (18) at √s = 14 TeV, the contribution of the quark-antiquark hard scattering processes (3) is quite substantial and should therefore be taken into account in analyses of the data. Moreover, we have explained why the effects of the scalar and pseudoscalar anomalous couplings on the unpolarized cross section of the process are completely insensitive to the sign of either of them.
2,835
2015-07-06T00:00:00.000
[ "Physics" ]
Phase-matching-free parametric oscillators based on two-dimensional semiconductors Optical parametric oscillators are widely used as pulsed and continuous-wave tunable sources for innumerable applications, such as quantum technologies, imaging, and biophysics. A key drawback is material dispersion, which imposes a phase-matching condition that generally entails a complex design and setup, thus hindering tunability and miniaturization. Here we show that the burden of phase-matching is surprisingly absent in parametric micro-resonators utilizing mono-layer transition-metal dichalcogenides as quadratic nonlinear materials. By the exact solution of nonlinear Maxwell equations and first-principle calculations of the semiconductor nonlinear response, we devise a novel kind of phase-matching-free miniaturized parametric oscillator operating at conventional pump intensities. We find that different two-dimensional semiconductors yield degenerate and non-degenerate emission at various spectral regions due to doubly resonant mode excitation, which can be tuned by varying the incidence angle of the external pump laser. In addition, we show that high-frequency electrical modulation can be achieved by doping via electrical gating, which can be used to efficiently shift the threshold for parametric oscillation. Our results pave the way for the realization of novel ultra-fast tunable micron-sized sources of entangled photons—a key device underpinning any quantum protocol. Highly miniaturized optical parametric oscillators may also be employed in lab-on-chip technologies for biophysics, detection of environmental pollution and security. Since three-wave parametric coupling is intrinsically weak, one can only achieve low oscillation thresholds by using doubly or triply resonant optical cavities. In addition, parametric effects are severely hampered by the destructive interference among the three waves propagating with different wavenumbers k 1,2,3 in the dispersive nonlinear medium because of a generally non-vanishing wavevector mismatch Δk = k 3 − k 2 −k 1 (see Fig. 1a). To avoid this highly detrimental effect, the use of phasematching (PM) strategies is imperative, i.e., following the standard nonlinear optics terminology, fulfillment of momentum conservation Δk = 0 to prevent destructive interference. The commonly adopted birefringence-PM method 20 is critically sensitive to the nonlinear medium orientation. Quasi-PM 21,22 exploits the momentum due to a manufactured long-scale periodic reversal of the sign of the nonlinear susceptibility, which cannot be easily applied in miniaturized systems. In semiconductors, PM is achieved by S-shaped energy-momentum polariton dispersion in the strong coupling regime for excitons and photons 23,24 that is only accessible at low temperatures and large pump angles. Cavity PM 25 , also denoted "relaxed" PM 26 , occurs in Fabry-Perot micro-cavities with cavity length ' shorter than the coherence length π/Δk; this technique can be used to drastically reduce the effective quadratic susceptibility χ 2 ð Þ eff (see Fig. 1a). All of the above-mentioned PM techniques require a non-trivial experimental design and setup that is further constrained by the need for resonant operation. In this manuscript, we show that two-dimensional (2D) materials with high quadratic nonlinearity, currently emerging as important nonlinear photonic elements [27][28][29] , open up unprecedented possibilities for tunable parametric micro-sources. 
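For later reference, the dephasing quantities used throughout can be summarized as

$$\Delta k = k_3 - k_2 - k_1, \qquad k_m = \frac{n_m\,\omega_m}{c}\;(m=1,2,3), \qquad L_c = \frac{\pi}{\Delta k},$$

so that in a bulk medium the three interacting waves add up constructively only over the coherence length L_c, and efficient parametric coupling requires either Δk = 0 or an interaction length shorter than L_c, as in cavity phase matching. For the atomically thin 2D materials considered here, the relevant interaction length is the layer thickness itself.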
Remarkably, when illuminated by different visible and infrared waves, such novel 2D materials provide negligible dispersive dephasing Δϕ owing to their atomic-scale thickness ' ( λ, where λ is the optical wavelength (see Fig. 1b). In turn, the three waves interacting within the 2D material do not undergo destructive interference due to the surface-like nonlinear interaction. Hence, the PM requirement in the standard nonlinear optics jargon (i.e., the momentum conservation requirement Δk = 0), is removed here. Furthermore, such "phase-matching-free" devices turn out to be very versatile and compact, with additional tunability afforded by electrical gating of 2D materials, which provides ultrafast electrical-modulation functionality. The most famous 2D material, graphene, is not the best candidate for PDC owing to the centrosymmetric structure. In principle, a static external field can be used to break centrosymmetry and induce a χ 2 ð Þ eff , but the spectrally flat absorption of graphene remains severely detrimental for PDC. Recent years have witnessed the rise of transition metal dichalcogenides (TMDs) as promising photonic 2D materials. TMDs possess several unusual optical properties dependent on the number of layers. Bulk TMDs are semiconductors with an indirect bandgap, but the optical properties of their monolayer (ML) counterpart are characterized by a direct bandgap ranging Phase-matching-free micron-sized parametric oscillators. a Schematic illustration for conventional three-wave parametric coupling in bulk nonlinear crystals. The effective quadratic susceptibility χ 2 ð Þ eff is heavily affected by the mismatch Δk among the wavevectors k m = n m ω m /c of the pump (3), signal (1), and idler (2) waves, whose destructive interferenceΔk ≠ 0 hinders parametric coupling. b Sketch of the ML-TMD-based parametric oscillator. The cavity is assembled using two Bragg mirrors separated by a dielectric layer, and the ML-TMD is placed onto the left mirror. The incident (i) pump field produces both reflected (r) and transmitted (t) pump, signal and idler fields by means of the ML-TMD quadratic surface conductivity σ nm ≠ 0. The mutual dephasing Δϕ ¼ Δk' among these three waves becomes negligible within the atomic thickness of the nonlinear ML-TMD (Δϕ ≈ 10 −2 , see Supplementary Material) because ' ( λ, thus enabling phase-matching-free (i.e., free from the momentum conservation requirement Δk = 0) parametric coupling. c Sketch of the geometry of MX 2 ML-TMDs. Fast modulation is enabled by extrinsic doping by a gate voltage, with gold contacts applied between the ML-TMD and the Bragg mirror. from~1.55 to~1.9 eV 30-32 that is beneficial for several opto-electronic applications 33 . In addition, ML-TMDs have broken centrosymmetry and can thus be used to facilitate second-order nonlinear processes [34][35][36][37][38] . Here, we study PDC in micro-cavities embedded with ML-TMDs; we find that the cavity design is extremely flexible compared to standard parametric oscillators due to phasematching-free operation (see Fig. 1a, b). We demonstrate that at conventional infrared pump intensities, parametric oscillation occurs in wavelength-sized micro-cavities incorporating ML-TMDs. We show that the mode selectivity of doubly resonant cavities enables one to engineer the output signal and idler frequencies; these frequencies are tuned by the pump incidence angle and can be modulated electrically by an external gate voltage. 
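As an order-of-magnitude check (illustrative numbers only, not taken from the Supplementary Material): with the monolayer thickness ℓ ≈ 0.65 nm quoted in the Materials and methods, even a wavevector mismatch as large as a full optical wavevector at the pump wavelength considered below, Δk ~ 2π/λ_3 ≈ 8 × 10^6 m^-1 for λ_3 = 780 nm, gives

$$\Delta\phi = \Delta k\,\ell \;\lesssim\; 8\times 10^{6}\ \mathrm{m^{-1}} \times 0.65\times 10^{-9}\ \mathrm{m} \;\approx\; 5\times 10^{-3},$$

of the same order as the Δφ ≈ 10^-2 quoted above and negligible compared with the π-scale dephasing that limits bulk devices.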
Materials and methods Parametric down-conversion for MX 2 We calculate the linear and PDC mixing surface conductivities of MX 2 starting from the tight-binding (TB) Hamiltonian for the electronic band structure 39 . Since the properties of infrared photons with energies smaller than the bandgap are determined by small electron momenta around the K and K′ valleys, we approximate the full TB Hamiltonian as a sum of k ⋅ p Hamiltonians of first and second order H 0 (k,τ,s), where k is the electron wavenumber and τ and s are the valley and spin indices, respectively. We then derive the light-driven electron dynamics through a minimal coupling prescription leading to the time-dependent Hamiltonian , where e is the electron charge, ħ is the reduced Planck constant, and A(t) is the radiation potential vector, which is used to obtain Bloch equations for the interband coherence and the population inversion. Finally, by solving perturbatively the Bloch equations for ML-TMDs in the weak excitation limit, we obtain the surface current density K(t) after integration over reciprocal space, whereσ L ω j À Á (j = 1, 2, 3) andσ l;m ð Þ (l, m = 1, 2, 3) are the linear and PDC surface conductivity tensors, respectively. Note that our approach is based on the independentelectron approximation and is thus fully justified only for infrared photons far from exciton resonances occurring at photon energies higher than 1.5 eV 40,41 . Parametric oscillations The signal, idler, and pump fields, labeled with subscripts 1, 2, and 3, respectively, have frequencies ω n satisfying ω 1 + ω 2 = ω 3 . By the transfer matrix approach, a full electromagnetic analysis of the cavity (see Supplementary Material) yields the equations where Q 1 , Q 2 , Q 3 are complex amplitudes proportional to the output fields produced by the pump field, which is proportional to the amplitude P 3 . Here,σ nm are scaled quadratic conductivities for the MX 2 ML-TMD and are parameters characterizing the linear cavity, whereσ n are scaled linear surface conductivities, q n ¼ ω n =c ð Þ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ε ω n ð Þ À sin 2 θ p are the longitudinal wavenumbers inside the dielectric slab, ε(ω) is the relative permittivity of the dielectric slab, θ is the pump incidence angle and r R ð Þ n are the complex reflectivities for right illumination of the left Bragg mirror (with vacuum and the dielectric slab on its left and right side). It is worth stressing that the wavevector mismatch Δk = k 3 − k 1 − k 2 does not appear in the basic cavity Eq. (2) because the ML-TMDs are treated as 2D materials with surface-like conductivity. In principle, such media possess an atomic-scale thickness of ' ¼ 0:65 nm, with the resulting wavevector mismatch Δk producing a finite but negligibly small phase-shift Δϕ ≈ 10 −2 among the three waves (see Supplementary Material for further details). In turn, such a phase-shift (which does not appear in our formulation based on a surface-like nonlinearity) does not affect parametric coupling (by the destructive interference of the fields), and the phasematching constraint is heavily relaxed. Parametric oscillations (POs) are solutions of Eqs. (2) with Q 1 ≠ 0 and Q 2 ≠ 0, and in this case, the compatibility of the first two equations yields (see Supplementary Material) which is the leading PO condition. As the right hand side of Eq. 
(4) is generally a complex number, for the realization of PO, we have the condition Equation (5) can be physically interpreted as a locking of the phase difference arg Q 1 À arg Q Ã 2 allowing the signal and idler to oscillate. Once Eq. (5) is satisfied, Eq. (4) provides the pump threshold for the onset of PO. Due to the small absolute magnitude of the nonlinear surface conductivities, the cavity parameters |Δ n | must be minimized to achieve a feasible threshold. This can be obtained by choosing the doubly resonant condition for signal and idler corresponding to the minima of |Δ 1 | and |Δ 2 |, respectively. For these minima to be very small, r R ð Þ 1 and r R ð Þ 2 are required to be very close to one. Such a constraint can be satisfied by the use of a suitable Bragg mirror design, with the stop-band centered at half of the pump frequency ω 3 /2 since, in this case, the signal and idler fields experience a large mirror reflectance. Results and discussion The structure of ML-TMDs is formed by two hexagonal lattices of chalcogen atoms embedding a plane of metal atoms arranged at trigonal prismatic sites located between chalcogen neighbors 32 . Figure 1c shows the lattice structure for MX 2 ML-TMDs (M = Mo, W, and X = S, Se), and Fig. 2a, b show the valence and conduction bands for MoS 2 as obtained from tight-binding calculations 39 . The electronic band structure of other MX 2 materials is considered to be qualitatively similar. The direct bandgap is 1.5 eV, which implies optical transparency for infrared radiation; the linear surface conductivity has a very small real part (corresponding to absorption) and a higher imaginary part at infrared wavelengths. Figure 2c shows the wavelength dependence of the linear surface conductivities of MX 2 . In the presence of an external pump field with angular frequency ω 3 , the ML-TMD secondorder nonlinear processes lead to the generation of downconverted signal and idler waves with angular frequencies ω 1 and ω 2 , such that ω 3 = ω 1 + ω 2 . Figure 2e illustrates the PDC mixing surface conductivities for MoS 2 . Both linear and nonlinear conductivities are calculated by a perturbative expansion of the tight-binding Hamiltonian for MX 2 (see Methods and Supplementary Material). For infrared photons with energy smaller than the bandgap, extrinsic doping by an externally applied gate voltage (see Fig. 1c) modifies the optical properties, leading to increased absorption due to free-carrier collisions and to smaller PDC mixing conductivities. Figure 2d, f show the dependence of the linear and nonlinear surface conductivities on the Fermi level E F . As detailed below, extrinsic doping generally leads to a decrease in PDC efficiency. Figure 1b shows the parametric oscillator design incorporating ML-TMDs. The cavity consists of a dielectric slab (thickness L) surrounded by two Bragg grating mirrors (BGs); the ML-TMD is placed on the left BG inside the cavity. The cavity is illuminated from the left by an incident (i) pump field (frequency ω 3 ), and the oscillator produces both reflected (r) and transmitted (t) signal and idler fields with frequencies ω 1 = (ω 3 + Δω)/2 and ω 3 = (ω 3 − Δω)/2, where Δω is the beat-note frequency of the parametric oscillation (PO). As detailed in the Materials and methods, the cavity equations for the fields do not contain the wavevector mismatch Δk. Indeed, due to their atomic thickness, ML-TMDs are not optically characterized by a refractive index but rather by a surface conductivity. 
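For orientation, the output frequencies introduced above satisfy

$$\omega_1 = \frac{\omega_3+\Delta\omega}{2}, \qquad \omega_2 = \frac{\omega_3-\Delta\omega}{2} \;\;\Rightarrow\;\; \omega_1+\omega_2=\omega_3, \quad \omega_1-\omega_2=\Delta\omega,$$

so that degenerate operation (Δω = 0) corresponds to ω_1 = ω_2 = ω_3/2, i.e. λ_1 = λ_2 = 2λ_3 = 1560 nm for the λ_3 = 780 nm pump considered below, which is why the Bragg stop bands are centered there. The ML-TMD itself enters the cavity description only through its surface conductivity.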
Hence, the parametric coupling produced by the quadratic surface current in ML-TMDs is not hampered by dispersion; thus, no PM condition is required. To observe signal and idler generation, only the PO condition must be satisfied along with the signal resonance (SR) and idler resonance (IR) conditions, leading to a significant reduction in the intensity threshold (see Methods). Since there is no PM requirement, such conditions can be met by adjusting either the cavity length L or the pump incidence angle θ as tuning parameters. For SR and IR, one needs highly reflective mirrors for both signal and idler (see Materials and methods), as realized by locating the stop band of the micron-sized BGs at half of the pump frequency ω 3 /2. Figure 3 shows the PO analysis for a cavity composed of two BGs with polymethyl methacrylate (PMMA) and MoS 2 deposited on the left mirror. The infrared pump has a wavelength of λ 3 = 780 nm, which lies in the same spectral region showing very pronounced nonlinear properties for MoS 2 (see Fig. 2e). The BGs are tuned with their stop bands centered at 1560 nm ( = 2λ 3 ). In Fig. 3a-i, we consider the case of normal incidence θ = 0 and plot the PO (black), SR (red), and IR (green) curves in the (L/λ 3 ,Δω/ω 3 ) plane. Doubly resonant POs (DRPOs) corresponding to the intersection points of these three curves are labeled by dashed circles. Therefore, for normal incidence of the pump, degenerate (Δω = 0) and nondegenerate (Δω ≠ 0) DRPOs exist at specific cavity lengths. Note that such oscillations also occur for subwavelength cavity lengths (L < λ 3 ). Each oscillation starts when the incident pump intensity I i ð Þ 3 is increased above a threshold I i ð Þ 3Th (see Materials and methods). Figure 3b-i shows the threshold for two specific degenerate and nondegenerate DRPOs. Figure 3b, f shows the thresholds (black curves on the shadowed vertical planes) corresponding to the PO (black) curves; one can observe that the minimum thresholds occur at SR and IR (identified by the intersection between the red and green curves). The minimum intensity thresholds are on the order of GW cm −2 , with the non-degenerate DRPO threshold greater than the degenerate DRPO threshold because the reflectivity of the Bragg mirror is maximum at Δω = 0 (i.e., at half the pump frequency, as discussed above). where the oscillation actually occurs is rather narrow due to the high reflectivity of the adopted BG. We emphasize that tuning of the PO may be realized by adjusting the pump incidence angle θ, with negligible effect on the oscillation thresholds. In Fig. 3j-n, we analyze the DRPOs by using θ as a tuning parameter for a given cavity length. In particular, in Fig. 3j, k, we consider a cavity with a fixed length, as in Fig. 3b-e. The PO, SR, and IR curves of Fig. 3j intersect at a degenerate DRPO point at θ ' 6 . In Fig. 3k, we plot the transmitted signal intensity I t ð Þ 1 as a function of the pump incidence angle and intensity I i ð Þ 3 ; one can observe that the intensity threshold is comparable to the case shown in Fig. 3b-e, with PO occurring for a range of angles θ on the order of a hundredth of a degree, which is experimentally feasible. We show similar results in Fig. 3m, n, where the nondegenerate DRPO of Fig. 3f-i is investigated for a cavity with a slightly different length and is shown to exist at a finite incident angle with unchanged note-beat frequency Δω. A more accurate analysis of Fig. 
3l also reveals that, for a given L, the cavity sustains multiple DRPOs (both degenerate and non-degenerate) at different incidence angles θ. In Fig. 3n, we plot the transmitted intensity of a degenerate DRPO that grows with pump intensity above the ignition threshold. Until now, our analysis has been based on the basic oscillator geometry sketched in Fig. 1b, where the ML-TMD is placed on top of the right mirror. It is, however, also instructive to investigate the dependence of the PO phenomenology on the location of the ML-TMD inside the cavity. Consequently, we consider a different parametric oscillator design whose geometry is sketched in Fig. 4a, with the same Bragg mirrors and cavity dielectric (of thickness L = 3.05λ 3 ) as above but with the ML-TMD placed at a distance 0 < d < L from the left mirror. For simplicity, we focus here on degenerate DRPOs (Δω = 0), triggered by the same pump as above (λ 3 = 780 nm), as in this case, due to the physical coincidence of the signal and idler fields, the SR and IR conditions coincide and the PO condition is automatically satisfied (see Materials and methods). In Fig. 4b, we plot the SR = IR curve identifying the incidence angle θ at which the DRPO occurs as a function of the normalized distance d/L. Note that the PO angle periodically depends on d/L and is always close to θ 0 ð Þ ¼ 38:57 (compare with Fig. 3n) as a consequence of the slight modification of the free cavity modes produced by the presence of the ML-TMD. In Fig. 4c, we plot the pump intensity threshold I i ð Þ 3Th of the POs shown in Fig. 4b as a function of d/L. The marked periodic dependence of the intensity threshold on the location of the ML-TMD is particularly evident, together with the existence of minima and very large maxima. Such features can be easily understood by noting that at different locations inside the cavity, the ML-TMD experiences a spatially periodic cavity modal field (which is observed, as detailed above, to be slightly dependent on the location of the ML-TMD) and therefore shows minima and maxima for the intensity threshold at the anti-node and node positions (where the modal field strength is maximal and zero, respectively). It is also worth stressing that such features are strictly a consequence of the two-dimensional character of the ML-TMD, which can additionally be exploited to tune and control the parametric oscillator behavior. The novel PO utilizing ML-TMDs as nonlinear media are PM free because of the atomic size of the ML-TMDs. Intensities in GW cm -2 Fig. 3 Parametric oscillations. Analysis of the doubly resonant parametric oscillations (DRPOs) of a cavity (with PMMA as the cavity dielectric) illuminated by a λ 3 = 780 nm pump with micron-sized Bragg mirrors whose stop band is centered at 1560 nm. In a-h, the cavity length is used as a tuning parameter for normal incidence θ = 0, whereas in j-n, the incidence angle θ is the tuning parameter for the two assigned cavity lengths. a Identification of DRPOs as the intersection between the parametric oscillation (PO) curve and signal resonance (SR) and idler resonance (IR) curves in the (L/λ 3 Several examples of POs with MoS 2 can also be designed using other families of ML-TMDs, leading to qualitatively similar results. 
In the Supplementary Material, we compare the calculated dependence of the pump intensity threshold as a function of wavelength λ 3 for parametric oscillators built using MoS 2 , WS 2 , and MoSe 2 , WSe 2 ; we find that the chosen material affects the minimal threshold intensity in a given spectral range. One can optimize the choice of the material for a desired spectral content and threshold level. In this respect, we emphasize that these functionalities are enabled by the inherently large nonlinear surface conductivities of ML-TMDs. A heuristic comparison with standard photonic media may be accomplished by introducing an effective second-order nonlinear mixing susceptibility χ 2 ð Þ eff ω 1 ; ω 2 ð Þ for the ML-TMDs, which is found to be of the order χ 2 ð Þ eff ω 1 ; ω 2 ð Þ% 10 À10 mV À1 (≈2 orders of magnitude higher than that of LiNbO 3 , which is one of the most widespread and efficient materials used for second-order nonlinear optical functionalities 42 ). Therefore, by using standard photonic media instead of ML-TMDs (in the envisaged microcavity), parametric oscillations would require a pump threshold that is at least 4 orders of magnitude higher (the threshold intensity depends inversely on the productσ 23σ , see Materials and methods), and second-order nonlinear effects due to other photonic components of the proposed device are expected to be irrelevant. A further degree of freedom offered by ML-TMDs lies in the electrical tunability afforded by the application of an external gate voltage, as depicted in Fig. 1c. The gate voltage increases the Fermi level and hence affects the nonlinearity and absorption because of electron-electron collisions in the conduction band (see Fig. 2d, f). Although electrical tunability of MX 2 has not been hitherto experimentally demonstrated, to the best of our knowledge, we emphasize that such an additional degree of freedom is absent in traditional parametric oscillators. In the Supplementary Material, we calculate the pump intensity threshold as a function of the Fermi level of MoS 2 , and we show that the threshold may increase by one order of magnitude. Consequently, an external gate voltage can be used to switch-off PO at a fixed optical pump intensity, with potential for realization of rapid electrical modulation of the output signal and idler fields. Finally, we emphasize that experimental realization of the discussed micron-sized phase-matching-free parametric oscillators is heavily facilitated by the inherent flexibility offered by these devices. Indeed, in contrast to traditional parametric oscillators, the key tunability (by means of the external pump incidence angle) unlocks the cavity size, which remains arbitrary. While the narrow angular selectivity found in our calculations can be easily overcome by using focused pump beams with finite size, the reflectivity of the Bragg mirrors heavily affects the parametric oscillation threshold. Thus, high-reflectivity Bragg mirrors with leakage ≈10 −4 are desirable for reaching thresholds on the order of GW cm −2 , which are achievable using pulsed infrared lasers with picosecondlike single pulse duration. Accurate control of the TMD layer number remains the only experimentally critical limiting factor: since TMDs with even layer numbers are centrosymmetric, it is imperative for the oscillator design to embed TMDs with an odd layer number. 
In addition, increasing the layer number hampers relaxation of the phase-matching condition; therefore, TMD monolayers are considered to be the best materials in terms of design optimization. Conclusions POs can be excited in micron-sized cavities embedding ML-TMDs as nonlinear media at conventional pump intensities in a PM-free regime. The cavity design remains inherently free of the complexity imposed by the need for PM and can be used to realize doubly resonant PDC of signal and idler waves. The flexibility offered by such novel oscillator design enables the engineering of selective degenerate or non-degenerate down-converted excitations by simple modification of the incident angle of the pump field. Furthermore, electrical tunability of ML-TMDs can enable one to rapidly modulate the output signal and idler waves by shifting POs below the threshold. Based on our calculations, we demonstrate that novel parametric oscillators embedding ML-TMDs highlight a new technology for all applications in which highly miniaturized tunable sources are relevant, including environmental detection, security, biophysics, imaging and spectroscopy. PM-free ML-TMD microresonators can also be potentially used to realize micrometric sources of entangled photons when pumped slightly below the threshold, thus paving the way for the development of integrated quantum processors.
5,742.8
2017-07-27T00:00:00.000
[ "Physics", "Materials Science" ]
NuSeT: A deep learning tool for reliably separating and analyzing crowded cells Segmenting cell nuclei within microscopy images is a ubiquitous task in biological research and clinical applications. Unfortunately, segmenting low-contrast overlapping objects that may be tightly packed is a major bottleneck in standard deep learning-based models. We report a Nuclear Segmentation Tool (NuSeT) based on deep learning that accurately segments nuclei across multiple types of fluorescence imaging data. Using a hybrid network consisting of U-Net and Region Proposal Networks (RPN), followed by a watershed step, we have achieved superior performance in detecting and delineating nuclear boundaries in 2D and 3D images of varying complexities. By using foreground normalization and additional training on synthetic images containing non-cellular artifacts, NuSeT improves nuclear detection and reduces false positives. NuSeT addresses common challenges in nuclear segmentation such as variability in nuclear signal and shape, limited training sample size, and sample preparation artifacts. Compared to other segmentation models, NuSeT consistently fares better in generating accurate segmentation masks and assigning boundaries for touching nuclei. Introduction Quantitative single-cell analysis can reveal novel molecular details of cellular processes relevant to basic research, drug discovery, and clinical diagnostics. For example, cell morphology and shape are reliable proxies for cellular health and cell-cycle stage, as well as indicating the state of disease-relevant cellular behaviors such as adhesion, contractility, and mobility. [1][2][3][4][5] However, accurate segmentation of cellular features such as the size and shape of the nucleus remains challenging due to large variability in signal intensity and shape, and artifacts introduced during sample preparation. [6,7] These challenges are exacerbated by cellular crowding, which juxtaposes cells and obscures their boundaries. Additionally, in many traditional segmentation methods [8], parameters need to be iteratively adjusted for images varying in quality. [9] Convolutional neural networks (CNN) have emerged as a robust alternative to traditional segmentation methods for segmenting cell nuclei. [10][11][12][13][14][15][16] CNNs achieve their superior performance through new deep-learning models. [10,[17][18][19] CNNs' applicability for high precision image segmentation was first demonstrated by a Fully Convolutional Network (FCN) for pixel-level segmentation. [10] Additional FCN cell segmentation models have since been developed. [14,20,21] These pioneering approaches established a basic pipeline for CNN-based nuclear segmentation and achieved significant improvements in segmenting different types of cells including bacteria and mammalian cells. [14,21] However, in their original form, FCNs typically required large training datasets to achieve high levels of accuracy. [10] This bottleneck was overcome in U-Net by introducing a U-shaped network that incorporates pooling layers and up-sampling layers. [15] Additionally in U-Net, the network was guided to segment overlapping objects by introducing weight matrices at cell-boundaries. Several state-of-the-art nuclear segmentation models have since been developed using this architecture. [11,13,22,23] Several online cell segmentation interfaces allow users to predict and train on their own image data, facilitating front end use by researchers. 
[23,24] However, U-Net and FCN-based models are curated and evaluated on pixel-level accuracy, where each pixel is segmented directly without the object detection step. In cell biology, the main goal is to make reliable statements about cells as a whole (e.g. the number of cells, their average size and shape, detection of rare/unusual cells) rather than focusing on image pixels. For such problems, the idea of instance segmentation provides a more effective solution, as the loss function incorporates a sense of the whole object and not just individual pixels. One such approach, the Deep watershed transform [25], incorporates the object by learning a distance transform computed from the original training masks. The distance transform is further fed into a watershed layer to produce the final segmentation results. A recent improvement is to incorporate a Faster R-CNN detection module. In this approach, the algorithm computes object locations and uses them as markers for the watershed layer, improving the segmentation. [26] Another approach, Mask R-CNN [19], applies FCN-based segmentation to regions proposed by Region Proposal Networks (RPN) and achieves good segmentation results in real-world image datasets. A more recent implementation of this approach replaces the RPN with a single-shot detection module [27], achieving superior performance in segmenting and tracking cells and nuclei. [28,29] However, the performance of Mask R-CNN based approaches remains to be validated for images with high cell density. Mask R-CNN also employs fixed anchor scales for bounding boxes across all images, which is a limitation for samples with variable-sized nuclei. [18,19] Additionally, at the pixel level, the segmentation task of Mask R-CNN is performed by FCN, which is less accurate with small training datasets compared with U-Net. [15,30] To address these issues, we have developed a Nuclei Segmentation Toolset (NuSeT), which integrates U-Net [15] and a modified RPN (based on the implementation of previous works [31,32]) to accurately segment fluorescently labeled nuclei. In this integrated model, U-Net performs pixel segmentation, while the modified RPN predicts unique bounding boxes for each image based on U-Net segmentations. The resulting output provides seeds for a watershed algorithm to segment touching nuclei. To minimize segmentation errors stemming from fluorescence signal variability and cell density variability in samples, we employed a novel normalization method that uses only foreground pixel intensities for image normalization. To increase the robustness and applicability of the model, we used training sets including samples with wide variations in imaging conditions, image dimensions, and non-cellular artifacts. Extensive qualitative and quantitative evaluation suggests that our segmentation pipeline significantly improves nuclei segmentation, especially in distinguishing overlapping boundaries, and is generalizable to both fluorescent and histopathological images. NuSeT is a robust nuclear segmentation tool Tools for segmenting fluorescent nuclei need to address multiple features and limitations of biological images. [6,33] Typical issues and limitations include: 1. Boundary assignment ambiguity: biological samples frequently have very high cell density with significant overlap between objects. 2. Signal intensity variation: Within one image, the signal can vary within each nucleus (e.g. due to different compaction states of the DNA in heterochromatin vs. euchromatin) and across nuclei (e.g.
due to cell-to-cell differences in nuclear protein expression levels and differences in staining efficiency). 3. Non-cellular artifacts and contaminants: Fluorescence microscopy samples are often contaminated with auto-fluorescent cell debris as well as non-cellular artifacts. 4. Low signal-to-noise ratios (SNRs): Low SNRs typically result from lower expression levels of fluorescent targets and/or high background signal, such as sample autofluorescence (S1 Fig). We used an end-to-end training approach that incorporates both U-Net and Region Proposal Network (RPN) [15,18] to address these issues (Methods). In our approach, the training and inference step consists of running an input image in parallel in both U-Net and RPN. The final output of U-Net consists of two feature maps of the same shape as the input image, representing background and foreground pixel-assignment scores. [10] The final foreground prediction is then computed from the maximum class score of each pixel. Although U-Net alone performs well on some microscopy datasets [30,34], we incorporated RPN since it was originally designed to detect objects in images with high information content. [18] We reasoned that the accurate performance of RPN in detecting objects can be leveraged to improve nuclear segmentation performance. To achieve robust separation of touching nuclei, we used RPN bounding boxes to determine nuclear centroids, which were then supplied as seeds to a watershed algorithm. [35,36] To improve segmentation accuracy in images with large nuclear size variations, we modified the original RPN architecture to use bounding box dimensions based on average nuclear size for each image (S2 Fig). Instead of training U-Net and RPN separately, we merged the feature-extraction part of RPN with the down-sampling part of U-Net to avoid longer training times and higher memory costs (Fig 1A). [10,15,18,19] In this way, the instance-detection capability of RPN is built into a shared model structure. To evaluate the segmentation performance of the different algorithms, we computed the mean intersection over union for foreground and background (mean IoU), Root Mean Square Error (RMSE), and pixel accuracy (to benchmark pixel-level performance). Since in biological image processing the primary focus is on cell-level segmentation rather than pixel-level accuracy, we also included object-level segmentation metrics, including the rate of correctly separating overlapping nuclei, correct and incorrect detections, splits, merges, catastrophes, and both the false-positive and false-negative detection rates (Methods). [29,30] Two separate datasets, 'MCF10A' and 'Kaggle', were used to compare the performance of the algorithms. [33] The MCF10A dataset consists of images of relatively uniformly fluorescent nuclei of a non-tumorigenic breast epithelial cell line [37], grown to different levels of confluence. The Kaggle dataset was adapted from a public dataset [33] representing cells from different organisms (including humans, mice, and flies) and containing images with a wide range of brightness, cell densities, and nuclear sizes. The overall comparison in S1 Table and S2 Table suggests that NuSeT achieves similar pixel-level segmentation accuracy compared with a current state-of-the-art pixel-level cell segmentation approach (U-Net) but has higher separation rates for overlapping nuclei and fewer merge errors. With the Kaggle dataset, NuSeT improved the separation of touching nuclei by more than 75% compared with U-Net.
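As a concrete illustration of this parallel inference scheme, the following minimal sketch (Python/NumPy, not the authors' implementation) shows how two per-pixel score maps can be reduced to a foreground mask and how detection boxes can be turned into seed markers; the array shapes and the `boxes` variable are illustrative assumptions.

```python
import numpy as np

def foreground_from_scores(scores):
    """Reduce per-pixel class scores to a binary foreground mask.

    scores: (H, W, 2) array holding background/foreground scores, mirroring the
    two output feature maps described in the text; the prediction is the
    per-pixel argmax (1 = foreground).
    """
    return np.argmax(scores, axis=-1).astype(np.uint8)

def seeds_from_boxes(boxes, shape):
    """Place one labelled seed at the centroid of each detected bounding box.

    boxes: iterable of (x_min, y_min, x_max, y_max) in pixel coordinates.
    Returns an integer marker image suitable for a seeded watershed.
    """
    markers = np.zeros(shape, dtype=np.int32)
    for label, (x_min, y_min, x_max, y_max) in enumerate(boxes, start=1):
        cy = int(round((y_min + y_max) / 2.0))
        cx = int(round((x_min + x_max) / 2.0))
        markers[cy, cx] = label
    return markers

# Toy usage with random scores and two hypothetical detections.
scores = np.random.rand(64, 64, 2)
mask = foreground_from_scores(scores)
markers = seeds_from_boxes([(5, 5, 20, 20), (30, 30, 50, 50)], mask.shape)
```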
Compared with another state-of-the-art instance segmentation approach, Mask R-CNN, NuSeT achieved much lower false-negative detection rates in the Kaggle dataset, leading to significantly better pixel-level segmentation accuracy. To make NuSeT more user-friendly, we have prepared a cross-platform graphical user interface (GUI) for the scientific community. Our GUI comes with the pretrained model that we used to benchmark NuSeT performance for various nuclei segmentation tasks. The GUI also provides training and prediction modules (Fig 1B), allowing users to perform custom segmentation tasks with NuSeT. Foreground normalization improves segmentation performance Normalizing training data to alleviate image intensity differences is central to accelerating learning and improving network performance. Historically, imaging data have been normalized by subtracting the mean intensity calculated from all pixels in a dataset. [38,39] However, this leads to discrepancies in normalization, particularly for images with markedly different brightness levels. Normalizing data at the whole-image level addresses the issue of illumination differences [40], but introduces brightness differences in images with sub-regions of varying cell densities (Fig 2A). Additionally, whole-image normalization fares poorly in images strewn with auto-fluorescent artifacts (S3 Fig). We incorporated a foreground normalization step in our data preprocessing. In this approach, only the pixels that belong to cell nuclei (foreground) are selected to calculate the mean and standard deviation of pixel intensities. Since no label is provided during inference, foreground normalization requires two passes. In step one, the test data are normalized on a per-image level to generate a coarse prediction of the foreground with our RPN-U-Net fusion. In step two, this coarse prediction is used to perform foreground normalization on test images before they are fed into the model for a second pass (Fig 2B). Compared with whole-image normalization, the two-step foreground normalization approach is relatively robust to illumination differences, cell-density variations, and image artifacts and performs better in normalizing images with a broader dynamic range of pixel intensities (Fig 2C and 2D). As a result, model training with foreground normalization increased nuclei detection accuracy and boundary assignment for both the Kaggle and MCF10A datasets, with more correct detections and fewer merge errors (S1 Table). To further analyze how whole-image normalization models affect the performance of the foreground normalization model, we trained the whole-image normalization models to different mean IoU levels. This step was essential as the pixel-level accuracy of the whole-image normalization model was critical for selecting pixels for the subsequent foreground normalization. By connecting the different whole-image normalization models with the final foreground normalization model, we found that when the mean IoU of the whole-image normalization models was less than 0.82, the performance of the foreground normalization model heavily depended on the whole-image normalization models (Fig 2E and 2F, S3 Table). This suggests that the performance of the foreground normalization model relies on the accuracy level of the whole-image normalization model. However, when the mean IoU of the whole-image normalization models was higher than 0.82, the foreground normalization model was less affected (Fig 2E and 2F, S3 Table).
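A minimal sketch of the two-pass foreground normalization described above is given below (NumPy); `model` is a placeholder for any segmentation network returning a binary foreground mask, and the epsilon values are illustrative assumptions.

```python
import numpy as np

def foreground_normalize(image, foreground_mask, eps=1e-8):
    """Normalize an image using the mean/std of foreground pixels only."""
    fg = image[foreground_mask > 0]
    if fg.size == 0:                      # fall back to whole-image statistics
        fg = image.ravel()
    return (image - fg.mean()) / (fg.std() + eps)

def two_pass_inference(image, model):
    """Sketch of the two-pass scheme: a coarse, whole-image-normalized pass
    provides the foreground estimate used for foreground normalization."""
    coarse_in = (image - image.mean()) / (image.std() + 1e-8)
    coarse_mask = model(coarse_in)        # pass 1: coarse foreground prediction
    refined_in = foreground_normalize(image, coarse_mask)
    return model(refined_in)              # pass 2: final segmentation
```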
Given the modularity of the foreground normalization approach, we next asked whether foreground normalization could be integrated into other deep learning models, such as U-Net and Mask R-CNN, to enhance their performance. Consistent with our expectation, training U-Net with foreground normalization improved the overlapping nuclei separation performance by 6% to 35% (in MCF10A and Kaggle datasets). Foreground normalization also improved nuclei-detection accuracy of U-Net, and reduced merge errors (S4 Table). However, the segmentation performance of Mask R-CNN was not significantly improved by foreground normalization. The segmentation performance was almost identical to the model trained with whole-image normalization (S4 Table). Given that the performance of Mask R-CNN is highly dependent on the detection accuracy of RPN, whereas both NuSeT and U-Net rely heavily on pixel classification to perform segmentation, we concluded that foreground normalization improved the segmentation performance by rescaling the image pixels more consistently, aiding in better classification of foreground and background pixels. Synthetic datasets in model-training improve detection and segmentation accuracy Common sample contaminants have irregular shapes, significantly different overall brightness levels and aspect ratios compared to real cells, and uneven pixel intensities. To improve model performance and minimize false-positive detection rates, we computationally generated synthetic images containing irregular shapes with varying intensities, as well as nuclei-like blobs (Methods). We also added Gaussian blur and noise to the synthetic images to better represent real-world images. Additionally, overlapping blobs were included to mimic touching nuclei. Example synthetic images and training labels are shown in Fig 2E. Including synthetic data in the training process notably improved the model's performance in distinguishing real nuclei from imaging artifacts ( Fig 2F) and enhanced the separation of touching nuclei (S1 Table). The addition of foreground normalization on top of the synthetic images during model-training further reduced false positive detections (Fig 2F). Aided by these improvements, NuSeT outperformed both U-Net and Mask R-CNN in artifact detection/rejection (Fig 2F). RPN-aided Watershed improves boundary-resolution of highly overlapping objects Having improved nuclear segmentation performance, we revisited the problem of separating overlapping nuclei. Previous studies have used algorithms such as intervening and concave contour-based normalized cut [41,42] on binary segmentation masks extracted using traditional segmentation methods such as Otsu's method [8] to delineate overlapping nuclear boundaries. However, nuclear segmentation using traditional thresholding approaches failed to detect half of the nuclei in the Kaggle dataset (S2 Table), indicating that this approach is only effective for images with clean backgrounds and uniform signal. Recent studies have trained deep neural networks to learn the Euclidean distance transform (EDT) of the original mask corresponding to the input images [25,26], and apply a watershed transform on the model-predicted distance map to perform the final segmentation. This method has been further improved by adding the cell location information to the watershed transform to achieve better segmentation results. 
[26] These methods successfully address the challenges of separating overlapping objects, as EDT provides the neural networks with more morphological information. Instead of training the model on EDT space, we trained the U-Net module directly with the binary masks. We also employed our modified RPN approach to detect nuclei. The nuclear centroids estimated from the RPN-derived bounding box coordinates were passed as seeds for the watershed algorithm to generate cuts at touching nuclei boundaries on the U-Net-produced binary masks (Fig 3A). [35,36] Our results suggest that a modified RPN can detect most nuclei in overlapping regions, and an RPN-aided watershed separates 72%/94% of overlapping nuclei for the Kaggle/MCF10A datasets (Fig 3B, S2 Table). Compared with the modified RPN model without watershed, the RPN-aided watershed improved the overlapping nuclei separation performance and lowered the number of merge errors (S2 Table). Through the integration of synthetic images, foreground normalization, and RPN-aided watershed, NuSeT consistently outperforms other state-of-the-art segmentation methods including U-Net and Mask R-CNN in nuclear boundary demarcation, particularly for blurry, low-SNR nuclei (Fig 3C, S4 Fig, S2 Table). Mask R-CNN and NuSeT perform comparably in relatively sparse and homogeneous samples (S2 Table). However, NuSeT approximates ground-truth boundaries more closely than U-Net and Mask R-CNN in samples with high cell densities (Fig 3D and 3E). Three-dimensional spatio-temporal tracking of individual nuclei in mammary acini To investigate the performance of our algorithm in segmenting densely packed nuclei, we used NuSeT to segment and track nuclei in 3D reconstituted mammary acini grown from a Ras-transformed MCF10A (MCF10AT) cell line. MCF10AT was chosen since, upon continued growth in Matrigel, this cell line produces mammary acini with very high cell density. Three-dimensional segmentation was performed by processing individual 2D slices from a confocal microscope Z-stack followed by three-dimensional reconstruction. NuSeT successfully segmented most of the nuclei in an acinus (Fig 4A), which facilitated seamless tracking of nuclei in mammary acini disorganizing on a 3D collagen matrix (Fig 4B and 4C). Both NuSeT and Mask R-CNN performed similarly on early-stage mammary acini (cell count ≈ 34 cells/acinus) (S5 Fig). To further evaluate the performance of different algorithms (NuSeT, U-Net, Mask R-CNN and Otsu's method) on segmenting nuclei in mammary acini, we carried out nuclear segmentation on 2D projections of dense mammary acini. NuSeT accurately segmented most of the nuclei in dense mammary acini (Fig 4D-4G). We were also able to track single nuclei through the entire process of acinar disorganization. We found that while Mask R-CNN and NuSeT achieved comparable accuracy in nuclear boundary determination (median area of detected nuclei: 147 μm² vs. 139 μm², Fig 4E and 4F), Mask R-CNN only detected a subset of all nuclei (Fig 4D and 4G). Nuclear segmentation with U-Net, on the other hand, resulted in much larger nuclear areas (median area of detected nuclei = 233 μm², Fig 4E and 4F), indicating that U-Net often failed to separate touching nuclei (Fig 4G). All deep learning approaches outperformed the 'traditional' algorithm (Otsu's Method, nuclei area = 2816.6 μm², S5 Fig), as it rarely segmented single nuclei in dense settings (Fig 4G).
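The marker-controlled watershed step can be sketched as follows (a schematic, not the exact NuSeT post-processing); it assumes a binary mask from the pixel-level branch and an integer marker image with one seed per detected nucleus, for example placed at bounding-box centroids.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def rpn_aided_watershed(binary_mask, markers):
    """Split touching nuclei in a binary mask using detection-derived seeds.

    binary_mask: (H, W) array of 0/1 foreground predictions.
    markers: (H, W) integer label image with one non-zero seed per nucleus.
    Returns an integer label image with one label per separated nucleus.
    """
    # Distance to the background; flooding its negative from the seeds cuts
    # the narrow "necks" between touching objects.
    distance = ndi.distance_transform_edt(binary_mask)
    return watershed(-distance, markers=markers, mask=binary_mask.astype(bool))
```

In this sketch the number of seeds, not the shape of the mask, determines how many nuclei are produced, which is why accurate detection is what drives the separation rate.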
Together, our results suggest that NuSeT outperforms both Mask R-CNN and U-Net in detecting nuclei and assigning boundaries for overlapping nuclei. Segmentation of histopathology samples and dividing cells To further validate the performance and assess the generalizability of our algorithm, we extended NuSeT-based segmentation to histopathology samples and rare-event detection, as in the case of dividing cells. As a test case for segmentation of histological samples, we re-trained NuSeT to segment fat globules in H&E-stained sections of liver tissue. Evaluation of liver steatosis is a key step in both fatty liver disease diagnosis and pre- and post-liver-transplantation evaluation. The key challenges of segmenting fat globules from liver sections include detecting multi-scale globules and distinguishing them from tissue-tearing artifacts. NuSeT successfully segmented both micro- and macro-globules and avoided false detection of tissue-tearing artifacts (Fig 5A and 5B), with mean IoU = 0.73 on a validation dataset. Detecting and segmenting rare events in images is more challenging, as most of the image area is background and the back-propagated gradients are dominated by the classification of easy background pixels. Mitotic events, especially in images densely populated with non-dividing cells, are an example of such rare events. To address this challenge, we designed an approach to highlight the regions close to mitotic events and give them more weight to 'catch' the attention of the model during training (Fig 5C). Using this strategy, we retrained the NuSeT model to detect and segment mitotic nuclei in human breast cancer histopathology samples [43,44], as the total number of mitotic events detected is a crucial indicator of the degree of malignancy for breast cancer diagnosis. Our results indicate that NuSeT can detect and segment the majority of mitosis events in breast cancer histopathology slides (Fig 5D and 5E), and was able to provide confidence scores for all the detected mitotic events (detection precision = 56.22%, recall = 58.85% on the validation dataset). When we inspected the data, we found several detection errors stemming from misclassification of other objects such as dense nuclei and lymphocytes, which are very similar in appearance to mitotic nuclei. When trained with fluorescently labeled nuclei (MCF10A cells stably expressing the nuclear marker histone H2B-eGFP), NuSeT captured the mitotic progression from prophase to telophase (Fig 5D and 5E, S7 Fig) (detection precision = 73.90%, recall = 90.20% on the validation dataset). Together, our results indicate that NuSeT is highly generalizable and can be applied to histopathology segmentation tasks as well as detection of rare events in samples of high clinical value. Discussion Here we present a deep learning model for nuclear segmentation that is robust to a wide range of image variabilities. Compared with previous models that need to be trained separately for specific cell types, NuSeT provides a more generalized approach for segmenting fluorescent nuclei varying in size, brightness and density. We have also developed novel training and pre-/post-processing approaches to address common problems in biological image processing. Our results indicate that every stage in deep learning, from data collection to post-processing, is crucial to training an accurate and robust nuclear segmentation model.
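As an illustration of the event-weighting idea used for the mitosis experiments above, the sketch below builds a per-pixel loss weight map that peaks around annotated event locations; the Gaussian width and boost factor are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def event_weight_map(shape, event_coords, sigma=15.0, boost=9.0):
    """Per-pixel weights that emphasize regions near annotated rare events.

    event_coords: iterable of (row, col) annotation centers.
    Returns weights >= 1 that would multiply a pixel-wise loss during training.
    """
    points = np.zeros(shape, dtype=np.float32)
    for r, c in event_coords:
        points[int(r), int(c)] = 1.0
    bumps = gaussian_filter(points, sigma=sigma)   # spread each point annotation
    if bumps.max() > 0:
        bumps /= bumps.max()                       # normalize the peak to 1
    return 1.0 + boost * bumps                     # background weight 1, up to 1 + boost
```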
When compared with the state-of-the-art cell segmentation models, NuSeT separates touching nuclei better than U-Net and detects more nuclei than Mask R-CNN. Thus, it assimilates the advantages of both semantic segmentation (U-Net) and instance segmentation (Mask R-CNN) and circumvents their limitations. This combination enables NuSeT to analyze complex three-dimensional cell clusters such as mammary acini and track single nuclei in dynamic crowded environments. When retrained on histopathology images, NuSeT is able to segment cells and rare events in H&E-stained samples using new training data. Therefore, we expect NuSeT to find wide applicability, particularly in the areas of cell lineage tracing and clinical diagnosis. Although we have modified the original RPN architecture to adjust detection scales based on the median nucleus size for each image, NuSeT assumes similar nuclear sizes in the same image. This may account for the occasional errors in nuclei segmentation when using the RPN-aided watershed. If markedly irregular (such as dim/deformed/blurry) nuclei are encountered in the same image, RPN may over- or under-detect the nuclei and produce incorrect numbers of bounding boxes. This would lead to marker misplacement and erroneous segmentation lines. While we expect NuSeT to perform well for nuclei of most mammalian cell types, its performance for mixed populations remains to be validated. Recent studies have extracted image features from multi-scale and 'pyramidal hierarchy' neural networks to improve detection accuracy for objects with large size variations. [45,46] Subsequent work has improved object detection in dense samples using weighted loss functions. [27] By incorporating these advances into our current model, we expect to further improve NuSeT in multi-scale nuclei detection. Our approach has cross-platform support and comparatively low hardware requirements (S5 Table). With a medium-level Nvidia GPU (Quadro P4000), training an accurate model only takes five hours, and inference proceeds at 1.98 seconds/megapixel. From a user standpoint, the NuSeT GUI enables researchers to easily segment their images without needing to understand all the details of machine learning, which connects state-of-the-art computer vision algorithms to a suite of cell biology problems. While in the present work we provide an effective and efficient pipeline for cell nuclei segmentation, this approach should be easily adaptable to a wide variety of image segmentation tasks involving densely packed and overlapping objects, such as jumbled piles of boxes or people in crowds. Kaggle dataset preprocessing The Kaggle dataset was downloaded from the Broad Bioimage Benchmark Collection (accession number BBBC038v1). [33] This dataset was sampled from a wide range of organisms including humans, mice and flies, and the nuclei were recorded under different imaging conditions. Stage-1 training and test datasets were used for the training and validation processes. All the images were manually curated and training data with low segmentation accuracies were discarded. Only fluorescent images were used for the training and validation process. We converted the run-length encoded labels to binary masks for both training and validation labels in MATLAB. The final Kaggle dataset used for our model contains 543 images for training and 53 images for validation. Segmentation errors, including mask misalignment and touching cells, were manually corrected image-by-image for the training and validation data.
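For reference, decoding the run-length-encoded Kaggle labels (the paper did this in MATLAB; the sketch below is an equivalent in Python) follows the 2018 Data Science Bowl convention of 1-indexed, column-major 'start length' pairs.

```python
import numpy as np

def rle_decode(rle, shape):
    """Decode a run-length-encoded mask into a binary array.

    rle: string of space-separated 'start length' pairs, with pixels indexed
         from 1 in column-major (top-to-bottom, then left-to-right) order.
    shape: (height, width) of the target mask.
    """
    flat = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    numbers = list(map(int, rle.split()))
    for start, length in zip(numbers[0::2], numbers[1::2]):
        flat[start - 1:start - 1 + length] = 1
    return flat.reshape(shape, order="F")  # column-major, per the competition spec

# Example: a 3-pixel run starting at pixel 2 of a 4x4 mask.
mask = rle_decode("2 3", (4, 4))
```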
Mammary acini, MCF10A monolayer growth, and mitosis data collection The MCF10A data and fluorescent mitosis data were collected on an Olympus FV10i confocal microscope with a 60X objective on the MCF10A human breast epithelial cell line. The cell nuclei were stained with 1 µM SiR-DNA for 1 hour before imaging. The test set consists of 25 experiments with the corresponding ground-truth binary labels. MCF10AT acini were grown and the acini disorganization assays were performed as described in Shi et al. [4] The fluorescent mitosis data were collected on an Olympus FV10i confocal microscope with a 60X objective on the MCF10A human breast epithelial cell line stably expressing H2B-eGFP, with a 10-minute time interval over 3.5 days. [4] The final fluorescent mitosis training dataset used for our model contains 518 images for training and 57 images for validation. Liver tissue slide collection Biopsied liver tissue slides were stained with hematoxylin and eosin and scanned with Philips Intellisite Pathology Solutions or Aperio AT2 scanners. To accelerate the training process, each liver slide was downsized 8-fold and partitioned into 20-30 tiles of 256 by 256 pixels. The fat globules were manually annotated by a pathologist for both training and validation datasets. The final training dataset used for our model contains 247 images for training and 10 images for validation. Histopathology mitosis dataset preprocessing The Mitosis dataset was downloaded from the ICPR 2012 [43] and ICPR 2014 [44] mitosis detection contests. Breast cancer biopsy slides ranging from low-grade atypia to high-grade atypia were stained with hematoxylin and eosin and scanned by two scanners: Aperio Scanscope XT and Hamamatsu Nanozoomer 2.0-HT. Mitosis events were annotated by at least two individual pathologists. Training datasets acquired at 40X magnification from ICPR 2012 and 2014 were used for training the model. All the images were manually curated and training data without any mitosis events were excluded. We also converted the coordinates of mitosis locations into binary masks for both training and validation labels using MATLAB scripts. The final training dataset used for our model contains 621 images for training and 69 images for validation. To accelerate the training, we down-sampled the original images by a factor of 2. The trained model was further tested with the ICPR 2012 test dataset. Data augmentation To accelerate the training process, only simple data augmentation techniques were applied to the training images. We adopted mirror flip and small rotation (10 degrees, counterclockwise) for training data to alleviate overfitting. Synthetic data generation Synthetic cell nuclei images were generated by utilizing nuclei-like blobs (adapted from https://stackoverflow.com/questions/3587704/good-way-to-procedurally-generate-a-blob-graphic-in-2d), as well as randomly shaped polygons/lines. Signal (brightness) variations were added to both blobs and polygons/lines. The sizes of nuclei-like blobs, polygons and lines were varied image-by-image to simulate different imaging conditions. The synthetic images were generated with various image sizes, with width and height ranging from 256 pixels to 640 pixels. Gaussian noise and Gaussian blur were added to these images. We allowed blobs to overlap to strengthen the model's capability in separating touching nuclei. The binary masks of the synthetic images were generated separately.
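A toy version of such a synthetic generator is sketched below (illustrative only; it draws circular blobs rather than the procedurally generated blob shapes referenced above, and all sizes, counts and intensity ranges are assumptions). The separation of overlapping blobs in the corresponding masks, described next, can then reuse the known blob positions as watershed markers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_nuclei_image(size=256, n_blobs=20, seed=None):
    """Generate a toy synthetic training pair: nuclei-like blobs, blur and noise."""
    rng = np.random.default_rng(seed)
    image = np.zeros((size, size), dtype=np.float32)
    mask = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_blobs):
        cy, cx = rng.integers(0, size, 2)
        radius = rng.integers(6, 15)
        blob = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2   # circular "nucleus"
        image[blob] = rng.uniform(0.4, 1.0)                     # per-blob brightness
        mask[blob] = 1                                          # blobs may overlap
    image = gaussian_filter(image, sigma=1.5)                   # Gaussian blur
    image += rng.normal(0.0, 0.05, image.shape)                 # Gaussian noise
    return image, mask
```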
To correctly separate all overlapping blobs in the corresponding segmentation masks, the positions of blobs were used as markers to apply the watershed transform [36] on overlapping blobs. Training and inference details To construct the training data, we incorporated 543 training images from the Kaggle dataset and 25 training images from the MCF10A dataset as the base-training dataset. After data augmentation, the training set contained 568 (original) + 568 (flip) + 568 (rotate) = 1704 images. Then we mixed the real images with synthetic images at a 1:1 ratio to generate the final training dataset. The training images were normalized by subtracting the foreground mean value and dividing by the foreground standard deviation. Since U-Net contains four down-sampling and up-sampling layers, to make the tensors at each layer compatible, training images were further cropped so that the widths and heights of the images were adjusted to the nearest multiple of 16. To train RPN, the ground-truth coordinates for bounding boxes were calculated based on the binary nuclei masks. The bounding box coordinates (x_min, y_min) and (x_max, y_max) denote the upper-left-most and lower-right-most pixels of the corresponding nucleus. Weight matrices were calculated per mask with w₀ = 10 and σ² = 5 pixels. To avoid out-of-memory errors, one image was fed into the network at a time. During the training, the sequence order of the training data was reshuffled before each epoch to prevent overfitting. The learning rate was set to 5e-5, RMSprop [47] was used as the training optimizer, and the best-performing model was chosen within the first 30 epochs. The training loss was the sum of the segmentation loss and the detection loss. The segmentation loss was the sum of the binary cross-entropy loss [10] and the Dice loss, and the detection loss was the sum of the class loss and the regression loss, as described in previous work. [18] Two validation datasets were used to benchmark the model performance. The Kaggle validation dataset [33] contains 50 images that have various types of nuclei under different imaging conditions. The MCF10A dataset contains 25 images that have homogeneous nuclei imaged under the same settings. This study was performed on an Nvidia Quadro P4000 GPU. Additional segmentation performance is shown in S6 Fig. Model evaluation Eight models were chosen to compare their performance on both the Kaggle and MCF10A validation datasets, including Otsu's Method [8], Deep Cell 1.0 [14], U-Net [15], Mask R-CNN [19], NuSeT with whole-image normalization and without synthetic data, NuSeT with whole-image normalization, NuSeT with foreground normalization, and NuSeT with foreground normalization and RPN-aided watershed. The entire training dataset (with data augmentation and synthetic images) was applied to train all NuSeT models. To test Deep Cell 1.0's performance on the Kaggle and MCF10A datasets, we selected the HeLa fluorescent nuclei model from the initial set of models at http://www.deepcell.org/predict (accessed on Feb 25th, 2019). Since no pre-trained two-dimensional fluorescent nuclei segmentation model was found from U-Net [15,34], we trained U-Net on our training dataset (without synthetic data) as our closest estimate for performance. The original Mask R-CNN model was trained for real-life segmentations. Therefore, we trained Mask R-CNN on our training dataset (without synthetic data) starting from the FPN-101 backbone.
[48] We did not apply the aforementioned modified RPN to Mask R-CNN, since Mask R-CNN performs the segmentation strictly after the RPN detection, effectively blocking information transfer between the detection and the segmentation modules. From the prediction masks of all models, we removed cells smaller than 1/5 of the average cell area in the image prior to benchmarking. To evaluate model performance, we adopted the following performance metrics: percentage of touching cells separated, correct detections, incorrect detections, split errors, merge errors, catastrophe errors, false-negative detection rate (F.N. rate), false-positive detection rate (F.P. rate), mean IU, RMSE, F1 and pixel accuracy. The first eight metrics were evaluated at the nuclei level, and the last four metrics indicate performance at the pixel level. The calculation of correct and incorrect detections, as well as split, merge and catastrophe errors, has been described in previous works. [29,30] Briefly, correct detections denote the number of predicted cells that can be linked with ground-truth cells, and incorrect detections refer to the number of unlinked cells from the prediction. Split, merge and catastrophe errors are subsets of incorrect detections, where split and merge errors describe the splitting and merging of ground-truth cells into prediction cells, and catastrophe errors refer to the uneven matching of ground-truth and prediction cells. [29,30] The percentage of touching nuclei separated is calculated as $\%\,\text{nuclei separated} = N_{\text{nuclei separated}} / N_{\text{total overlapping nuclei}}$, where $N_{\text{nuclei separated}}$ denotes the number of touching nuclei that have been successfully separated by the model and $N_{\text{total overlapping nuclei}}$ denotes the total number of touching nuclei in the entire dataset. The F.N. rate is the proportion of nuclei that the model fails to detect in the entire dataset. A detection failure is defined as follows: given a nucleus' ground-truth binary mask, find the corresponding model-predicted mask that has the largest overlap ratio, measured in terms of $A_{GT}$, the area of the ground-truth nucleus, and $A_{pred}$, the area of the model-predicted nucleus. If the overlap ratio is smaller than 0.7, the model is considered to have failed to detect the nucleus. Hence, the F.N. rate is given by $\text{F.N. rate} = N_{\text{missing nuclei}} / N_{\text{total nuclei}}$, where $N_{\text{missing nuclei}}$ denotes the number of nuclei that the model fails to detect and $N_{\text{total nuclei}}$ denotes the total number of nuclei labelled by the ground truth in the dataset. Likewise, the F.P. rate is the proportion of nuclei that the model mis-detects in the entire dataset. A mis-detection is defined as follows: given a nucleus' model-predicted mask, find the corresponding ground-truth mask that has the largest overlap; if the overlap ratio of the model-predicted mask and the ground-truth mask is smaller than 0.7, the model is considered to have detected a 'nucleus' that does not exist in the ground truth. Hence, the F.P. rate is given by $\text{F.P. rate} = N_{\text{mis-detections}} / N_{\text{total nuclei}}$, where $N_{\text{mis-detections}}$ denotes the number of model-predicted nuclei that found no match in the ground-truth labels. The pixel-level metrics mean IU, F1, RMSE and pixel accuracy were calculated as $\text{mean IU} = \frac{1}{N_{cls}}\sum_{n=1}^{N_{cls}} \frac{TP_n}{TP_n + FP_n + FN_n}$, $F1 = \frac{2\,TP}{2\,TP + FP + FN}$, $\text{RMSE} = \sqrt{\frac{1}{N_{pix}}\sum_{i=1}^{N_{pix}} \left(y_i^{pred} - y_i\right)^2}$ and $\text{pixel accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$, where TP, TN, FP and FN denote the pixel-level counts of true positives, true negatives, false positives and false negatives for a single image.
N_cls denotes the number of classes a pixel can be assigned to (in our case N_cls = 2, foreground and background), and TP_n denotes the true-positive count for class n. N_pix is the number of pixels in the image, y_i^pred is the binary value of pixel i in the model-predicted mask, and y_i is the binary value of pixel i in the ground-truth mask. The pixel-level metrics over the entire dataset were then calculated as the average metrics over all the images in the dataset. Precision and recall were calculated as TP/(TP+FP) and TP/(TP+FN). Supporting information S1 Table. Internal performance comparison across different datasets. Step-by-step addition of synthetic data, foreground normalization, and RPN-aided watershed results in better performance at the object level. Notice that the pixel-level accuracies (mean IU, RMSE, F1, pixel accuracy) are similar, despite marked differences in object-level metrics.
8,057
2019-08-28T00:00:00.000
[ "Computer Science" ]
First data on microflora of loggerhead sea turtle (Caretta caretta) nests from the coastlines of Sicily ABSTRACT Caretta caretta is threatened by many dangers in the Mediterranean basin, but most are human-related. The purposes of this research were: (i) to investigate microflora in samples from six loggerhead sea turtle nests located on the Sicilian coast and (ii) to understand microbial diversity associated with nests, with particular attention to bacteria and fungi involved in failed hatchings. During the 2016 and 2018 summers, 456 eggs and seven dead hatchlings from six nests were collected. We performed bacteriological and mycological analyses on 88 egg samples and seven dead hatchlings, allowing us to isolate: Fusarium spp. (80.6%), Aeromonas hydrophila (55.6%), Aspergillus spp. (27.2%) and Citrobacter freundii (9%). Two Fusarium species were identified by microscopy and were confirmed by PCR and internal transcribed spacer sequencing. Statistical analyses showed significant differences in the presence/absence of microflora between nests, whereas no significant differences were observed between eggs within nests. This is the first report that catalogues microflora from C. caretta nests/eggs in the Mediterranean Sea and provides key information on potential pathogens that may affect hatching success. Moreover, our results suggest the need for wider investigations over extensive areas to identify other microflora, and to better understand hatching failures and mortality related to microbial contamination in this important turtle species. INTRODUCTION The loggerhead sea turtle (Caretta caretta) is a vulnerable species according to the International Union for Conservation of Nature (IUCN) and is included as a protected species under different international conventions [e.g. the Barcelona Convention, the Bern Convention and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES)]. Although C. caretta is threatened by many dangers, most are related to human interactions; other, indirect threats are also present: ingestion of fishing hooks or plastic can seriously damage the animal's gastrointestinal tract, while collisions with boats, accidental capture in fishing nets and the influence of beach tourism on habitats and nesting sites all pose additional threats (Caracappa et al., 2018; Mingozzi et al., 2008). The loggerhead sea turtle is the only sea turtle species that nests along the eastern Sicilian coastline and on the Lampedusa and Linosa Islands. In previous years, loggerhead nesting sites were recorded on the coasts of Sicily, Sardinia, Apulia and the Ionian coasts of the Basilicata and Calabria regions (www.lifegate.it). However, at the national level, nesting is considered sporadic, except for the Ionian coast of southern Calabria and the Pelagian Islands (Linosa and Lampedusa), where loggerhead nesting sites have been confirmed. In recent years, there have been significant increases in the number of nests recorded along Italian shores, reaching 70 nests in the summer of 2018, along with increases in deposition sites in Sicily. In Sicily, not only has the number of reported nests increased, but nesting has also involved several coastal areas in addition to the Pelagian Islands, where nesting has always occurred (www.legambiente.it).
It is important to note that loggerhead survival, egg development and carapacial abnormalities are influenced by environmental factors, such as temperature, humidity, distance from the sea, tidal flow, rain levels and sediment granulometry (Baran et al., 2001; Caracappa et al., 2016; Durmuş et al., 2011; Santoro et al., 2006). These environmental parameters or conditions can cause or encourage bacterial and fungal proliferation, affecting correct embryonic development and thereby causing hatching failures. Indeed, turtle nests, thanks to the presence of nutrients, high temperatures and high humidity, are ideal habitats for microbial growth, and these factors can impact hatching success by altering nest temperatures and oxygen content (Bézy et al., 2015; Keene et al., 2014). Several microorganisms have been identified and isolated from unhatched turtle eggs and some are considered important causes of nest mortality (Bailey et al., 2018; Sarmiento-Ramírez et al., 2010). Bacteria belonging to the genus Vibrio have been isolated from cloacal swabs of Lepidochelys olivacea and Chelonia mydas agassizii turtle nests in Costa Rica and Mexico (Acuña et al., 1999; Zavala-Norzagaray et al., 2015). The presence of Enterobacteriaceae, such as Escherichia coli, Salmonella spp., Enterobacter spp., Klebsiella oxytoca, Klebsiella pneumoniae, Citrobacter spp., Serratia spp., Pseudomonas spp. and Aeromonas spp., has been found in C. mydas and L. olivacea eggs (Al-Bahry et al., 2009; Keene et al., 2014; Santoro et al., 2006). In terms of Gram-positive bacteria, the most frequently isolated species of staphylococci from C. mydas nests in Costa Rica were Staphylococcus aureus, Staphylococcus intermedius, Staphylococcus epidermidis and Staphylococcus chromogenes (Santoro et al., 2006). The fungal species isolated from sea turtle nests and eggs belong to the Aspergillus, Fusarium, Chrysosporium, Penicillium, Emericella, Rhizopus, Actinomucor and Apophysomyces genera (Bailey et al., 2018; Candan, 2018; Güçlü et al., 2010; Phillott et al., 2002). These genera are predominantly saprophytic species that become opportunistic pathogens under particular conditions, e.g. in developing embryos. In particular, Fusarium solani and Fusarium oxysporum have been reported in the nests of different turtle species in Turkey, Costa Rica, Australia, Brazil, Cape Verde and Italy (Güçlü et al., 2010; Keene et al., 2014; Neves et al., 2015; Phillott and Parmenter, 2014). Fusarium falciforme and Fusarium keratoplasticum have also been reported in loggerhead sea turtle nests in the USA (Bailey et al., 2018). According to some studies (Phillott, 2004; Sarmiento-Ramírez et al., 2010; and references therein), fungal species belonging to the Fusarium genus are the leading cause of hatching failure in turtle eggs. These fungal species target eggs located at the top and sides of nests, at positions in close contact with the surrounding sand. After penetrating the inorganic and organic shell layers, these fungal species reduce respiratory gas exchange, decrease the availability of eggshell calcium for developing embryos and exploit embryonic tissue as a nutrient source (Phillott, 2004). Currently, in the Mediterranean Sea, there is a dearth of information on the microbial contamination of eggs/nests of C. caretta and its possible implications for egg hatching.
The aims of this study were to investigate the microflora in six loggerhead sea turtle nests located on the Sicilian coast, and to better understand microbial diversity associated with these nests, paying particular attention to bacterial and fungal species as potential causes of hatching failures. RESULTS During our study, 88 samples from six nests were analysed to provide data on the microflora of loggerhead sea turtle nests located on the Sicilian coast. The results of bacteriological and mycological investigations are summarised in Table 1. All samples, except those from nest 4 (from Linosa Island), were positive for one or more fungal colonies: in particular, three morphologically different types of fungal colonies grew on Sabouraud Dextrose Agar (SDA). From nests 1 and 2, two different colonies ascribable to the Aspergillus genus were isolated (n=24). Although molecular identifications were not performed, the typical growth on SDA, associated with microscopic observations, suggested that these colonies belonged to the A. fumigatus and A. flavus species. In addition, amongst the SDA plates (n=88), 71 showed fungal colonies potentially attributable to Fusarium spp. (Table 1); these colonies were white-cream and salmon pink, with a light brown reverse. The microscopic appearance supported a Fusarium genus identification and suggested the possible presence of two species: F. solani and F. oxysporum (Fig. 1) (Leslie et al., 2006; Nelson et al., 1983). Morphological observations of these fungal colonies were confirmed by molecular analyses performed randomly on at least two positive plates from each nest. Sequences from the internal transcribed spacer (ITS) region were compared with data available from GenBank® and confirmed that the colonies were F. solani and F. oxysporum. A single haplotype was observed for each of the two species (GenBank accession numbers MN960391-92), and these two haplotypes were identical to the haplotypes in GenBank® for these species. Accordingly, all 71 isolated colonies were ascribed to F. solani or F. oxysporum (Fig. 2 and Table 1). PERMANOVA analyses on the multivariate dataset showed significant differences in the presence/absence of bacteria between nests (Si), whereas no significant differences were observed for the interaction between eggs and sites [Eg(Si)], as shown in Table 2. DISCUSSION For the first time, this study has investigated the microflora in eggs and dead hatchlings from six loggerhead sea turtle nests located on the Sicilian coast. We detected bacterial strains belonging to the genera Aeromonas and Citrobacter, and fungi belonging to the genera Aspergillus and Fusarium. Specifically, the most prevalent microorganism strains were Fusarium spp. (80.6%) followed by A. hydrophila (55.6%). Importantly, our results are in agreement with previous data on the microflora of sea turtle nests and on possible pathogens (bacteria and fungi) that influence hatching (Keene et al., 2014; Phillott and Parmenter, 2014; Sarmiento-Ramírez et al., 2010). Indeed, bacteria belonging to the Aeromonas genus are ubiquitous and are often isolated in marine and coastal environments (Dumontet et al., 2000; Fiorentini et al., 1998). Aeromonas spp. can infect eggs by penetrating shell pores, where they exploit interior substrates, allowing the bacteria to proliferate (Soslau et al., 2011).
Although turtle eggshells are semi-permeable, they do not completely inhibit the passage of bacteria, as eggs in contaminated substrates can acquire internal infections within hours of bacterial contact (Feeley and Treger, 1969). Wyneken et al. (1988) reported pathogenic microorganisms in the eggs of C. caretta that could explain the significant hatching losses occurring in sea turtle nests. Indeed, approximately 75% of bacteria isolated from turtle eggs can play pathogenic roles, and have also been detected in mammals, amphibians, birds and fish (Craven et al., 2007). This leads to the hypothesis that they act as opportunistic pathogens in turtle eggs under conditions that favour their proliferation. Fungi from the Fusarium genus are considered saprophytes, but they can act as opportunistic pathogens in immunocompromised subjects or in developing embryos, especially under environmental stress conditions (Güçlü et al., 2010). Critically, the two Fusarium species identified in this study, F. solani and F. oxysporum, are recognised as causes of reduced hatching rates in sea turtle nests, and can occasionally cause 100% mortality in turtle embryos (Güçlü et al., 2010; Keene et al., 2014; Neves et al., 2015; Sarmiento-Ramírez et al., 2010). The ability of F. solani and F. oxysporum to penetrate egg shells and invade the embryonic tissue is due to the production of lipolytic and proteolytic enzymes that degrade inorganic and organic egg components (Phillott, 2004). Additionally, Bailey et al. (2018) demonstrated the presence of Fusarium DNA (F. falciforme and F. keratoplasticum) in embryonic fluid and biofilms from 73 fully incubated, unhatched loggerhead sea turtle eggs collected from different regions of North America. However, a recent molecular study from Turkey identified fungi from five genera (Aspergillus, Emericella, Rhizopus, Actinomucor and Apophysomyces) isolated from successfully hatched green turtle (Chelonia mydas) nests on eastern Mediterranean coasts (Candan, 2018). Moreover, these authors demonstrated that the hatching success of nests contaminated by fungi was significantly lower than that of uncontaminated nests. Although this study does not sufficiently demonstrate the cause of C. caretta sea turtle nest failures on the Sicilian coastline, it does present key information on the microflora found in such nests with hatched/unhatched eggs and dead hatchlings. According to our observations on Linosa Island nests (using data loggers), the temperature and humidity recordings of these particular nests were very high during incubation periods. In particular, the average recorded temperatures were in the range 30-35°C, while the relative humidity was greater than 95% (unpublished data). These high values are potentially lethal for embryonic development and are optimal for the development of pathogenic microflora. Data loggers should be increasingly used to constantly record the main environmental variables (temperature and moisture) associated with the study of the characteristics/nature of the substrate. (Fig. 2 caption: node support is reported as 'BI nodal posterior probabilities'/'ML bootstrap support'; the accession numbers of the sequences derived from GenBank are shown in brackets; 'Haplotype 1', observed in nests 3 and 6, corresponds to F. oxysporum; 'Haplotype 2', observed in nests 1, 2, 5 and 6, corresponds to F. solani; see Table 1 for details on the occurrence of the two species in the studied sites and samples.) In this
particular instance (Linosa Island), the granulometric composition of the beach where the eggs were laid may have played an important role in pathogen development. Interestingly, site diversification was highlighted by PERMANOVA analyses, suggesting a different contribution of this particular nest to the microflora. Environmental factors which influence hatching success, such as the different sand grain size at nesting sites, humidity, soil temperature and interference from anthropic activities, should also be taken into consideration in future studies. Moreover, according to several studies, cloacal swabs should be taken immediately after egg laying to check whether microflora has been transmitted to the young by the mother (Phillott, 2004; Phillott and Parmenter, 2014). Finally, our hypothesis that the hatching failures at our six nests were caused by pathogenic microorganisms requires more information and analysis. Considering that in all the nests studied the number of unhatched eggs was high, it is not surprising that A. hydrophila, F. oxysporum and F. solani were isolated. Besides being ubiquitous and widespread in marine and coastal environments, these microorganisms have been associated with hatching failures in the past (Phillott, 2004; Sarmiento-Ramírez et al., 2010). Furthermore, from our results table (Table 1), we observed that A. hydrophila occurred in association with Fusarium spp. in nests where the number of unhatched eggs exceeded that of hatched eggs. However, in the absence of cloacal swab analysis of the females, we cannot state with certainty whether these microorganisms were the cause of the hatching failures, or whether they proliferated by finding favourable substrates in the unhatched eggs. This study only focused on six nests from four territorial sites; therefore, wider investigations over extensive areas are required to better understand the causes of hatching failures, as well as the high hatchling mortalities caused by microorganism contamination or environmental conditions. Sample collection As part of the monitoring activities of the Centro di Referenza Nazionale sul benessere, monitoraggio e diagnostica delle malattie delle tartarughe marine (CReTaM) of the Istituto Zooprofilattico Sperimentale della Sicilia (IZSSi), during the 2016 and 2018 summers, the centre received 463 samples from six different nests on the Sicilian coast (Table 3). Samples were transferred to the IZSSi in sterile biological bags for laboratory analyses. Bacteriological and mycological analyses were performed for each nest, on hatched and unhatched eggs that were randomly selected. A total of 56 unhatched eggs, 25 hatched eggs and seven dead hatchlings were analysed (Table 3). Bacteriological and mycological analyses All samples were washed in sterile water to remove sand residues. The shells from hatched eggs were homogenised in 9 ml alkaline peptone water (APW) broth and incubated at 25°C for 24-48 h. For unhatched eggs, after opening with sterile scissors and sterile forceps, swabs were taken from the shell and the contents. For dead hatchlings, swabs were taken from the belly surface, after which an incision was made with a sterile scalpel to allow swabbing of the abdomen. The swabs were then transferred to 9 ml APW and incubated at 25°C for 24-48 h. After this period, approximately 10 µl of APW was aseptically spread onto selective agar plates. Samples were spread onto blood agar for the growth of different bacterial species and thiosulphate citrate bile salts sucrose agar for Vibrio spp. isolation.
These plates were incubated at 25°C for 24-48 h. Samples were spread onto MacConkey agar plates for the isolation of Enterobacteriaceae and mannitol salt agar plates for the isolation of Staphylococcus spp. Plates were incubated at 37°C for 24 h. The presence of Salmonella spp. in egg contents and embryos (when present) was also tested. Firstly, samples for pre-enrichment were placed into 9 ml buffered peptone water broth, followed by enrichment in selenite cystine broth and Rappaport-Vassiliadis broth, and then seeded on xylose lysine deoxycholate agar and brilliant green agar. After dissociation in generic culture medium, bacterial isolates were identified using the API test (Awong-Taylor et al., 2008). Mycological examinations were conducted by seeding shells and swabs on SDA. Plates were incubated at room temperature for 7 days. After this period, the isolated fungi were stained with Giemsa and morphologically identified according to guidelines from Leslie et al. (2006) and Nelson et al. (1983). Molecular identification of fungi Among the 71 SDA plates positive for Fusarium spp. growth, two positive plates from each nest were randomly selected for molecular identification. Fungal DNA was extracted using the Quick-DNA™ Fungal/Bacterial Miniprep Kit (Zymo Research, Irvine, CA, USA), according to the manufacturer's instructions. PCR of the ITS region was performed using the primer pair ITS-1 (5′-TCCGTAGGTGAACCTGCGG-3′) and ITS-4 (5′-TCCTCCGCTTATTGATATGC-3′), as described previously (White et al., 1990). PCR products were purified and sequenced by Macrogen Inc. (Seoul, South Korea) on an ABI3130xL (Applied Biosystems, Carlsbad, CA, USA) sequencer. In addition to comparing our sequences with those available in the public repository, GenBank, 15 Fusarium spp. sequences and one Nectria atrofusca sequence (used as an outgroup) were included in these analyses (see Fig. 2 for accession numbers). Novel and GenBank sequences were aligned using ClustalX (Thompson et al., 1997) and manually trimmed to remove tails which were not present in all samples. jModelTest ver. 2.1.10 (Darriba et al., 2012) was used to test for the best-fitting models of nucleotide substitution for the ITS dataset under the Akaike information criterion; the best-fit model proved to be a generalised time-reversible model with gamma-distributed rate variation among sites (GTR+G). The genetic identification of our samples was performed using Bayesian inference (BI) and maximum likelihood (ML) methods as implemented in MrBayes v. 3.2.6 (Ronquist et al., 2012) and PhyML v. 3 (Guindon and Gascuel, 2003), respectively. As a measure of branch support, bootstrap values were calculated with 1000 replicates in the ML trees. For the BI, two independent Markov Chain Monte Carlo analyses were run for 2 million generations (temp.: 0.2; default priors). Trees and parameter values were sampled every 100 generations, resulting in 20,000 saved trees per analysis; an initial fraction of 5000 trees (25%) was conservatively discarded as 'burn-in'. The statistical support of BI nodes was evaluated by their posterior probabilities. Statistical analyses A permutational multivariate analysis of variance (PERMANOVA, Anderson et al., 2008) was performed to test the null hypothesis of no differences in the presence/absence of different bacteria between nests and between hatched/unhatched eggs.
The analysis was based on Bray-Curtis dissimilarities (Bray and Curtis, 1957) computed from the presence/absence dataset, and each term in the analysis was tested by 1999 random permutations of the appropriate units. The experimental design comprised two factors [Site (Si; six levels, fixed and orthogonal) and eggs (Eg; two levels, random and nested in Si)] and five variables: the presence/absence of the following taxa: A. hydrophila, C. freundii, Aspergillus spp., F. solani and F. oxysporum.
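As a rough illustration of this kind of test, the sketch below runs a one-way PERMANOVA on a toy presence/absence matrix using Bray-Curtis dissimilarities with scikit-bio; it only tests the Site factor and does not reproduce the nested two-factor design used in the study, and all data here are synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.distance import permanova

# Toy presence/absence matrix: rows = samples, columns = the five taxa
# (A. hydrophila, C. freundii, Aspergillus spp., F. solani, F. oxysporum).
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(30, 5))
data[data.sum(axis=1) == 0, 0] = 1            # avoid all-zero rows (undefined Bray-Curtis)
site = np.repeat([f"nest{i}" for i in range(1, 7)], 5)   # grouping factor (Si)

# Bray-Curtis dissimilarity on presence/absence data (equivalent to Sorensen).
dm = DistanceMatrix(squareform(pdist(data, metric="braycurtis")),
                    ids=[str(i) for i in range(len(data))])
print(permanova(dm, grouping=site, permutations=1999))
```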
Saddle-node of limit cycles in planar piecewise linear systems and applications In this article, we prove the existence of a saddle-node bifurcation of limit cycles in continuous piecewise linear systems with three zones. The bifurcation arises from the perturbation of a non-generic situation, where there exists a linear center in the middle zone. We obtain an approximation of the relation between the parameters of the system such that the saddle-node bifurcation takes place, as well as of the period and amplitude of the non-hyperbolic limit cycle that bifurcates. We consider two applications: first, a piecewise linear version of the FitzHugh-Nagumo neuron model of spike generation, and second, an electronic circuit, the memristor oscillator. 1. Introduction. Bifurcations are qualitative changes in the phase portrait of families of differential equations as the parameter varies. The simplest cases are those involving equilibria. The next level of complexity comprises bifurcations where limit cycles are involved. In particular, in a planar saddle-node bifurcation of limit cycles, also called a fold of limit cycles, the phase portrait passes from exhibiting two hyperbolic limit cycles of different stability surrounding an equilibrium point to the disappearance of such cycles through their collision in a non-hyperbolic semistable limit cycle. This bifurcation is very common in applications. Particularly relevant is the bistable configuration, where the equilibrium point in the interior of both limit cycles and the outer limit cycle are stable, whereas the inner limit cycle is unstable. In this configuration, the basins of attraction of both attractors are bounded by the unstable limit cycle, which can be considered as a threshold between the resting regime and the oscillatory one. These bistable systems are ubiquitous in biology [45], and the saddle-node bifurcation is then a route for explaining some phenomena, for instance annihilation and single-pulse triggering [26]. In this phenomenon, oscillatory behavior is stopped by injecting a sub-threshold pulse, and then the activity is restarted by injecting a supra-threshold pulse. Annihilation has been described in several biological oscillators, such as the activity of the sinoatrial node [27], the eclosion rhythm of fruit flies, the circadian rhythm of bioluminescence in marine algae and biochemical oscillators, see [45]. Moreover, the saddle-node bifurcation is also involved in the generation of elliptic bursting [30], a bursting mechanism (a type of oscillatory behavior of excitable systems whose main characteristic is an alternation of quiescent phases and phases of rapid oscillations [29]), which takes place in rodent trigeminal neurons [15]. Also in electronic circuits, phenomena involving saddle-node bifurcations can appear, see [1,43]. In particular, they are observed in the well-known Chua's circuit [23,24]. This phenomenon is of great interest for the control system designer. Very few works address the proof of the existence of a saddle-node bifurcation of limit cycles in general nonlinear systems. In [12], it was shown that this bifurcation occurs in a generic two-parameter unfolding of a homoclinic orbit with resonant eigenvalues. In [31, Th. 3.6], the authors prove the existence of a curve of saddle-nodes of limit cycles in the canard regime in a family of slow-fast systems. However, it is not an easy task to give an expression, in the parameter space, of where this bifurcation takes place.
Alternatively, numerical methods can be used to analyze the saddle-node bifurcation of limit cycles. In fact, new methods and continuation packages have been developed with this goal [25]. Since their appearance in the book of Andronov, Vitt and Khaikin [2], piecewise linear (PWL) systems have shown their capability not only to capture different behaviors coming from a wide class of applications [17,32], but also to reproduce a large number of aspects of nonlinear dynamics. Furthermore, these systems show new behaviors, impossible to obtain under the differentiability hypothesis [16,17,32]. Together with the property of mimicking the richness of nonlinear dynamics, PWL systems allow a simpler analytical treatment. This property makes it possible, in many cases, to obtain quantitative information about the analyzed dynamical objects (for instance, the period and amplitude of limit cycles [18,22,28]) and to explain the way some bifurcations take place [6]. The saddle-node bifurcation of limit cycles in the PWL context has been reported in different publications. In [35], the authors prove the existence of a codimension-1 manifold of saddle-nodes of limit cycles in a family of planar continuous PWL systems with three zones and symmetry, coming from a heteroclinic connection. In [34], the authors study the number of limit cycles and prove the existence of two hyperbolic limit cycles with different stability surrounding one equilibrium point in a non-symmetric family of PWL systems. Even when this configuration is close to a saddle-node bifurcation, as can be observed, for instance, in Figure 1 c) of [34], where two limit cycles are close to one another, the saddle-node bifurcation is not reported in that paper. In [40], boundary equilibrium bifurcations in planar continuous PWL systems with two and three zones are studied. The authors find situations with two limit cycles and, in this case, conjecture the existence of a saddle-node bifurcation of limit cycles. Regarding three-dimensional PWL systems with two zones, in [23,24] the authors analyze the existence of a saddle-node of limit cycles as a degeneration of a focus-center-limit cycle bifurcation. Furthermore, they apply these results to Chua's circuit. In [6,10], the authors describe a noose bifurcation in a PWL version of the Michelson system; such a structure involves a saddle-node bifurcation, which is also analyzed. We finish this literature review with [7,8], where a generalization of the Melnikov theory to non-smooth systems was developed and applied to prove the existence of saddle-node bifurcations of limit cycles in discontinuous and hybrid PWL systems. In this article, we focus our attention on the proof of the existence of a saddle-node bifurcation of limit cycles in continuous PWL systems with three zones. A branch of saddle-nodes of limit cycles is proved to start at a degeneration of a focus-center-limit cycle bifurcation. This conclusion is established through the application of the Implicit Function Theorem to the closing equations together with a non-hyperbolicity condition. This technique has been previously used to prove the existence of limit cycles in [3,5,22,23,36] and of global connections in [9], but we remark that here it is applied not only to the set of closing equations, but also to the non-hyperbolicity condition. The starting point for applying the Implicit Function Theorem is often the perturbation of a linear center, either in the plane [3,7,21] or in space [4,6,22].
Moreover, the piecewise linear perturbation can be continuous [6,21,22] or discontinuous [3,7]. We consider two applications of our theoretical result. First, the McKean model, a PWL version of the FitzHugh-Nagumo system [20,38]. As far as we are aware, the existence of the saddle-node bifurcation of limit cycles in the original differentiable FitzHugh-Nagumo system has not been proved yet, although there exists numerical evidence of its existence [41,42]. In [44], the author considers a discontinuous version of the McKean model with two zones and proves the existence of a saddle-node bifurcation of limit cycles. In the present paper, we consider a continuous version of the McKean model with four zones of linearity: three zones to mimic the cubic, and one small extra zone in one of the folds in order to capture the passing of the solutions through the fold, see [19]. From our main result we conclude the existence in this model of a saddle-node bifurcation and derive explicit expressions both for the curve of saddle-nodes and for the period of the saddle-node limit cycle. The obtained result is compatible with those in [44]. As a second application, we consider the memristor oscillator [14]. For this electronic circuit, we analyze the version of the model that was considered in [34]. Although in [34] the authors find cases where two limit cycles exist, they do not focus their attention on the existence of the saddle-node bifurcation of limit cycles. The paper is organized as follows. In Section 2, we introduce the target system and the main result. After that, Section 3 is devoted to the application of our result to, first, a PWL version of the FitzHugh-Nagumo neuron model and, second, the memristor oscillator. Subsequently, in Section 4 we include the proofs of the result established in Section 2. In Section 5 we state some conclusions and perspectives. Finally, we include two appendices. Appendix A is devoted to the most technical details of the proof of the main result and, in Appendix B, we include an algorithm for fine-tuning the external impulse in order to facilitate annihilation/regeneration in a voltage trace of the McKean model. 2. PWL system with three zones. Main result. We focus our attention on the continuous planar piecewise linear (PWL) system with three zones of linearity, u̇ = F(u), (1) where u = (u_1, u_2)^T, the dot denotes the derivative with respect to the variable s, and the piecewise linear vector field F takes the form M_L u + n_L for u_1 < v, M_C u + n_C for v ≤ u_1 ≤ w, and M_R u + n_R for u_1 > w, with v, w ∈ R, v < w, n_L, n_C, n_R ∈ R^2 and M_L, M_C, M_R 2 × 2 real matrices. The existence and uniqueness of solutions for the initial value problem associated with system (1) comes from the fact that F is a Lipschitz function. Note that, inter alia, the matrices M_L, M_C and M_R have to share their second columns due to the hypothesis of continuity of the vector field F. This shared column will be denoted by (m_12, m_22)^T, where the superscript T denotes transposition. When m_12 = 0, the variable u_1 is decoupled and the dynamical behavior of system (1) is not strictly bidimensional. We say, following [4], that system (1) is not observable. When the system is observable, an adequate change of variables allows one to write the system in the canonical Liénard form. We state this fact in the following result, where we also transform the values v and w into −1 and 1, respectively. The proof of this result is direct. Proposition 1. Suppose that m_12 ≠ 0.
Then, there is a change of variables that transforms system (1) into the Liénard canonical form, denoted system (3), with the switching lines placed at X = −1 and X = 1. Our interest begins under the assumption that system (3) possesses a center configuration in the central zone |X| ≤ 1. This fact implies D_C > 0 and the existence of a unique equilibrium point (X̄_C, Ȳ_C) for the linear system (Ẋ, Ẏ)^T = N_C (X, Y)^T + d_C. Now, we establish a result, whose proof is straightforward, stating that, with a suitable change of variables, one can take D_C = 1 and Ȳ_C = 0. Proposition 2. Assume that D_C > 0. Then, there is a change of variables, with the prime denoting the derivative with respect to the new time t, that transforms the system into system (4) with D_C = 1 and Ȳ_C = 0. We remark that system (4) can be written in the form (7). To begin with, in the following result, whose proof is direct, we impose the conditions for the existence of a continuum of periodic orbits of system (7) with two tangency points. Proposition 3. If a_C = m = 0, then system (7) has a continuum of periodic orbits in the central zone. Moreover, the most external orbit Γ_0 of this continuum has two tangency points with the separation lines x = −1 and x = 1, at the points q_0 = (−1, 0) and q_1 = (1, 0), respectively. By perturbing this non-generic situation, we will look for non-hyperbolic periodic orbits Γ, living in the three regions L, C and R, coming from the most external orbit Γ_0 of the continuum. In particular, in Section 4 we prove the following result, which is the main result of this work, about the existence of a saddle-node bifurcation of limit cycles. Theorem 2.1. Consider system (7) with fixed values of the parameters t_L, t_R, d_L and d_R. If t_L · t_R < 0 and t_L + t_R ≠ 0, then there exists a function a_C^*(m), analytic as a function of m^{1/3} and defined in a neighborhood of the origin, with a_C^*(0) = 0, such that, for 0 < |m| ≪ 1, system (7) has a saddle-node bifurcation of limit cycles when a_C = a_C^*(m). Specifically, if t_L t_R (t_L^2 − t_R^2)(a_C − a_C^*(m)) < 0, then the system has two three-zonal limit cycles with opposite stability close to the periodic orbit Γ_0; if t_L t_R (t_L^2 − t_R^2)(a_C − a_C^*(m)) > 0, then the system has no limit cycles close to Γ_0; and if a_C = a_C^*(m), then the system has a unique three-zonal semi-stable limit cycle Γ close to Γ_0. Moreover, approximations of the function a_C^*(m) and of the period and amplitude of the limit cycle Γ are given in (8)-(10), where the amplitude A has been measured as the difference between the y-coordinate of the intersection of the orbit with the separation line x = −1 (positive) and the y-coordinate of the intersection of the orbit with the separation line x = 1 (negative). Remark 1. Note that, with the definition of the amplitude considered in Theorem 2.1, the tangent orbit Γ_0 of the unperturbed case of Proposition 3 has amplitude equal to zero. Remark 2. The condition t_L · t_R < 0 in Theorem 2.1 is a necessary condition to prove the existence of a saddle-node bifurcation of limit cycles perturbing from the tangent orbit Γ_0; see the expression of τ̄_L in (22). Hence, we conclude that this bifurcation is only possible under the condition that f(x; a_C, t_L, m, t_R) has a quadratic shape. This conclusion does not contradict the existence of saddle-node limit cycles when f(x; a_C, t_L, m, t_R) has a cubic shape (see, for instance, [35]), since in that case the saddle-node limit cycle perturbs from a heteroclinic connection.
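Because the explicit coefficient matrices of systems (1), (3) and (7) are not reproduced in this copy, the following Python sketch only illustrates the kind of object Theorem 2.1 deals with: a continuous three-zone PWL field of the form u̇ = M_i u + n_i, with the switching lines normalized to x = ±1 and the continuity constraint on the offsets enforced explicitly. All numerical values are placeholders chosen for illustration, not the paper's coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder three-zone continuous PWL vector field in the spirit of system (1).
# The matrices are written in a Liénard-like companion form so that their second
# columns coincide, as required by continuity; t_C = -m as in the text.
tL, tR, m = -0.4, 0.3, 0.01
dL, dR = 1.0, 1.0
ML = np.array([[tL, -1.0], [dL, 0.0]])
MC = np.array([[-m, -1.0], [1.0, 0.0]])
MR = np.array([[tR, -1.0], [dR, 0.0]])

# Continuity of F across x = -1 and x = 1 fixes the offsets once n_C is chosen
# (the unfolding parameter a_C of Theorem 2.1 would enter through n_C; zero here).
nC = np.zeros(2)
nL = nC + (MC - ML) @ np.array([-1.0, 0.0])
nR = nC + (MC - MR) @ np.array([1.0, 0.0])

def F(t, u):
    x = u[0]
    if x < -1.0:
        return ML @ u + nL
    elif x <= 1.0:
        return MC @ u + nC
    return MR @ u + nR

# Integrate two orbits starting on the section x = -1. With parameters tuned close
# to the saddle-node curve a_C = a_C*(m), one would look here for two nested limit
# cycles of opposite stability; with these arbitrary placeholders the orbits may
# simply converge or diverge.
for y0 in (0.3, 1.2):
    sol = solve_ivp(F, (0.0, 200.0), [-1.0, y0], max_step=0.05)
    print(f"y0 = {y0}: final point {sol.y[:, -1]}")
```

In practice one would scan a_C (through n_C) for fixed small m and detect the appearance or disappearance of the pair of cycles predicted by Theorem 2.1.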
Remark 3. In [34], the authors provide conditions on the traces and determinants of the coefficient matrices of the system, including the inequality t_L · t_R < 0, such that there exists at most one limit cycle, which precludes the presence of a saddle-node bifurcation of limit cycles. Note that the conditions in [34] are not satisfied under the hypotheses of Theorem 2.1, as it is assumed there that the trace of the coefficient matrix in the central zone, t_C = −m, is small enough in absolute value. Remark 4. With respect to the cases not included in Theorem 2.1, we can provide the following information. The symmetric case (t_L = t_R and a_C = 0) has been studied in [22], and only the existence of a limit cycle that perturbs from the linear center appears. This does not mean that the saddle-node bifurcation cannot be obtained from other configurations (see, for instance, [8,35]) and, as far as we are aware, the analysis of the saddle-node bifurcation in the symmetric case is not closed. The reversible case (t_L = −t_R, m = a_C = 0) implies an unbounded center. If t_L = −t_R = 0, Propositions 4 and 5 provide the existence of a family of non-hyperbolic periodic orbits with m = a_C = 0, corresponding to that of the unbounded center. In the case t_L = −t_R with m ≠ 0 and/or a_C ≠ 0, there could be a saddle-node that does not emerge from the tangent orbit, but instead arises from some three-zonal periodic orbit. This study requires a different analysis, such as the Melnikov theory used in [3,8]. In Fig. 1 we plot a schematic representation of the bifurcation diagram in the (m, a_C) parameter plane in the case t_L t_R (t_L^2 − t_R^2) < 0, for m and a_C sufficiently small. The solid line represents the saddle-node curve a_C = a_C^*(m). In the region a_C > a_C^*(m), two limit cycles with opposite stability and close to Γ_0 exist, and in the region a_C < a_C^*(m) no limit cycles close to Γ_0 exist. 3. Applications. In this section we consider two applications of our theoretical result. The first subsection is devoted to a PWL version of the FitzHugh-Nagumo model, the McKean model. The second subsection is focused on the memristor oscillator. 3.1. The McKean model. The McKean model is a simplified piecewise linear model of neuronal activity with regular firing [37]. This model can be derived from the FitzHugh-Nagumo model just by considering a piecewise linear approximation of the cubic nullcline of the voltage V. Let us consider the differential system (11), where C > 0 is the capacitance and the cubic nullcline of V is given here by a piecewise linear function with t_r > 0 and β^2 < 1/C. Some differences with respect to the standard McKean model are worth noting. First, the cubic nullcline of the FitzHugh-Nagumo model is here approximated by a four-linear-segment polygonal curve. The idea is to mimic, locally, one of the quadratic folds by three linear pieces (with a nearly flat central slope, namely t_c) instead of a corner. In some cases, this approximation gives results that fit better with the ones exhibited by the FitzHugh-Nagumo model, see [19]. Second, in the classical McKean model, the slope t_r of the segment defined in the strip V ∈ (a/2 + δ, (a + 1)/2) is set equal to one, while in our version of the model it is an independent parameter that can be changed.
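Since the explicit form of system (11) and of its four-segment nullcline is not legible in this copy, the sketch below assumes a common McKean-type form, C·V̇ = f(V) − w + I, ẇ = V − βw + w_0, with a piecewise linear f whose segment breakpoints and the constant (δ − a/2)(1 + t_c) are reconstructed from the expressions appearing later in the text (Corollary 1 and Appendix B). Both the system form and the placement of the extra t_c zone are assumptions; the parameter values are those reported further below for Figure 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values as reported for Figure 2 of the paper.
C, beta, w0, a, delta, tc, tr = 0.25, 0.5, 0.0, 1.0, 0.25, 0.1, 0.8
I = 1.265    # slightly above the bifurcation value I* ~ 1.263 reported in the paper

def f(V):
    """Assumed four-segment PWL caricature of the cubic nullcline (a reconstruction,
    not Eq. (11) verbatim): slope -1, then t_c on a zone around the left fold,
    then t_r, then -1 again."""
    v1, v2, v3 = a / 2 - delta, a / 2 + delta, (a + 1) / 2
    if V <= v1:
        return -V
    if V <= v2:
        return tc * V + (delta - a / 2) * (1 + tc)
    f2 = tc * v2 + (delta - a / 2) * (1 + tc)
    if V <= v3:
        return f2 + tr * (V - v2)
    return f2 + tr * (v3 - v2) - (V - v3)

def mckean(t, y):
    V, w = y
    return [(f(V) - w + I) / C,      # fast voltage equation (assumed form)
            V - beta * w + w0]       # slow recovery equation (assumed form)

sol = solve_ivp(mckean, (0.0, 200.0), [0.0, 0.0], max_step=0.01)
V = sol.y[0]
print("V range over the last half of the run:",
      V[len(V) // 2:].min(), V[len(V) // 2:].max())
# A wide V range suggests sustained spiking (outer stable cycle); a narrow range
# suggests convergence to the resting equilibrium, depending on I and initial data.
```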
By fixing the values of the parameters C, β, w_0, a and δ, in the following result we prove the existence, in the parameter space (t_c, I), of a saddle-node bifurcation of limit cycles in the McKean model. Additionally, approximate expressions for the bifurcation curve and for the period and amplitude of the saddle-node limit cycle are provided. We recall that in [44] the author proves a similar result, but in a discontinuous version of the McKean model with two zones of linearity. In Figure 2, we illustrate the passage through the saddle-node bifurcation in the case (βC + 1)(t_r + 1)(t_r − 2βC − 1) < 0. When the parameter I is smaller than the bifurcation value I^*, no limit cycles exist near the equilibrium point, which, locally, is an attractor, see panel (a). In panel (b), the parameter is just at the bifurcation value, I = I^*, and a non-hyperbolic limit cycle appears, the so-called saddle-node limit cycle. This limit cycle is stable from the outside and unstable from the inside. After passing through the bifurcation value, two concentric limit cycles perturb from the non-hyperbolic one. The outer limit cycle is stable whereas the inner one is unstable, see panel (c). Even though the bifurcation takes place involving three zones, some configurations exhibited by the system originate at the bifurcation. In particular, when the parameter increases far from the bifurcation value, the size of the inner limit cycle decreases and it becomes a two-zone limit cycle, whereas the size of the outer limit cycle increases and it becomes a four-zone limit cycle. Panel (d) shows this configuration for a parameter I sufficiently larger than I^*. Corollary 1. Consider system (11) with fixed values of the parameters C, β, w_0, t_r, a and δ. If (1 + βC)(t_r − βC) > 0 and t_r − 2βC − 1 ≠ 0, then there exists a function I^*(t_c), defined for t_c in a neighborhood of the value t_c^* = βC with (t_c − βC)(1 + t_r)(t_r − βC)(t_r − 2βC − 1) > 0, which corresponds to a saddle-node of limit cycles of the system. Specifically, if (1 + βC)(1 + t_r)(t_r − βC)(t_r − 2βC − 1)(I − I^*(t_c)) < 0, then the system exhibits two three-zonal limit cycles with opposite stability; if (1 + βC)(1 + t_r)(t_r − βC)(t_r − 2βC − 1)(I − I^*(t_c)) > 0, then the system has no limit cycles; and if I = I^*(t_c), then the system has a unique three-zonal semi-stable limit cycle. Approximations for the function I^*(t_c) (see expression (12)) and for the period of the saddle-node limit cycle are provided. Proof. Following Propositions 1 and 2, we consider a change of coordinates and time with r = (1 − βt_c)/C > 0. Figure 2. Saddle-node bifurcation in the McKean model (11) with C = 0.25, β = 0.5, w_0 = 0, a = 1, δ = 0.25, t_c = 0.1 and t_r = 0.8. According to Corollary 1, the bifurcation takes place at I^* = 1.263... In panel (a), the parameter I = 1.2 is smaller than the bifurcation value, so no limit cycles exist near the equilibrium point, which is a local attractor. In panel (b) the parameter is just at the bifurcation, I = I^*, and a non-hyperbolic limit cycle appears. This limit cycle is stable from the outside and unstable from the inside. In panel (c) the parameter I = 1.265 is greater than the bifurcation value and then two concentric limit cycles perturb from the non-hyperbolic one. The outer limit cycle is stable whereas the inner one is unstable. The limit cycles perturbing from the saddle-node limit cycle move away from each other as the parameter increases beyond the bifurcation value I^*.
In panel (d), for I = 1.3, the inner limit cycle becomes a two-zone limit cycle whereas the outer one becomes a four-zone limit cycle. This change of coordinates and time transforms system (11) into an equivalent system written in terms of a piecewise linear function f̃. Note that, restricted to the bands {x < −1}, {|x| ≤ 1} and {1 < x ≤ 1/(2δ)}, this system coincides with system (7). Moreover, as δ is fixed, the dynamics of the fourth zone {x ≥ 1/(2δ)} does not influence the analysis around the tangent periodic orbit that exists for I = 0 and t_c = βC. Therefore, the rest of the proof is a consequence of Theorem 2.1. The saddle-node of limit cycles exhibited by the McKean model can be used to explain the switching between resting and spiking activity by injecting an external impulse, see Figure 3. In fact, from the approximation of the bifurcation value I^*(t_c) obtained in Corollary 1, we propose a first proof of concept to address the tuning of the external impulse I in order to facilitate annihilation/regeneration in a voltage trace. The starting point is the assumption that the system is close to a saddle-node limit cycle. The proposed solution switches between the resting and the spiking activity by injecting an external impulse during a fixed time window, which depends on the parameters of the system. The algorithm is stated in Appendix B. In Figure 3 we illustrate the result of this algorithm: the oscillatory behavior is annihilated and then restarted just by injecting a single pulse. 3.2. The memristor oscillator. Memristors are two-terminal electronic passive devices where charge and electric flux are related through a nonlinear function, called the characteristic of the memristor. This device was first introduced by Chua [13]. This class of new-generation oscillators has the potential to model the behavior of synaptic connections in neurons. In [34], the authors propose a modification of the nonlinear flux-charge characteristic of the memristor oscillator appearing in [14]. The state equations of the mathematical model of the memristor oscillator are given by system (14), where the constants a, b_L, b_R, G, u and v depend on the components of the circuit. Under the hypothesis v(G − a) > 0, and by means of the application of the changes of variables stated in Propositions 1 and 2, system (14) can be transformed into the form (4). The next result is a direct consequence of Theorem 2.1. Corollary 2. Consider system (14) with fixed values of the parameters G, b_L, b_R and a. Under the corresponding sign conditions on these parameters, there exists a function u^*(v) such that system (14) has a three-zonal saddle-node limit cycle when u = u^*(v); when the corresponding sign expression is negative, the system exhibits two three-zonal limit cycles with opposite stability; when it is positive, the system has no limit cycles; and if u = u^*(v), the system has a unique three-zonal semi-stable limit cycle. Approximations for the function u^* and for the period of the saddle-node limit cycle are also provided. 4. Proof of main result. This section is devoted to the proof of Theorem 2.1. By perturbing the non-generic situation described in Proposition 3, in the first subsection we look for a non-hyperbolic periodic orbit Γ, which intersects the three regions L, C and R, coming from the most external orbit Γ_0 of the continuum. In the second subsection we prove that it corresponds to a saddle-node bifurcation. 4.1. Existence of non-hyperbolic periodic orbits. Let us begin by introducing some notation. For chosen parameters η = (a_C, t_L, m, t_R, d_L, d_R) and a point p ∈ R^2, we denote by ϕ(t; p, η) = (x(t; p, η), y(t; p, η)) the solution of system (7) with initial condition ϕ(0; p, η) = p.
The coordinates of ϕ(t; p, η) will be referred to as x_i(t; p, η) and y_i(t; p, η), with i ∈ {L, C, R}, depending on the region where the solution lies for that value of t. Consider a point p_0 = (−1, y_0). Assume that there exists a flight time τ_Cu > 0 such that x_C(τ_Cu; p_0, η) = 1 and x_C(s; p_0, η) ∈ (−1, 1) for all s ∈ (0, τ_Cu). In such a case, we can define the Poincaré half-map between the switching lines x = −1 and x = 1 at the point y_0 as P_Cu(y_0, η) = y_C(τ_Cu; p_0, η). Similarly, we can define the Poincaré half-map between the switching lines x = 1 and x = −1 at the point y_2 as P_Cd(y_2, η) = y_C(τ_Cd; p_2, η), where τ_Cd > 0 is the flight time and p_2 = (1, y_2). On the other hand, consider a point p_1 = (1, y_1). Assume that there exists a flight time τ_R > 0 such that x_R(τ_R; p_1, η) = 1 and x_R(s; p_1, η) > 1 for all s ∈ (0, τ_R). In such a case, we can define the Poincaré half-map between the switching line x = 1 and itself at the point y_1 as P_R(y_1, η) = y_R(τ_R; p_1, η). Similarly, we can define the Poincaré half-map between the switching line x = −1 and itself at the point y_3 as P_L(y_3, η) = y_L(τ_L; p_3, η), where τ_L > 0 is the flight time and p_3 = (−1, y_3). At this point, the Poincaré map for system (7) can be defined. A periodic orbit which visits the three regions must satisfy P(y_0, η) = y_0 (see Fig. 4) or, equivalently, the eight closing equations (16), the first two of which read x_C(τ_Cu; (−1, y_0), η) = 1 and y_C(τ_Cu; (−1, y_0), η) = y_1, with the remaining ones stated analogously for the right, downward central and left passages. Figure 4. Representation of a three-zonal periodic orbit of system (7). In addition, the constraints τ_L, τ_R, τ_Cu, τ_Cd > 0, together with x_C(s; (−1, y_0), η) ∈ (−1, 1) for all s ∈ (0, τ_Cu), x_R(s; (1, y_1), η) > 1 for all s ∈ (0, τ_R), and x_C(s; (1, y_2), η) ∈ (−1, 1) for all s ∈ (0, τ_Cd), must be satisfied. Equations (16) and inequalities (17) are the conditions for the existence of a periodic orbit living in the three regions. To take into account the non-hyperbolicity of the periodic orbit, we consider the derivative of the Poincaré map, which corresponds to the exponential of the integral of the divergence of the system along such a periodic orbit, see [11]. In the particular case of PWL systems, the integral of the divergence can be explicitly computed as the sum of the products of the traces and the flight times in each region of linearity, see [21]. Hence, for a fixed point y_0 of the Poincaré map P, the condition of non-hyperbolicity in our case reads as equation (18), namely that this exponential equals one. Now, consider fixed values for the parameters t_L, t_R, d_L and d_R. The existence of a non-hyperbolic limit cycle arising from the last periodic orbit of the linear center reduces to the existence of a set of parameters a_C, m ∈ R, τ_L, τ_R, τ_Cu, τ_Cd > 0 and real values y_0, y_1, y_2, y_3 satisfying equations (16) and (18) and inequalities (17). We begin by looking for a solution of the equations, and later we check that the solution satisfies the required inequalities. We would like to point out that we have used the symbolic manipulators Mathematica and Maxima in order to check the correctness of the expressions in (22), obtained first by hand. Remark 5. Note that we have chosen the parameter τ_R in order to apply the Implicit Function Theorem, but the parameter τ_L could be equally chosen to obtain a dual result.
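The non-hyperbolicity condition just described has a very simple computational counterpart: for a PWL system, the derivative of the Poincaré map along a periodic orbit is the exponential of the sum of trace times flight time over the zones, so non-hyperbolicity amounts to this weighted sum being zero. The small Python sketch below implements exactly that check; all numerical values are illustrative, and the exact normalization of equation (18) is assumed rather than quoted.

```python
import math

def poincare_map_derivative(traces, flight_times):
    """Derivative of the Poincaré map along a periodic orbit of a PWL system:
    the exponential of the integral of the divergence, i.e. exp(sum_i trace_i * tau_i)."""
    assert len(traces) == len(flight_times)
    return math.exp(sum(t * tau for t, tau in zip(traces, flight_times)))

def is_non_hyperbolic(traces, flight_times, tol=1e-9):
    """Non-hyperbolicity in the sense of (18): the derivative equals one, i.e.
    the trace-weighted flight times sum to zero (within a numerical tolerance)."""
    return abs(sum(t * tau for t, tau in zip(traces, flight_times))) < tol

# Illustrative values (not the paper's): traces t_L, t_C = -m, t_R and flight times
# tau_L, tau_Cu + tau_Cd, tau_R of a candidate three-zonal periodic orbit.
tL, m, tR = -0.4, 0.01, 0.3
tauL, tauC, tauR = 0.9, 3.1, 1.2
print(poincare_map_derivative([tL, -m, tR], [tauL, tauC, tauR]))
print(is_non_hyperbolic([tL, -m, tR], [tauL, tauC, tauR]))
```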
Once we have proven the existence of a solution of system (16) and we have found an approximation of the solution, given in (22), we proceed to prove in the following result that these solutions correspond to non-hyperbolic periodic orbits of system (7), checking that they satisfy inequalities (17). Proposition 5. If t_L · t_R < 0, the functions given in Proposition 4 correspond to a non-hyperbolic three-zonal periodic orbit of system (7). Proof. To ensure that the functions given in Proposition 4 correspond to a non-hyperbolic three-zonal periodic orbit, we need to check that the functions τ̄_L(τ_R), τ̄_Cu(τ_R) and τ̄_Cd(τ_R) satisfy τ̄_L(τ_R) > 0, τ̄_Cu(τ_R) > 0 and τ̄_Cd(τ_R) > 0 for τ_R > 0 sufficiently small, and that these functions, together with the rest of the functions, verify inequalities (17) for τ_R > 0 small. For this purpose, we will assume in what follows that τ_R is strictly positive and sufficiently small. A similar reasoning allows one to prove that the function x_R verifies x_R(s; (−1, ȳ_3), η̄) > 1 for s ∈ (0, τ_R), and the second inequality of (17) is true. 4.2. Correspondence to a saddle-node bifurcation. To finish the proof of Theorem 2.1, in this second subsection we are going to prove that the non-hyperbolic limit cycle whose existence has been proved in the previous subsection corresponds, indeed, to a saddle-node bifurcation when t_L + t_R ≠ 0. To obtain this, we will prove that the nondegeneracy conditions on the Poincaré map hold, that is, that the second derivative of the Poincaré map defined in (15) with respect to the initial condition and the derivative of the Poincaré map with respect to the parameter a_C are both different from zero, see [33,39]. The section is divided into two parts. In the first one, we compute the second derivative of the Poincaré map with respect to the initial condition and we see that this derivative is nonzero. In the second one, we compute the derivative of the Poincaré map with respect to the parameter a_C and we check that this derivative does not vanish. Computation of the second derivative of the Poincaré map with respect to the initial condition. Consider the non-hyperbolic periodic orbit ϕ(t; (−1, ȳ_0), η̄) given in Proposition 5. From the study in [11], it is easy to see that the second derivative of the Poincaré map with respect to y_0 at the non-hyperbolic fixed point ȳ_0 is given by expression (25), where the derivatives of the flight times with respect to y_0 have to be evaluated along the orbit. Denote by M_ij the component ij of a matrix M and by F_i^k the component i of the vector field defined by system (7) in the corresponding zone k ∈ {L, C, R}. From [11], the derivatives of the flight times with respect to the initial condition are given by (26) and (27) and, by the chain rule, dτ_L/dy_0 = (dτ_L/dy_3)(dy_3/dy_2)(dy_2/dy_1)(dy_1/dy_0). The substitution of expressions (26)-(27) in (25) allows us, after some straightforward but tedious computations, to ensure that the second derivative of the Poincaré map is different from zero if and only if a certain condition holds. Finally, the substitution of the variables τ_Cu, y_0, y_1, y_2, τ_Cd, y_3, τ_L, a_C, m, τ_R by the functions developed in expressions (22) provides condition (28). Thus, since t_L · t_R < 0 and t_L + t_R ≠ 0, the first term in (28) is different from zero. Therefore, the first nondegeneracy condition on the Poincaré map holds. Computation of the derivative of the Poincaré map with respect to the parameter a_C.
To compute the derivative of the Poincaré map with respect to the parameter a_C in the neighborhood of the non-hyperbolic periodic orbit γ_η̄, we use expression (29) from [39], in which w_0 is the orientation of the curve, the wedge product of x, y ∈ R^2 is defined as x ∧ y = x_1 y_2 − y_1 x_2, div(F) is the divergence of the vector field F defined by system (4), and F_{a_C} denotes the derivative with respect to the parameter a_C. Note that, although this identity is originally only valid for smooth systems, it can be generalized to continuous piecewise smooth systems with reasoning similar to that in [8]. In our case, w_0 = −1 and F(γ_η̄(0), η̄) = 0, so we only need to compute the factor Q(ȳ_0, η̄) of expression (29), given in (30). The integral along the periodic orbit is divided into four parts, depending on the zone of linearity in which the orbit is located. By using Lemma A.5 of the Appendix, these four addends correspond to: 1. the part of the orbit located between y_0 and y_1, in the central zone; 2. the part of the orbit located between y_1 and y_2, in the right zone; 3. the part of the orbit located between y_2 and y_3, in the central zone; 4. the part of the orbit located between y_3 and y_4, in the left zone. Using Lemma A.5, some direct but tedious computations allow us to compute these four addends S_1, S_2, S_3 and S_4, and then we obtain Q(ȳ_0, η̄) = S_1 + S_2 + S_3 + S_4. The substitution of the variables τ_Cu, y_0, y_1, y_2, τ_Cd, y_3, τ_L, a_C, m, τ_R by the functions developed in expressions (22) provides the resulting expression. Thus, if t_R ≠ 0, the first term in the development is different from zero and, therefore, the proof of the existence of the saddle-node bifurcation is concluded. Finally, we proceed to obtain relations (8)-(10). From the expression of m̄ in (22), we can obtain the expression of τ_R in terms of the parameter m by inverting the series. Moreover, taking into account that m · t_R(t_L^2 − t_R^2) > 0, we have τ_R > 0. Then, substituting this expression into the expression of ā_C in (22), we obtain the existence of the function a_C^*(m) and the approximation given in (8). The expression of the period T_Γ given in (9) has been obtained by summing the series of the first three expressions of (22) plus the expression of τ_R obtained. The approximation of the amplitude has been obtained from the first order of ȳ_0 − ȳ_2 in (22), by substituting the expression of τ_R in the resulting expression. 5. Conclusions and perspectives. In this article, we have proven the existence of a saddle-node bifurcation of limit cycles perturbing from a local center in planar continuous PWL systems with three zones. Power series of the bifurcation manifold and of the amplitude and the period of the saddle-node limit cycle are also provided. The obtained result is general in the sense that it characterizes the bifurcation in any continuous PWL system with three regions of linearity. In particular, this result follows up Theorems 7, 8, 9 and 10 in [34] in the following sense: in statement c) of these theorems, the authors prove the existence of two limit cycles in a certain region of the parameter space. The region is given in terms of the trace of the coefficient matrix in the central zone and of a value ε > 0. On the other hand, in Theorem 2.1 we obtain, for each m sufficiently small, a curve a_C = a_C^*(m) that provides a boundary for the existence of two limit cycles, as we can see, for instance, in Figure 1.
By using the injectivity of the function a_C^*, for each a_C small enough it is possible to find a value m = m^*(a_C) such that (m^*(a_C), a_C) belongs to the curve. The value |m^*(a_C)| corresponds to the ε of Theorems 7-10 (c) in [34], after the appropriate change of variables. This main result can be applied to a wide variety of models whose oscillatory behavior is well known and where these oscillations are shown to be born in a saddle-node bifurcation of limit cycles. In particular, we consider two different applications. The first one is a PWL version of the FitzHugh-Nagumo system, the McKean system. For this model, the bifurcation value is written in terms of the natural parameter of the system, the applied current. Note that there are no restrictions on the capacitance value C, so it can be as small as needed. Therefore, these results can be applied in the slow-fast regime. Nevertheless, the saddle-node limit cycle and the two limit cycles which collide in it are far from the canard regime. The analysis of fold limit cycles in the canard regime is beyond the objective of this work and is part of an ongoing project. The second application is an electronic circuit, namely the memristor oscillator [14]. We consider the version of the model studied in [34]. Moreover, the bifurcation value considered here is the boundary of the linearity zones of the flux-charge characteristic of the memristor. In both applications, we provide approximate expressions for the bifurcation curve and the period of the saddle-node limit cycle. In Appendix A, the proofs of the auxiliary lemmas reduce to solving variational problems; the function under consideration corresponds to the second component of the solution of the following variational problem with respect to the initial conditions, (31). Notice that we have already substituted m = 0 in the corresponding matrix of the central zone, A_C, see (5). The solution of (31) is obtained explicitly, and it suffices to substitute τ = π in the resulting expressions to conclude the proof of the first statement. The proof of the second statement is completely analogous. Consider the first equality in the third statement. The function ∂y_R/∂y_1(τ; (1, 0), η_0) corresponds to the second component of the solution of the corresponding variational problem with respect to the initial conditions. As we need to evaluate the solution at τ = 0, it is straightforward that ∂y_R/∂y_1(0; (1, 0), η_0) = 1. Analogously, considering the corresponding variational problem for the solution in the left zone, it follows that ∂y_L/∂y_3(0; (−1, 0), η_0) = 1. In the next lemma we compute the non-null components of the 7th column of the Jacobian matrix. The components in the 3rd and 7th rows can be obtained from Lemma A.1. It suffices to substitute τ = π in the previous expressions to conclude the proof of the first statement. The proof of the second statement is completely analogous. Finally, consider the third statement. As the functions ∂y_R/∂a_C(τ; (1, 0), η_0) and ∂y_L/∂a_C(τ; (−1, 0), η_0) are the second components of the solutions of the corresponding variational problems with respect to the parameter a_C, whose initial condition is always (0, 0), and we want to evaluate them at τ = 0, it is immediate that ∂y_R/∂a_C(0; (1, 0), η_0) = ∂y_L/∂a_C(0; (−1, 0), η_0) = 0. In the next lemma we compute the components of the 8th column of the Jacobian matrix. The components in the 3rd and 7th rows can be obtained from Lemma A.1. It suffices to substitute τ = π in the previous expressions to conclude the proof of the first statement. The proof of the second statement can be done analogously. The third statement is analogous to the third statement of Lemma A.3.
Finally, we present an auxiliary result that is used for the computation of the derivative of the Poincaré map with respect to the parameter a_C. Lemma A.5. Consider the autonomous linear system ẋ = Ax + b, where x ∈ R^n, A ∈ M_n, b ∈ R^n, n ≥ 1, and let λ ∈ R. If A + λI is regular, where I denotes the identity matrix of dimension n, then the integrals involved in (30) admit an explicit primitive. Appendix B. Fine-tuning algorithm. In the following steps, we provide an algorithm to fine-tune the external impulse I in the McKean model, in order to switch from an oscillatory behavior to a resting behavior (steps 1 to 7), and vice versa (steps 8 to 10).
1. Given an oscillatory voltage trace V(t), corresponding to a limit cycle of system (11) with known parameter values C, β, w_0, a, δ, t_c and t_r, and assuming that this limit cycle is close to a saddle-node limit cycle, compute the impulse I_1 = I^*(C, β, w_0, a, δ, t_c, t_r) from expression (12).
2. Compute the equilibrium point (V_1, w_1) of system (11) with I = I_1. Let λ = μ ± iη be the eigenvalues of the Jacobian matrix of the vector field at (V_1, w_1).
3. From the first equation of system (11), approximate the trace of w(t).
4. Look for values Ṽ = V(t̃) and w̃ = w(t̃) such that Ṽ = βw̃ − w_0 with w̃ < w_1.
6. Compute the impulse I_2 = w_2 − t_c V_2 − (δ − a/2)(t_c + 1) such that system (11), with I = I_2, has an equilibrium point at (V_2, w_2).
7. At time t̃, apply the impulse I = I_2 − I_1 during the time π/η, where η is the imaginary part of the eigenvalues given in step 2.
8. Compute the values (V_3, w_3).
9. Compute the impulse I_3 = w_3 − t_c V_3 − (δ − a/2)(t_c + 1) such that system (11), with I = I_3, has an equilibrium point at (V_3, w_3).
10. At any time after π/η, apply the impulse I = I_3 − I_1 during the time π/η, where η is the imaginary part of the eigenvalues given in step 2.
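The timing logic in steps 2, 7 and 10 is the easiest part of the algorithm to automate: the eigenvalues of the Jacobian at the equilibrium give η, and the corrective impulse is held for a window of length π/η. The sketch below computes this window under the same assumed McKean-type form used earlier (C·V̇ = f(V) − w + I, ẇ = V − βw + w_0), with the equilibrium assumed to lie in the zone of slope t_c; the parameter values are illustrative and the complete selection of (V_2, w_2) and (V_3, w_3) in steps 4-9 is not reproduced here.

```python
import numpy as np

# Step 2: eigenvalues of the Jacobian at the equilibrium of the assumed McKean-type
# system, with the equilibrium in the zone of slope t_c (so df/dV = t_c there).
C, beta, tc = 0.25, 0.5, 0.1
J = np.array([[tc / C, -1.0 / C],
              [1.0,    -beta   ]])
eigvals = np.linalg.eigvals(J)
eta = abs(eigvals[0].imag)            # imaginary part of the complex pair (assumed focus)

# Steps 7 and 10: the corrective impulse is applied during a window of length pi/eta.
pulse_length = np.pi / eta
print(f"eigenvalues: {eigvals}, pulse window: {pulse_length:.3f} time units")

def I_of_t(t, I1, I2, t_start, eta):
    """Drive used while switching: baseline I1 plus a rectangular pulse of amplitude
    I2 - I1 held during [t_start, t_start + pi/eta] (steps 7 and 10)."""
    return I2 if t_start <= t <= t_start + np.pi / eta else I1
```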
Harvest Stage Recognition and Potential Fruit Damage Indicator for Berries Based on Hidden Markov Models and the Viterbi Algorithm This article proposes a monitoring system that makes it possible to track transitions between the different stages of the berry harvesting process (berry picking, waiting for transport, transport and arrival at the packing site) solely using information from temperature and vibration sensors located in the basket. The monitoring system assumes a characterization of the process based on hidden Markov models and uses the Viterbi algorithm to perform inferences and estimate the most likely state trajectory. The obtained state trajectory estimate is then used to compute a potential damage indicator in real time. The proposed methodology does not require information about the weight of the basket to identify each of the different stages, which makes it effective and more efficient than other alternatives available in the industry. Introduction Chile is the main exporter of fresh fruit in the Southern Hemisphere (ODEPA), generating 59.3% of the total production [1]. Worldwide, Chile exports more than 75 different species to more than 100 countries around the world, being a leader in the export of table grapes, plums, apples, blueberries and peaches. In this regard, any improvement in the productive processes of fresh fruit harvesting for exportation has a significant impact on the national economy. Those improvements should help to efficiently manage the whole chain of the productive process: crop, harvest, packing and transport to the destination market. The fruit produced in Chile is mainly harvested manually (hand picking), but this process requires numerous personnel. While the personnel working in harvesting processes are continuously trained, the vast majority of these workers are employed solely during the productive season. The coordination of this activity requires personnel highly trained in the processes of manual harvesting, since the fruit can suffer damage, mainly mechanical. Indeed, authors such as Li et al. [2] state that fresh fruit is susceptible to mechanical damage during the whole process: during harvesting in the harvest stage, during the transfer to the packing site and its passage through it, and also during the transport that takes it to its final destination. Hence, a method is required to identify the harvest's time transitions without using the fruit's weight. The lengths of time elapsed in each one of the harvest phases are useful for avoiding exposure to high temperatures during prolonged waiting periods. Time transition identification is a challenging problem. However, as shown in Galeas [14], the harvest phases occur sequentially and the available data are gathered from low-cost instrumentation, so the problem is well suited to hidden Markov chain methods in the spirit of Rabiner [15]; for this particular problem, the Viterbi algorithm is one of the best-suited tools. In this regard, the objective of this article is to propose a novel monitoring system for berry harvesting processes that is solely based on the use of temperature and vibration sensors to perform inferences and estimate the most likely trajectory and switching times between the harvesting process stages. The obtained trajectory estimate will then be used to compute a potential damage indicator for the fruit in terms of both the registered temperature and the vibration energy.
Markov Chains The proposed harvesting stage detection algorithm is built on the assumption that this sequence of stages can be modeled as a hidden Markov model (HMM). Before going into the details that support this assumption, though, it is important to define the concept of a first-order Markov process. A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"), which basically states that one can make predictions for the future of the process based solely on its present state, i.e., conditional on the present state of the system, its future and past states are independent [15]. A first-order Markov chain is a particular case of a Markov process [15]. To define it properly, let us consider a system such that its condition at any time instant can be characterized by a finite set of states S_1, S_2, ..., S_N. At any time, this system can change its operational condition (i.e., the system makes a transition from one "state" to another), with transition probabilities that are conditional on the current state. We denote the state transition times as t = 1, 2, ..., and the state at any given time t as q_t [15]. The probabilistic model for system state transitions, in the specific case of a discrete first-order Markov chain, is completely described by the state transition matrix A = {a_ij}, with a_ij = P[q_t = S_j | q_{t-1} = S_i], and the initial state probability distribution Π = {π_i}, with π_i = P[q_1 = S_i]. This Markov process is called "observable", since the system output is a state that can be directly measured. The probability of a given sequence can be computed in this case using the following straightforward procedure: P(O = {S_{n_0}, S_{n_1}, ..., S_{n_l}} | Model) = P[S_{n_0}] · P[S_{n_1} | S_{n_0}] ··· P[S_{n_l} | S_{n_{l-1}}] = π_0(n_0) · a_{n_0 n_1} ··· a_{n_{l-1} n_l}. (2) Hidden Markov Models (HMMs) In many practical cases, the system state cannot be directly measured and must be estimated. These cases can be well characterized through the concept of hidden Markov models (HMMs). The adjective "hidden" refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a hidden Markov model even if these parameters are known exactly. Measurements are linked to the system states via a conditional probability density function. As a consequence, the resulting model has two sources of uncertainty that affect the inference problem: (i) hidden state dynamics and (ii) measurement noise [15]. A discrete HMM is characterized by the following parameters:
• N: the number of states. The set of possible states can be denoted by S = {S_1, ..., S_N}. The state of the system at time t is denoted q_t.
• M: the number of measurements associated with each state. Each measurement corresponds to a physical outcome from the system that can be acquired using the appropriate sensors.
• The transition probability distribution between system states, A = {a_ij}, where a_ij = P[q_{t+1} = S_j | q_t = S_i].
• The measurement probability distribution conditional on state j, B = {b_j(k)}, where b_j(k) = P[O_t = v_k | q_t = S_j] and v_k denotes the k-th measurement symbol.
• The initial probability distribution of system states, π = {π_i}, where π_i = P[q_1 = S_i].
Considering all of the above, for convenience the compact notation λ = (A, B, π) is typically used to denote the entire set of parameters that characterizes the HMM. A realization of an HMM is graphically depicted in Figure 1. It is important to note that part of the system dynamics is hidden to the observer ("hidden evolution model"). In addition, in an HMM there is an observational model, which is conditional on the state trajectory.
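As a quick illustration of the observable-chain probability (2) introduced above, the following Python sketch evaluates the probability of a given state sequence from a transition matrix and an initial distribution; the numerical values are illustrative only and are not estimated from the paper's data.

```python
import numpy as np

# Toy 3-state chain: transition matrix A = {a_ij} and initial distribution pi.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.7, 0.3],
              [0.1, 0.0, 0.9]])
pi = np.array([1.0, 0.0, 0.0])

def sequence_probability(states, A, pi):
    """Probability of an observed state sequence for an observable Markov chain,
    as in Eq. (2): pi(n0) * a_{n0 n1} * ... * a_{n(l-1) nl}."""
    p = pi[states[0]]
    for prev, nxt in zip(states[:-1], states[1:]):
        p *= A[prev, nxt]
    return p

print(sequence_probability([0, 0, 1, 2, 2], A, pi))   # e.g. S1 -> S1 -> S2 -> S3 -> S3
```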
The objective in an inference problem based on HMMs is to estimate the sequence of hidden states conditional on a set of system measurements O = {O_1, ..., O_T} [15]. Viterbi Algorithm The Viterbi algorithm (VA) [16][17][18] was proposed as a solution to the decoding of convolutional codes by Andrew J. Viterbi in 1967. This algorithm had a great impact on the fields of communications and signal processing, extending its influence to other domains such as the problem of state estimation in stochastic nonlinear systems. The Viterbi algorithm aims at finding the optimal estimate for a sequence of hidden states (called the Viterbi path) in an HMM, conditional on a set of system measurements. This task is achieved using a dynamic programming formulation, where the inference problem is divided into a series of small stages (indexed by the time associated with each observation). At each stage, the VA finds the optimal value for the state within the sequence, and it continues the analysis to the next stage in an inductive manner. Formally speaking, to find the optimal sequence of hidden states Q^* = {q^*_1 q^*_2 ··· q^*_T} in a realization of an HMM, conditional on a sequence of system measurements O = {O_1 O_2 ··· O_T}, the following variable is defined [15,16]: δ_t(i) = max over q_1, ..., q_{t-1} of P[q_1, ..., q_{t-1}, q_t = S_i, O_1, ..., O_t | λ], that is, δ_t(i) is the probability of the most likely path for the HMM at time t, considering the first t observations and the state S_i as terminal conditions. By induction, it is possible to write δ_{t+1}(j) = [max_i δ_t(i) a_ij] · b_j(O_{t+1}). As a result, the inference problem is solved by using the pseudo-code shown in Appendix A. The Blueberry Harvesting Process The experiment was carried out inside the Boldo S.A. orchard. This orchard has 50 hectares planted with blueberries and is located in Yungay, Chile, at coordinates lat: −37.1149584, long: −72.1973101. As shown in Figure 2, its packing site is located at the center of the garden, and there are roads that divide the plantation of blueberries into 3 sectors, each of which is divided into 7 sub-sectors for the irrigation process. Each sector has different varieties, including Duke, Rabiteye, Brightwell, Tifblue, O'neal and Brigitta. The process of picking fresh blueberries is done manually and begins by assigning a crew of collectors to each sector of the garden. Figure 2a shows a top view of the garden showing the orchard, the packing house and the storage centers (places provided with shade to temporarily store the boxes prior to delivery at the local packing site), while Figure 2b shows a ground view from storage center number 6. The collectors walk through the orchard, arranged in rows of approximately 120 meters in length, provided with a plastic box of 3.5 L hung from the neck by a harness. Sometimes the collector must walk to the other side of the set of rows (approximately 600 m) to start picking berries. The harvesting process has a duration of 20 to 50 min, depending on the experience of the harvester, the proximity of the rows to be collected, and how much fruit is on the bushes. Once the plastic box is filled, the collector returns to the first storage center. At the reception center, another worker increases the count of the number of boxes harvested by the collector and records the time when the box was received. Finally, the collector is provided with an empty box to restart the picking process. The boxes full of fruit are stored in this storage center awaiting a truck with a trailer to take them to the local packing site.
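The paper's own pseudo-code for the recursion is in its Appendix A; the sketch below is a generic, minimal Python implementation of the Viterbi recursion δ_t(j) defined earlier in this section, for a discrete HMM. The harvest application uses continuous sensor observations, so in practice the emission term b_j(O_t) would be replaced by the corresponding conditional density, and log-probabilities would be used to avoid numerical underflow; all numbers in the example are illustrative.

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden state path for a discrete HMM (Rabiner-style recursion):
    delta_t(j) = max_i [delta_{t-1}(i) * a_ij] * b_j(o_t), with back-pointers psi."""
    T, N = len(obs), A.shape[0]
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    # Backtracking of the optimal path q*_1, ..., q*_T.
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta[-1].max()

# Toy example: 2 hidden states, 3 discrete observation symbols (illustrative values).
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])
path, p = viterbi([0, 0, 1, 2, 2, 2], A, B, pi)
print(path, p)
```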
After arriving at the packing site, the net weight of the picked fruit is recorded using an electronic scale. Then, the filled boxes enter the packing site through a freezing tunnel to lower the temperature of the berries. Inside the packing site, the boxes are emptied onto a classification table and the boxes are recycled to begin a new harvest cycle. A Modular Distributed Monitoring System for the Harvesting Process: The "Smartbin" The proposed system was developed using a harvest basket of 3.5 L, which incorporated two components: a main device installed on one of its sides, which contains a SODAQ Autonomo board (SODAQ, Hilversum, The Netherlands), a real-time clock, temperature sensors and an inertial measurement unit (IMU) to measure the vibrations of the harvest basket and detect the shocks suffered by it; and a false base supported by a load cell, to measure at all times the weight carried by the basket. The SODAQ Autonomo uses an Atmel SAMD21J18 processor, a 32-bit core running at 48 MHz, with 256 KB of Flash memory and 32 KB of SRAM. In addition, it has a socket for a micro SD card, which allows internal storage of the data. A real-time clock (DS1307) providing the time and date was added to this device; this information is attached to all captured data. The IMU used is based on the MPU-9250 chip, with an accelerometer, a gyroscope and a 3-axis magnetometer. The unit also has two temperature sensors based on the digital device DS18B20 (Dallas Semiconductor, Dallas, TX, USA), the accuracy of which is 0.5 °C. These temperature sensors protrude like two tubes from the main device, glued to one of the internal walls of the harvest box to measure the temperature of the berries at two heights, 6 cm and 10 cm from the base of the box (see Figure 3). The load cell located in the false base of the box is connected to an HX711 analog/digital converter, which in turn is connected to the SODAQ Autonomo board. The system was provided with a Li-ion battery of 2300 mAh/3.7 V for its energy autonomy, which was estimated at 30 h of continuous operation. Figure 3a presents a photograph of the final prototype, while Figure 3b,c shows the front section and the lateral section of the prototype. The false base, or support tray for the weight sensor, leaves an unused space at the bottom of the basket because the tray must rest on the load cell to carry out the weighing procedure. The two temperature probes give information about the fruit's temperature inside the basket and close to the surface only when the IoT basket is full. The difference between the temperature measurements can be used to detect when the fruit level has reached the two positions where the temperature probes are installed. The main device works as a remote collection unit and as a data logger, transmitting wirelessly and storing all the data collected on a microSD card installed in the "SODAQ Autonomo" board, with two types of records: one written every 100 ms with the measurements of the IMU, date and time, and another written every 15 s with measurements of temperature, weight, battery voltage, date and time. These time measurements are taken to identify faults in the system and to correlate the tests with the events that occurred during the day. Data Acquisition Campaign Data from the experimental campaign were acquired using 5 "smartbins" in an experimental setup carried out during one day in the middle of the blueberry harvest season in the "El Boldo" orchard (see Figure 2).
Each one of the 5 "smartbins" was used in two consecutive harvest cycles during the day of the experiment. As a result, it was possible to record 10 complete harvesting cycles (each cycle finishes with the bin returned to the hands of the picker after being emptied). The structure of the acquired dataset can be summarized as follows:
• Temp_1: temperature measurement acquired every 15 s using a sensor located near the bottom of the bin.
• Temp_2: temperature measurement acquired every 15 s using a sensor located near one of the four external edges of the bin.
• Acc_x: acceleration measurement in the x-axis acquired ten times per second with an IMU located inside the bin.
• Acc_y: acceleration measurement in the y-axis acquired ten times per second with an IMU located inside the bin.
• Acc_z: acceleration measurement in the z-axis acquired ten times per second with an IMU located inside the bin.
• Weight: net weight of the "smartbin" acquired every 15 s with a sensor located at the bottom of the bin.
In terms of nomenclature, and for all practical purposes, each harvesting cycle was labelled using the format N_i c_j, where i = 1, 2, 3, 4, 5 refers to the i-th bin and j = 1, 2 indicates the number of the recorded cycle for that specific bin. Eight of these cycles (N_i c_j with i = 2, 3, 4, 5 and j = 1, 2) were used as training data, while two cycles were used for validation purposes (N_1 c_1 and N_1 c_2, both corresponding to the first bin). To avoid loss of information, the signals were processed using raw values. Proposed Methodology for Online Harvesting Stage Detection The proposed methodology uses the Viterbi algorithm to perform inferences on the datasets and estimate the most likely state trajectory in the harvesting process. Indeed, this case study allows the definition of a finite number of possible "states" (each one associated with one stage of the harvesting procedure), making it a perfect candidate for the implementation of inference schemes based on the assumption of HMMs. The set of observations O incorporates data from IMUs and temperature sensors. While the entire process has six "states" that can be identified (picking, waiting, transport (full bin), cooling, emptying and transport (empty bin)), only four of them are considered here, since only the first four states are critical in terms of quantifying the potential damage to the fruit during the harvesting procedure (the "emptying" state is fully automated, and afterwards the bin is empty). These states are:
(1) "Picking" (S1): the pickers, provided with a 3.5 L plastic box hung around the neck by a harness, cover the orchard, prepared in rows approximately 100 m long. Picking lasts 20 to 40 min per box, depending on the picker's experience and the volume of fruit on the shrubs. During this stage it is possible to measure high-energy vibration signals and high temperatures.
(2) "Wait" (S2): when the box is full, the picker goes to the storage center (shaded area), where he/she delivers the box for counting. The full boxes remain at the warehouse waiting for the tractor-trailer to take them to the local packing area.
(3) "Transport" (full bin) (S3): the tractor-trailer transports full boxes from the warehouse to the local packing area.
(4) "Cooling" (freezer tunnel) (S4): the fruit is admitted to packing via a conveyor table, where a cooling system lowers its temperature using a freezing tunnel.
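Before turning to the HMM training, a short loading sketch in Python/pandas shows how a dataset organized along these lines could be assembled and split into the training and validation cycles described above. The file names and column layout are hypothetical assumptions made only for illustration (the paper does not specify a file format).

```python
import pandas as pd

# Hypothetical file layout: one CSV per cycle, named after the N_i c_j convention,
# with the 15 s records merged so that every row carries all six measured fields.
COLUMNS = ["time", "Temp_1", "Temp_2", "Acc_x", "Acc_y", "Acc_z", "Weight"]

def load_cycle(bin_id: int, cycle_id: int) -> pd.DataFrame:
    df = pd.read_csv(f"N{bin_id}c{cycle_id}.csv", names=COLUMNS, header=0)
    df["cycle"] = f"N{bin_id}c{cycle_id}"
    return df

# Train/validation split used in the paper: bins 2-5 for training, bin 1 for validation.
train = pd.concat([load_cycle(i, j) for i in (2, 3, 4, 5) for j in (1, 2)],
                  ignore_index=True)
valid = pd.concat([load_cycle(1, j) for j in (1, 2)], ignore_index=True)
```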
Considering all of the above, an HMM is trained for this case study using the eight harvesting cycles N_i c_j, where i = 2, 3, 4, 5 and j = 1, 2. The ground truth for the transition times between states in the training (and also validation) data was defined by incorporating information acquired from the weight sensor located at the bottom of the "smartbin". Weight sensor measurements help to simplify the detection of state transitions because they determine the moment when the "picking" stage is over (bin weight measurements stabilize at a constant value, a condition that can be tested by a basic hypothesis testing procedure), as well as the exact moment when the bin is emptied. Conditional on the latter transition times, it is possible to discriminate the "cooling" stage just by detecting sudden drops in temperature measurements, while the "wait" and "transport" stages can be identified since they differ significantly in terms of the associated energy in the IMU signal. The challenge behind the proposed method for state transition detection is to avoid the use of weight measurements altogether (except, as in this case study, for purposes of determining ground truth transition times in training data), since it would be preferable and significantly cheaper to eliminate the weight sensor from the original design of the "smartbin". For this purpose, an HMM is conceived to describe the transitions between the stages of the harvesting process, where the observation space is solely determined by the following sensor information: (1) Inertial measurement unit (IMU): Data acquired by the IMU. A simple pre-processing algorithm complements this information with the average of the total energy in the vibration signal, computed every 15 s over the time window containing the last 15 s of measurements. (2) Temperature measurements: Besides the information provided by sensors Temp_1 and Temp_2, a simple pre-processing algorithm measures the difference in readings between the two temperature sensors. Considering all of the above, and following the maximum likelihood estimation procedure explained in [15] to determine the coefficients of the state transition matrices of an HMM, the harvesting process can be characterized by a transition matrix A and an initial distribution π, where A is obtained by computing the expected residence time in each state in the training dataset [15], and π is known since the HMM is always initialized in state S1 ("picking"). The characterization of the entire process using an HMM allows the Viterbi algorithm to be used for state transition time detection. Proposed Methodology for the Fruit Damage Indicator A natural byproduct of the implementation of the Viterbi algorithm for estimating the most likely state path is that it is also possible to detect start and end times for each of the different stages of the berry harvesting process. These start and end times become critical information for characterizing the potential damage accumulated during the "picking", "waiting" and "transport" stages, since during that lapse the fruit in the bin is exposed to a higher level of vibrations and elevated temperatures. As established by [3,6,19-21], long exposures to high temperatures and high quantities of dissipated energy contribute to early damage to the fruit. Inspired by this fact, this research effort proposes the following damage indicator to assess the potential damage incurred by the fruit during the harvesting process.
In this indicator, IMU_Energy denotes the energy of the vibration signal recorded by the sensors in the bin over a 15 s sliding window, and T_S4 corresponds to the moment at which the Viterbi algorithm detects the transition from state S3 to S4, measured in seconds. The temporal reference t = 0 is synchronized with the start of the "picking" stage. The proposed indicator of potential fruit damage is robust against disturbances in the estimates of the transition times, since for all practical purposes it depends only on T_S4. Indeed, T_S4 determines the start of the "cooling" stage, at which time simultaneous (and sudden) drops in the readings of sensors Temp_1 and Temp_2 are expected, while the energy in the vibration signal should be small compared to the "picking" and "transport" stages. This evidence anticipates that errors in the estimate of T_S4 should be negligible in comparison to the total time allotted for the harvesting cycle, and therefore the value of the proposed damage indicator, which depends on the overall accumulation of stress on the fruit, should not change significantly. Obtained Results in the Experimental Campaign Table 1 and Figures 4-13 show the results obtained when applying the proposed scheme for harvest stage recognition and potential fruit damage assessment to actual field data from an experimental campaign. Each figure consists of three graphs that help to understand the manner in which the proposed algorithm interprets the acquired data. The first graph shows the performance exhibited by the Viterbi algorithm in the detection of transitions between each of the first four stages of the harvesting process: "picking", "wait", "transport" and "cooling". The second graph shows the energy of the IMU signal (averaged over a 15 s sliding window), and the third graph in each figure shows the temperature registered by the second temperature sensor inside the bin. Figures 4-13 are sorted from the cycle representing the greatest potential fruit damage to the most innocuous one. Given the structure of the proposed damage indicator, both the time of exposure of the fruit to ambient temperature (principally in states S1-S3) and the cumulative energy of the vibration signals (principally in state S1) have a critical influence on the assessment of potential damage. Figure 4 illustrates the case where the potential fruit damage is the greatest. One of the reasons for this is the fact that in this cycle the fruit was exposed to relatively high ambient temperature for a lengthy lapse of time. Moreover, both during the "picking" and "transport" stages, the energy of the IMU accelerometer signal is significant, indicating that the fruit in the bin could have been shaken excessively. It is important to note that the Viterbi algorithm in this case fails to detect the transition between states S1 and S2 (overall detection efficacy in this dataset is 89.918%). While this issue affects the traceability of the bin in the system, it does not have an impact on the assessment of potential fruit damage, since the transition to S4 (the "cooling" stage) is perfectly detected. Figures 5 and 6 illustrate cases where the potential fruit damage is significantly high. While the concepts explained for the previous case also apply here, it is important to note that the energy associated with the vibration signal is lower than in the case of Figure 4.
Moreover, the performance of the Viterbi algorithm is high (overall detection efficacy in these datasets is 99.396%), exhibiting only a negligible delay in the detection of the transition between S1 and S2 in dataset N_1 c_1, which was used for purposes of validating the proposed approach. While the temperature associated with the data shown in Figures 7-9 is higher than in their predecessors, the lapse of time during which the fruit was exposed to ambient temperature is considerably smaller. In both cases, there is a small delay in the estimate of the parameter T_S4, but the performance of the Viterbi algorithm is still above 98.95%. Validation dataset N_1 c_2 (Figure 10) is the one where the Viterbi algorithm exhibits the lowest performance (overall detection efficacy of 84.667%). Nevertheless, even in this case, the error associated with the estimate of the parameter T_S4 is 90 s, which represents 2% of a dataset that records 4485 s of operation. Last but not least, Figures 11-13 exhibit analogous performances in terms of the accuracy of the Viterbi algorithm. Interestingly, in terms of potential fruit damage, the most innocuous dataset corresponds to one where the ambient temperature was low and where the harvesting cycle lasted less than 4425 s. Conclusions This article proposes a monitoring system for berry harvesting based solely on the use of temperature and vibration sensors. The monitoring system assumes a characterization of the process in terms of a hidden Markov model and uses the Viterbi algorithm to perform inferences and estimate the most likely state trajectory. The obtained state trajectory estimate is then used to compute a potential damage indicator for the fruit in terms of both the registered temperature and the vibration energy. The overall average detection efficacy for the validation datasets was 91.937%, while the errors in the estimates of the moment at which the bin reaches the cooling stage were no larger than 2%, a fact that validates the proposed damage indicator as a robust feature for characterizing the potential degradation in fruit quality when used in conjunction with the Viterbi algorithm for estimating the value of T_S4. More importantly, the proposed procedure proves to be equivalent, in terms of the effectiveness of the characterization of the stages of the harvesting process, to other alternatives found in the literature, but significantly more efficient, since it does not require information about the weight of the bin in which the fruit is collected to identify the different stages of the harvesting process and to determine indicators that could help to assess whether the harvesting process is being performed normally. The Viterbi algorithm may seem a complex solution for this problem, but it is inexpensive to include these procedures in the software running on the microprocessor of the "smartbins", avoiding the need to measure weight and consequently dispensing with the strain gauges and the mechanical parts needed to support them. The fact that it is possible to dispense with weight sensors in the design of "smartbins", replacing them with more advanced signal processing tools, has a significant economic impact on the penetration of these monitoring devices in the agricultural market as a sound solution for some of the problems that the industry has faced over the years.
The information provided by these "smartbins" can support decisions of economic significance for the producers, such as infrastructure investment, the location of the storage centers, schedules for transportation between the storage centers and the packing house, and the number and training of the personnel working as fruit pickers.
Passenger and cargo throughput forecast of China's three major airports: China is a large civil aviation country. With the continuous development of China's economy, the demand for air cargo and passenger transportation is constantly rising in many cities. In this paper, the grey-forecast model GM(1,1) is adopted to forecast the passenger and cargo throughput of Beijing Capital (PEK), Shanghai Pudong (PVG) and Guangzhou Baiyun (CAN) airports over the next few years. Based on the predicted data, we analyze the future traffic development trend and identify the advantages and disadvantages of the three airports. Finally, we put forward corresponding suggestions for the future development of the three airports in order to promote their sustainable development. Introduction Since China put forward the strategy of "being a powerful country in civil aviation", it has been planning the stride from a large aviation industry to a powerful one. Furthermore, airport service level is an important indicator of civil aviation power. With the increase of air transport demand in China, many large airports have experienced high-load operation, which is not conducive to the sustainable development of airports. Therefore, airport companies need to predict air transport demand to avoid this situation. The purpose of prediction is to reveal the development law of things, better grasp their future dynamics, and provide the necessary information for decision-making. Passenger and cargo throughput forecasting is the basis of airport development decisions and airport construction, and it is the foundation for determining the short-term construction planning scale and the long-term reservation control of the airport [1]. Tailin Chen [2] applied grey theory to the prediction of airport aviation business volume. He pointed out that this method has an advanced theory, reliable prediction results and certain application value. Xiaoping Lin [3] established a prediction model for the cargo throughput of Chengdu Shuangliu Airport using grey theory. He compared the actual data with the prediction results and proved that the grey model is feasible and accurate in predicting the cargo throughput of Shuangliu Airport. Yubao Chen [4] took the Capital airport as an example and used a combined forecasting method to predict passenger throughput. Xinwo Yang [5] used a combined forecasting method to forecast the passenger throughput and cargo throughput in the Pearl River Delta. Yuanchang Deng [6] predicted the passenger transport demand of Guangzhou Baiyun International Airport. Zhanwei Wang [8] predicted the future passenger and cargo throughput of the top 10 hub airports in Asia and made a comparative analysis among them. Yunfang Wang [9] used the grey-forecast model to predict the cargo throughput of Beijing Capital Airport. From the above references, it can be seen that many researchers have predicted airport passenger and cargo throughput, which proves that the grey-forecast model has a good prediction effect. Building on these references, this paper forecasts the passenger and cargo throughput of three large airports in China through the grey-forecast model and compares and analyzes the development trends of the different airports. Data selection The data are selected according to the 2019 China airport throughput statistics released on the official website of the Civil Aviation Administration of China.
In 2019, China's passenger throughput exceeded 1.3 billion, and each of the top ten airports handled more than 40 million passengers. Among them, Beijing Capital Airport (PEK) ranked first, while Shanghai Pudong (PVG) and Guangzhou Baiyun (CAN) ranked second and third, respectively [10]. COVID-19 had a great impact on the flow of people and the transportation of goods in 2020; airport data for 2020 are therefore meaningful only when compared with other airports in the same year, not with historical data. Accordingly, this paper takes PEK, PVG and CAN, which ranked in the top 3 in China's airport capacity in 2019, as the sample airports. From the annual reports of these three airports, the passenger and cargo throughput of PEK, PVG and CAN from 2012 to 2019 were selected to predict the changes in passenger and cargo throughput over the next 10 years through the grey-forecast model. Introduction of the grey prediction model The grey prediction model theory was put forward by Professor Julong Deng [11] of Huazhong University of Science and Technology. Many prediction methods (such as linear regression) need a great deal of information, while the grey-forecast model does not need a large number of samples; it establishes a grey differential prediction model from a small amount of incomplete information. It can give a fuzzy long-term description of the development law of things and is widely used in the field of traffic demand prediction. The modeling process of GM(1,1) is as follows. First, we weaken the volatility and randomness of the original data. Record the original data as $x^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))$. Using the Accumulating Generation Operation (AGO), the sequence $x^{(1)} = (x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n))$ is produced, where $x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i)$, $k = 1, 2, \ldots, n$. From the generated sequence, the mean sequence is $z^{(1)}(k) = 0.5\,x^{(1)}(k) + 0.5\,x^{(1)}(k-1)$, $k = 2, 3, \ldots, n$. Then, according to GM(1,1), the grey differential equation is $x^{(0)}(k) + a\,z^{(1)}(k) = u \quad (1)$, where $a$ and $u$ are the parameters to be estimated and $\hat{\beta} = (a, u)^{T}$ is the parameter vector to be estimated. Using the least squares method, $\hat{\beta} = (B^{T}B)^{-1}B^{T}Y$, where $B = \begin{pmatrix} -z^{(1)}(2) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{pmatrix}$ and $Y = (x^{(0)}(2), x^{(0)}(3), \ldots, x^{(0)}(n))^{T}$. The whitening equation of the grey differential equation (1) is $\frac{dx^{(1)}}{dt} + a\,x^{(1)} = u \quad (2)$. The solution of Eq. (2) leads to the formula for the accumulated predicted value, $\hat{x}^{(1)}(k+1) = \left(x^{(0)}(1) - \frac{u}{a}\right)e^{-ak} + \frac{u}{a} \quad (3)$, and the predicted values of the original series are recovered using the Inverse Accumulating Generation Operation (IAGO), $\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k) \quad (4)$. Data Calculation By feeding the data of PEK, PVG and CAN into the forecasting model, the predicted values can be obtained. Tables 1 and 2 show the data of CAN calculated by the model. Data Test Residual error detection and stage-ratio deviation inspection are then carried out on the prediction results. The inspection method is as follows: (1) Residual error detection. Absolute error: $\varepsilon(k) = x^{(0)}(k) - \hat{x}^{(0)}(k)$, $k = 2, 3, \ldots, n$. Relative error: $\Delta(k) = |\varepsilon(k)| / x^{(0)}(k)$, $k = 2, 3, \ldots, n$. Average relative residual: $\bar{\Delta} = \frac{1}{n-1}\sum_{k=2}^{n}\Delta(k)$. When the average relative residual is less than 0.2, the fit between the model and the original data is considered acceptable; when it is less than 0.1, the fit is considered high. (2) Stage-ratio deviation inspection. First, the stage ratio of the original data is calculated: $\lambda(k) = x^{(0)}(k-1)/x^{(0)}(k)$, $k = 2, 3, \ldots, n$. According to the estimated development coefficient $(-a)$, the corresponding stage-ratio deviation and average stage-ratio deviation are calculated: $\rho(k) = \left|1 - \frac{1 - 0.5a}{1 + 0.5a}\lambda(k)\right|$, $\bar{\rho} = \frac{1}{n-1}\sum_{k=2}^{n}\rho(k)$. When the average stage-ratio deviation is less than 0.2, the fit between the model and the original data is considered acceptable; when it is less than 0.1, the fit is considered high.
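As a concrete illustration of Eqs. (1)-(4) and of the two accuracy tests above, the following is a minimal Python sketch of the GM(1,1) procedure. The demonstration series is made up for the example and is not the throughput data of PEK, PVG or CAN.

```python
import numpy as np

def gm11_fit(x0):
    """Estimate the GM(1,1) parameters (a, u) of Eq. (1) by least squares."""
    x1 = np.cumsum(x0)                               # AGO
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # mean sequence z1(k)
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    sol, *_ = np.linalg.lstsq(B, Y, rcond=None)
    a, u = sol
    return a, u

def gm11_predict(x0, a, u, horizon):
    """Fitted values plus out-of-sample forecasts via Eqs. (3)-(4)."""
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - u / a) * np.exp(-a * k) + u / a   # accumulated prediction
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # IAGO
    x0_hat[0] = x0[0]
    return x0_hat

def gm11_tests(x0, x0_hat, a):
    """Average relative residual and average stage-ratio deviation."""
    rel = np.abs(x0[1:] - x0_hat[1:len(x0)]) / x0[1:]
    lam = x0[:-1] / x0[1:]                              # stage ratio lambda(k)
    rho = np.abs(1.0 - (1 - 0.5 * a) / (1 + 0.5 * a) * lam)
    return rel.mean(), rho.mean()

# Illustrative usage with made-up annual throughput figures (millions):
x0 = np.array([81.9, 83.7, 86.1, 89.4, 94.3, 95.8, 100.9, 101.0])
a, u = gm11_fit(x0)
forecast = gm11_predict(x0, a, u, horizon=10)
avg_rel, avg_rho = gm11_tests(x0, forecast, a)
print(f"a = {a:.4f}, u = {u:.2f}")
print(f"avg relative residual = {avg_rel:.2%}, avg stage-ratio deviation = {avg_rho:.3f}")
print("10-year forecast:", np.round(forecast[len(x0):], 1))
```

Both acceptance checks described above can then be applied directly to the returned averages.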
The test results calculated in Matlab are as shown in Table 3. As can be seen from the table, the residual error detection and stage-ratio deviation inspection of the prediction models of the three airports are all qualified: the average relative residuals are all less than 10%, and the average stage-ratio deviations are all less than 0.1. Therefore, the model fits the original data well and the prediction accuracy is satisfactory, so the grey-forecast model can be expected to give satisfactory forecasts of the passenger and cargo throughput of PEK, PVG and CAN over the next ten years. Prediction results and analysis The forecast results for the passenger throughput and cargo throughput of PEK, PVG and CAN over the next 10 years are shown in Table 4 and Table 5. As can be seen from the tables, the passenger and cargo throughput of the three airports will show a steady growth trend in the next decade. The growth rate of PVG is the most outstanding of the three: its passenger throughput is expected to reach 126.1418 million in 2025, exceeding that of PEK, with CAN following behind. Although the passenger throughput of PEK is high and on the rise, it still grows slowly compared with the other two airports. The future trend of cargo throughput is also highest for PVG: in 2025, its cargo throughput will surpass 4,863,500 tons, followed by CAN, which will reach 2,885,023 tons, while the cargo throughput of PEK remains almost a flat horizontal line. Development proposal Based on the above model analysis and research, we have a comprehensive understanding of the future development trends of the three major airports, and, according to the data analysis, we put forward the following suggestions. For PEK, one of the reasons responsible for the slow growth of its passenger and cargo throughput is that the city's financial industry, information transmission, computer services and software industries account for a relatively high proportion of its economy, so the demand for cargo throughput is not high compared with the other two airports. Therefore, PEK can be organically connected with the city's logistics industry to stimulate the demand for air cargo; at the same time, the city's well-developed emerging technologies, such as big data, artificial intelligence and blockchain, can be used to improve the airport's air capacity. For PVG and CAN, the ever-increasing passenger and cargo throughput brings increasing pressure on the two airports. They should further strengthen the construction of airport infrastructure, such as cargo stands and freight stations, and expand their fleets. In the process of continuous business development, they should formulate corresponding air transportation capacity improvement plans, set up research centers, and use cutting-edge technologies to better meet people's travel needs and the demand for cargo transportation. Conclusion From the prediction results of the three airports, their future development prospects are bright. On the whole, the transportation demand of these three airports will rise steadily over the next 10 years. The passenger transport demand of PVG and CAN will grow rapidly in the next 10 years.
These two airports should improve their passenger service capacity to meet future passenger traffic demand. As for PEK, the demand for cargo throughput is growing slowly; therefore, stimulating the demand for cargo transport is the main task for its future development.
Solution of Fuzzy Volterra Integral Equations in a Bernstein Polynomial Basis In this paper, we use the parametric form of a fuzzy number to convert a fuzzy Volterra integral equation into a system of integral equations in the crisp case. We present a numerical method for solving fuzzy Volterra integral equations of the second kind. The proposed method is based on approximating the unknown function with Bernstein's approximation. The method uses simple computations and gives quite acceptable approximate solutions; however, accuracy and efficiency depend on the size of the set of Bernstein polynomials. Furthermore, we obtain an error bound estimate for this method. I. INTRODUCTION The solutions of integral equations play a major role in the fields of science and engineering. A physical event can be modelled by a differential equation or an integral equation. Since few of these equations can be solved explicitly, it is often necessary to resort to numerical techniques, which are appropriate combinations of numerical integration and interpolation [1,2]. There are several numerical methods for solving linear Volterra integral equations [3]. Kauthen [4] used a collocation method to solve the Volterra-Fredholm integral equation numerically. Maleknejad et al. [5] obtained a numerical solution of Volterra integral equations using Bernstein polynomials. The concepts of fuzzy numbers and fuzzy arithmetic operations were first introduced by Zadeh [6] and by Dubois and Prade [7]. We refer the reader to [8] for more information on fuzzy numbers and fuzzy arithmetic. The topic of fuzzy integral equations (FIE), which has attracted growing interest for some time, in particular in relation to fuzzy control, has developed rapidly in recent years. The fuzzy mapping function was introduced by Chang and Zadeh [9]. Later, Dubois and Prade [10] presented an elementary fuzzy calculus based on the extension principle; the concept of integration of fuzzy functions was also first introduced by Dubois and Prade [10]. Babolian et al. and Abbasbandy et al. [11,12] obtained numerical solutions of linear Fredholm fuzzy integral equations of the second kind. Fuzzy integral equations have also been studied by several other authors [13,14,15]. In this paper, we present a novel and very simple numerical method based upon Bernstein's approximation for solving fuzzy Volterra integral equations. II. PRELIMINARIES In this section the basic notations used in fuzzy calculus and for Bernstein polynomials are introduced. We start by defining a fuzzy number. Definition 1. [16] A fuzzy number is a fuzzy set $u : \mathbb{R} \to [0,1]$ which, in its parametric form, is written as an ordered pair $(\underline{u}(r), \overline{u}(r))$, $0 \le r \le 1$, such that: (i) $\underline{u}(r)$ is a bounded, monotonically increasing, left-continuous function on $(0,1]$ and right-continuous at 0; (ii) $\overline{u}(r)$ is a bounded, monotonically decreasing, left-continuous function on $(0,1]$ and right-continuous at 0; (iii) $\underline{u}(r) \le \overline{u}(r)$, $0 \le r \le 1$. The set of all fuzzy numbers is denoted by $E^{1}$. For $u, v \in E^{1}$ and $k \ge 0$ we define addition and multiplication by $k$ levelwise, $(u+v)(r) = (\underline{u}(r) + \underline{v}(r),\ \overline{u}(r) + \overline{v}(r))$ and $(ku)(r) = (k\underline{u}(r),\ k\overline{u}(r))$, and it is shown that $(E^{1}, D)$ is a complete metric space [20]. For Bernstein's approximation, suppose $\|\cdot\|$ is the max norm on $[0,1]$ and let $B_{n}(f; x) = \sum_{i=0}^{n} f(i/n)\binom{n}{i}x^{i}(1-x)^{n-i}$ denote the Bernstein approximation of a function $f$. The result given in [22] shows that $\|B_{n}(f) - f\| \to 0$, and the theorem due to Voronovskaya [23] shows that for $x \in (0,1)$ the rate of convergence is precisely $\frac{1}{n}$. III. FUZZY VOLTERRA INTEGRAL EQUATION The fuzzy Volterra integral equation of the second kind (FVIE-2) is [24] $F(t) = f(t) + \lambda \int_{t_0}^{t} K(t,s)\,F(s)\,ds, \quad \lambda > 0, \quad (4)$ where $K(t,s)$ is the kernel and $f(t)$ is a fuzzy function, so these equations may only possess fuzzy solutions. Sufficient conditions for the existence of a unique solution to the fuzzy Volterra integral equation are given in [24]. Now, we introduce the parametric form of a FVIE-2. Writing $F(t) = (\underline{F}(t,r), \overline{F}(t,r)) \ (5)$ and $f(t) = (\underline{f}(t,r), \overline{f}(t,r)) \ (6)$ and assuming a nonnegative kernel, we obtain the pair of crisp equations $\underline{F}(t,r) = \underline{f}(t,r) + \lambda \int_{t_0}^{t} K(t,s)\,\underline{F}(s,r)\,ds \quad (7)$ and $\overline{F}(t,r) = \overline{f}(t,r) + \lambda \int_{t_0}^{t} K(t,s)\,\overline{F}(s,r)\,ds \quad (8)$. It is clear that we must solve two crisp Volterra integral equations of the second kind, provided that each of Eqs.
(7) and (8) has a solution. We consider the crisp Volterra integral equation of the second kind given by Eq. (7); the treatment of Eq. (8) is identical. To determine an approximation of the unknown function of Eq. (4), we approximate it with Bernstein's approximation, replacing $\underline{F}(s,r)$ by $\sum_{i=0}^{n} c_{i}(r)\binom{n}{i}s^{i}(1-s)^{n-i}$. By referring to Remark 2, we obtain a linear system (9) for the coefficients by collocating at points $x_{s}$ chosen as suitable distinct points in $(0,1]$, with $x_{0}$ taken near 0. In general we are not able to carry out the involved integrations analytically, so we compute the integrals that arise in Eqs. (7) and (8) numerically (a sketch of this procedure is given at the end of this section). We give an error bound for this solution in the following theorem. Theorem 2. Consider the crisp Volterra integral equations of the second kind (7) and (8); if the coefficient matrix A of the resulting linear system is nonnegative (in particular, if A is a generalized permutation matrix), then Eq. (9) has a fuzzy Bernstein approximation. A. Comparison with Other Methods In this subsection, the shortcomings of the existing methods [5,12,26] for solving fuzzy integral equations are pointed out. Abbasbandy et al. [26] used the homotopy analysis method (HAM) to obtain the solution of a fuzzy integro-differential equation; in this paper, instead, we use Bernstein polynomials to obtain the solution of Eq. (4). IV. NUMERICAL EXAMPLES To illustrate the technique proposed in this paper, consider the following examples. Example 4.1. We consider a fuzzy Volterra integral equation of the second kind whose exact solution is known. According to Eqs. (7) and (8) we have two crisp Volterra integral equations, and we approximate the unknown functions with Bernstein's approximation; Table 1 shows the error between the exact solution and the obtained solution and the convergence behavior. Example 4.2. We consider a second fuzzy Volterra integral equation of the second kind; again, according to Eqs. (7) and (8), we obtain two crisp Volterra integral equations and calculate the error between the exact solution and the solution obtained with the Bernstein approximation (Table 1). Here, a very simple and direct method, based on the approximation of the fuzzy unknown function of a fuzzy Volterra integral equation in the Bernstein polynomial basis, has been used. Our results show that Bernstein's approximation method for solving fuzzy Volterra integral equations of the second kind is very effective; the answers are trustworthy, their accuracy is high, and the method can easily be executed on a computer. Figure 1: Comparison of the exact solution and the obtained solutions. Figure 2: Comparison of the exact solution and the obtained solutions.
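To show how the Bernstein basis is used in practice, the following is a minimal Python sketch that solves one crisp Volterra equation of the second kind of the form of Eqs. (7)-(8) by expanding the unknown function in the Bernstein basis and collocating at distinct points in (0, 1], with the first point taken near 0 as described above; the integrals are evaluated with Gauss-Legendre quadrature. Applying the same routine to the lower and upper equations yields the parametric fuzzy solution. The test kernel and the quadrature choices are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from math import comb

def bernstein(n, i, x):
    """Bernstein basis polynomial B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i)."""
    return comb(n, i) * x**i * (1 - x)**(n - i)

def solve_volterra_bernstein(f, kernel, lam=1.0, n=8, quad=64):
    """Approximate u(x) = f(x) + lam * int_0^x kernel(x,t) u(t) dt on [0,1]
    by u(x) ~ sum_i c_i B_{i,n}(x), collocating at n+1 points in (0,1]
    with the first point taken near 0."""
    xs = np.linspace(1e-6, 1.0, n + 1)
    M = np.zeros((n + 1, n + 1))
    for r, x in enumerate(xs):
        t, w = np.polynomial.legendre.leggauss(quad)   # nodes/weights on [-1,1]
        t, w = 0.5 * x * (t + 1), 0.5 * x * w          # mapped to [0, x]
        for i in range(n + 1):
            M[r, i] = bernstein(n, i, x) - lam * np.sum(w * kernel(x, t) * bernstein(n, i, t))
    c = np.linalg.solve(M, f(xs))
    return lambda x: sum(ci * bernstein(n, i, np.asarray(x)) for i, ci in enumerate(c))

# Illustrative check (not one of the paper's examples): u(x) = exp(x) solves
# u(x) = 1 + int_0^x u(t) dt; the same routine is applied to the lower and
# upper crisp equations (7) and (8) to build the fuzzy solution.
u = solve_volterra_bernstein(lambda x: np.ones_like(x), lambda x, t: np.ones_like(t))
grid = np.linspace(0.0, 1.0, 5)
print(np.max(np.abs(u(grid) - np.exp(grid))))   # small approximation error
```

Increasing n enlarges the Bernstein basis and, in line with the convergence discussion in the Preliminaries, reduces the approximation error at the cost of a larger linear system.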
The Discontinuation of Antibiotics in Patients with Chronic Obstructive Pulmonary Disease Exacerbation and a Positive Respiratory Viral Assay: A Single-center Retrospective Analysis Background The use of antibiotics in chronic obstructive pulmonary disease (COPD) exacerbations attributed to viral infections is observed in this study. The aim of this analysis is to describe the rate of discontinuation of antibiotics in patients who have an acute exacerbation of COPD (AECOPD) caused by viral infections, in turn encouraging the use of the respiratory viral panel in an effort to improve antibiotic stewardship at our facility. Methods A retrospective chart review was performed. A total of 92 patients admitted for COPD exacerbations with a positive respiratory viral polymerase chain reaction panel (RVP) were analyzed, of whom 20 patients had a bacterial co-infection on sputum analysis. Patients with a positive infiltrate on chest X-ray (CXR) were excluded. The rate of discontinuation of antibiotics, excluding azithromycin and doxycycline, was analyzed in patients with a positive RVP with and without a bacterial co-infection. Results Of these 92 patients, a bacterial co-infection was detected by sputum culture in 20 patients. The average number of days until discontinuation was 1.67 days for patients with no bacterial co-infection and 3.20 days for those with a bacterial co-infection. The difference in the number of days was statistically significant (p<0.001). Conclusion In conclusion, patients with an identified viral etiology of COPD exacerbation had antibiotics discontinued significantly sooner than patients with bacterial co-infections. Introduction Viral infections cause about 60% of COPD exacerbations, while bacterial infections account for about 40% [1]. The viral causes of COPD exacerbations seldom require antibiotics, but specific viral etiologies, such as influenza, require oseltamivir. Patients admitted for COPD exacerbations are often treated with antibiotics for presumed pneumonia or possibly for their anti-inflammatory effects. These events contribute to increased health care costs and the progressive deterioration of a patient's health status [1]. This study is aimed at observing the utilization of the RVP and the subsequent discontinuation of antibiotics in patients who have COPD exacerbations caused by viral infections. Patients presenting with an AECOPD who had a positive RVP, indicating a viral cause of their exacerbation, were analyzed in this study to assess the use of antibiotics in our institution. The identification of a viral pathogen for a patient's COPD exacerbation could lead to a reduction in cost and length of stay, limit adverse medication effects, and mitigate other workups, along with the discontinuation of antibiotics and prompt supportive care. The reduction in antibiotic use and the appropriate supportive care for viral COPD exacerbations should be an ongoing effort supported by this type of analysis. With the use of the RVP, we can identify viral pathogens, which can promote antibiotic stewardship and appropriate isolation precautions. The aim of this study was to observe the use of antibiotics and the rate of discontinuation of antibiotics in patients with a viral cause of their COPD exacerbation.
Materials And Methods In this observational study, we conducted a retrospective chart review of patients with an acute exacerbation of COPD and antibiotic use. We included patients admitted to St. Mary's Hospital from July 1, 2017, to April 20, 2018. Inclusion criteria were adult male and female patients aged 18 to 85 years with a diagnosis of COPD exacerbation and a positive RVP. The exclusion criterion was an infiltrate seen on chest radiograph. A total of 92 patients were included in this analysis, of whom 20 had a bacterial co-infection based on sputum culture collected on the day of admission. Sputum Gram stains were available within one day of admission, and cultures were available within two days of admission. Azithromycin and doxycycline, typically given as adjunctive treatment of COPD exacerbations, were not included in the antibiotic discontinuation analysis. In our study, 59/92 (64%) patients received either azithromycin or doxycycline for their anti-inflammatory effects. This study was approved by the Trinity Health of New England Institutional Review Board. Data collection The following parameters were collected on chart review: the patient's age, positive history of COPD, use of azithromycin or doxycycline in the acute setting, use of other antibiotics, number of days until the discontinuation of antibiotics, a positive RVP result, date of the RVP, sputum culture, and absence of a chest X-ray infiltrate. Statistical analysis Descriptive statistics included frequencies with percentages for qualitative variables and means with standard deviations for quantitative variables. To compare groups on the number of days until discontinuation, a Poisson regression model for count data was used (an illustrative sketch of this model is given after Figure 1 below). Analyses were conducted in SPSS v25 (IBM Corp., Armonk, NY), and the level for statistical significance was set at 0.05. Results A total of 92 patient charts were included in the study, and 20 patients had a bacterial co-infection. In 28 out of 72 patients (38.9%), antibiotics were discontinued on the same day. Within the first two days, 47 out of 72 patients (65.3%) had antibiotics discontinued. By day eight, 100% of the patients were off antibiotics (Table 1). TABLE 1: The frequencies of the number of days until the discontinuation of antibiotics in patients with a positive respiratory viral panel without a bacterial co-infection The distribution of the number of days until discontinuation by bacterial co-infection status is shown in Figure 1. The mean number of days to the discontinuation of antibiotics in patients with a viral infection causing a COPD exacerbation was 1.67 days (SD = 2.13), while for those with a bacterial co-infection it was 3.20 days (SD = 2.71). The difference in the number of days was statistically significant (p<0.001). There were more patients without a co-infection, and those with a co-infection were less likely to have antibiotics discontinued within 0 or 1 day. FIGURE 1: Distribution of the number of days to the discontinuation of antibiotics by whether patients had a bacterial co-infection. 1A represents the days to the discontinuation of antibiotics in patients without a bacterial co-infection. 1B represents the days to the discontinuation of antibiotics with a bacterial co-infection.
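For readers who want to reproduce this type of comparison, the sketch below fits a Poisson regression of the count outcome (days until discontinuation) on co-infection status. The study's analysis was run in SPSS v25; this Python/statsmodels version is only an illustrative equivalent, and the simulated records are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated records: days until antibiotic discontinuation and whether a
# bacterial co-infection was found on sputum culture (illustrative only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "days": np.concatenate([rng.poisson(1.67, 72), rng.poisson(3.20, 20)]),
    "coinfection": [0] * 72 + [1] * 20,
})

# Poisson regression of the count outcome on co-infection status.
model = smf.glm("days ~ coinfection", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())

# exp(coef) is the rate ratio: the multiplicative increase in expected days
# to discontinuation for patients with a bacterial co-infection.
print(np.exp(model.params["coinfection"]))
```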
We further stratified infections by month and found that in the months of September, October, and November the average number of days to antibiotic discontinuation was the highest (Table 2: mean number of days to discontinuation by month in patients with a positive respiratory viral panel without a bacterial co-infection). The most commonly encountered viruses were the influenza subtypes and, incidentally, these were associated with the fewest days on antibiotics (Table 3). Nine patients (45%) had antibiotics discontinued on day zero, which is the highest percentage. There was a significant difference in the number of days until discontinuation by virus type (p=0.003). Discussion AECOPD is a common cause of hospital admissions and inflicts a substantial burden of morbidity, mortality, and health care costs [2]. The prevalence of respiratory viruses detected in patients with COPD varies. Hurst et al. report that respiratory viruses account for around 30% of exacerbations, whereas Clark et al. report that respiratory viruses are detected in 22%-44% of patients with COPD exacerbations [3]. In another study, by Dimopoulos et al., viral infections were associated with up to 60% of COPD exacerbations [1]. Antibiotic resistance is a worldwide public health issue that requires antibiotic stewardship, international attention, and ongoing efforts to mitigate the effects of emerging resistance [4]. Rhode et al. report that bacterial pathogens are absent in about 50% of COPD exacerbations, highlighting the importance of recognizing viral and other non-infectious etiologies of COPD exacerbations [5]. The recognition of clinical characteristics and the use of the RVP to provide important clinical data in patients with AECOPD is an important issue highlighted in this study. Antibiotics have been considered part of the treatment for patients with an acute exacerbation of severe COPD, which includes increased sputum purulence and worsening shortness of breath; however, this population has not been studied when a positive RVP is noted. The RVP in our institution detects the following viruses: human metapneumovirus, respiratory syncytial virus (RSV) subtypes, influenza subtypes, rhinovirus, parainfluenza subtypes, and adenovirus. We observed patients with viral and bacterial co-infections as a cause of AECOPD and the use of antibiotic therapy from July 1, 2017, to April 20, 2018. We decided to exclude the use of azithromycin or doxycycline given the ongoing studies of these antibiotics and their role in managing severe COPD exacerbations in the acute or chronic setting through their anti-inflammatory effects [4]. In patients with AECOPD, a positive RVP, and a bacterial co-infection, the mean number of days to the discontinuation of antibiotics was 3.20 days. In patients without a bacterial co-infection, the mean was 1.67 days. The duration of antibiotics was significantly shorter in the viral-only group, with a statistically significant difference in the number of days to discontinuation (p<0.001). Possible reasons why antibiotics were discontinued on days zero to two in patients noted to have a bacterial co-infection may have been the patient's clinical improvement or a suspicion of colonizing bacteria based on the clinical history. In the months of September through November, patients with a viral infection alone were treated with antibiotics for a longer duration compared to the other months.
We noticed that patients with influenza had the highest percentage of antibiotics discontinued on the first day of hospitalization, but some remained on antibiotics for up to a week. It is known that viral infections are more prevalent in the winter months and are associated with longer periods of recovery, as cold weather can cause a reduction in lung function, which may increase vulnerability to pollutants and viruses and further increase the risk of an AECOPD [1][2]. Patients with parainfluenza had the highest average, with a mean of 3.25 days before antibiotic discontinuation, followed by rhinovirus with a mean of 1.95 days. Rhinovirus is one of the more commonly identified viruses in AECOPD and a known etiology of the common cold [2,6]. This was consistent with our study, where rhinovirus was noted to be the most prevalent virus in our patient population throughout the year, with a noticeable increase in the winter months. A total of 29 (31.5%) cases of rhinovirus were detected in our patient population (Figure 2). FIGURE 2: Distribution of viruses by month Early testing for viral infections is optimal for ensuring appropriate management [7]. In our study, the RVP was obtained within 24 hours of admission. The limitation of the RVP at our institution is that it takes two to three hours to get a result; therefore, it is not a preferred test in the emergency department. In addition, the RVP is not processed during the night shift hours of 7 pm to 7 am. These limitations of the RVP, as well as the availability of sputum culture results, influence the decision of how quickly to discontinue antibiotics. The challenge of co-infections with bacteria is whether patients with COPD are colonized with bacteria in their respiratory tract or have a true bacterial co-infection. Our study is not without limitations, as this was a retrospective single-center study. Exacerbations of COPD were not stratified by severity. Some patients who were continued on antibiotics may have had other indicators suggestive of a bacterial pulmonary infection, such as an elevated white blood cell count or fever, which prompted the continuation of antibiotic therapy [7]. We did not account for the severity of infection in these patients regarding the need for mechanical ventilation or a critical level of care. Our study population was limited to one center with a small population, which limits generalizability and applicability to large populations. Our study does not address the sensitivity or specificity of the RVP, nor does it address clinical outcomes when antibiotics were discontinued sooner with the use of the RVP. These observed limitations offer potential avenues for further study. Conclusions In conclusion, the time to the discontinuation of antibiotic therapy in patients with AECOPD with an isolated viral etiology was 1.67 days, while for those with a bacterial co-infection it was 3.20 days. The difference in the number of days was statistically significant (p<0.001). Though 20 (21.7%) patients had bacterial co-infections, the majority of patients did not require antibiotics because of the detection of a single viral etiology. Therefore, the identification of a viral etiology can avoid unnecessary antibiotic usage, thereby minimizing antibiotic resistance. The use of the respiratory viral panel may encourage antibiotic stewardship in this patient population. However, future studies would be needed to ascertain data regarding antibiotic stewardship.
Given the findings of this study, we encourage stopping antibiotics sooner if the RVP is positive, sputum cultures are negative, and the chest X-ray is normal. Further studies are needed to observe the rate of discontinuation of antibiotics after the identification of a viral etiology and its impact on clinical outcomes, including the length of hospital stay as well as the risk of readmission. In addition, future studies could analyze these two subsets of patients after the withdrawal of antibiotics and the impact of clinical deterioration requiring a higher level of care, the need for intubation or noninvasive positive-pressure ventilation (NIPPV), adverse drug effects, death, and re-initiation of antibiotics in multiple centers to improve the applicability of antibiotic stewardship in this subset of patients.
COVID-19 Vaccination in People Living with HIV (PLWH) in China: A Cross-Sectional Study of Vaccine Hesitancy, Safety, and Immunogenicity The administration of COVID-19 vaccines is the primary strategy used to prevent further infections by COVID-19, especially in people living with HIV (PLWH), who are at increased risk for severe symptoms and mortality. However, the vaccine hesitancy, safety, and immunogenicity of COVID-19 vaccines among PLWH have not been fully characterized. We estimated vaccine hesitancy and the status of COVID-19 vaccination in Chinese PLWH, explored the safety and impact on antiretroviral therapy (ART) efficacy, and compared the immunogenicity of an inactivated vaccine between PLWH and healthy controls (HC). In total, 27.5% (104/378) of PLWH hesitated to take the vaccine. The barriers included concerns about safety and efficacy, and physician counselling might help patients overcome this vaccine hesitancy. COVID-19 vaccination did not cause severe side effects and had no negative impact on CD4+ T cell counts or HIV RNA viral load. Comparable spike receptor binding domain IgG titers were elicited in PLWH and HC after a second dose of the CoronaVac vaccine, but antibody responses were lower in poor immunological responders (CD4+ T cell counts < 350 cells/µL) compared with immunological responders (CD4+ T cell counts ≥ 350 cells/µL). These data show that PLWH have comparable safety and immune responses following inactivated COVID-19 vaccination compared with HC, but that a poor immunological response in PLWH is associated with an impaired humoral response. Introduction The rapid spread of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to significant morbidity and mortality as well as substantial psychological and economic costs worldwide [1]. The COVID-19 pandemic has led to decreased access to HIV-prevention services, HIV testing, HIV treatment, and viral suppression, which could lead to less control over the HIV epidemic [2]. People living with HIV (PLWH) have been disproportionately affected by COVID-19 and are at increased risk for severe clinical symptoms and mortality due to SARS-CoV-2 infection, especially those with lower CD4+ T cell counts or unsuppressed HIV viral replication [3][4][5][6]. The administration of a COVID-19 vaccine is considered the most effective and economical way to prevent infection by COVID-19 and to control its spread. Central to achieving the high levels of vaccination coverage needed to effectively control the spread of COVID-19 is overcoming vaccine hesitancy [7]. However, attitudes toward COVID-19 vaccines and potential risk factors for vaccine hesitancy have not yet been well characterized. Several studies have explored the reasons for COVID-19 vaccine hesitancy in the general population, with vaccine-specific concerns (side effects and efficacy) being the most commonly cited [8][9][10][11]. A French study showed that emphasizing the collective benefits of herd immunity and reassuring PLWH of the safety of the proposed COVID-19 vaccine is important to minimize vaccine hesitancy [12]. To date, two inactivated vaccines are widely used in China (the CoronaVac vaccine and the BBIBP-CorV vaccine), with satisfactory safety and immunogenicity among the general population in clinical trials [13,14].
Based on the low theoretical risk and the high potential benefit of vaccination, a panel convened by the Chinese Association of Infectious Diseases recommended that PLWH with a suppressed viral load be immunized with a COVID-19 vaccine as soon as possible [15]. However, with limited information on vaccine safety and limited efficacy data available, but noting their increased risk, PLWH may have conflicted attitudes toward COVID-19 vaccines. To address this lacuna, we initiated a questionnaire-based survey to explore issues surrounding COVID-19 vaccine hesitancy in this vulnerable population. Additionally, we sought to explore safety experiences, including the impact on the efficacy of ART, among those who had already been vaccinated with the first dose, and to learn about the immunogenicity of the CoronaVac vaccine in PLWH and healthy controls (HC), as this might provide information useful for combating hesitancy. Materials and Methods This was a cross-sectional, observational study. The survey was conducted in an out-patient clinic of Beijing Ditan Hospital, Capital Medical University, a large hospital designated for treating the COVID-19 pandemic and HIV infections, to investigate vaccination status, willingness to be vaccinated, and adverse reactions towards COVID-19 vaccines. Most patients who visited the out-patient clinic were followed up every 6 months to perform CD4+ T cell counts and HIV RNA viral load (VL) testing and were prescribed ART. PLWH were eligible if they met the following inclusion criteria: (1) 18-60 years old; (2) had been receiving a stable ART regimen for at least 1 year with a VL ≤ 50 copies/mL; (3) had no COVID-19 infection history and no contact history, including close or indirect contact with a person with a confirmed COVID-19 infection; (4) completed the questionnaire; and (5) signed written informed consent. We used two methods to recruit participants. We approached patients in the out-patient clinic in person and invited them to participate. If they agreed and were eligible, we provided a private room in which the participants and the research assistant could interact, and participants then completed a paper-based questionnaire. In addition, we recruited age- and sex-matched HC who had been vaccinated with two doses (0.5 mL/dose) of CoronaVac (Sinovac Life Sciences, Beijing, China) at least 2 weeks earlier through advertisements on the Internet. Plasma samples from the PLWH and HC were collected to measure the humoral response to the SARS-CoV-2 anti-spike receptor binding domain protein (S-RBD). The participants were recruited from 20 July to 4 August 2021, and the data were collected from 20 July to 20 August 2021. The study flow diagram is shown in Figure 1. The study was approved by the Human Science Ethical Committee of Beijing Ditan Hospital, Capital Medical University (No. 2021-021-02). Participation was voluntary, and completion of the questionnaire implied consent for study participation. All information gathered was anonymized and kept confidential. The questionnaire was completed by PLWH with assistance from the researcher. It involved three items: (1) demographics, HIV characteristics, and health status; (2) perception of COVID-19 vaccination; and (3) vaccination status and safety of the COVID-19 vaccine. The demographics included gender, age, marital status, educational background, and occupation. The HIV characteristics included duration of ART treatment, mode of HIV transmission, CD4+ T cell counts, and VL prior to ART initiation and 6 months ago.
Health status was measured using the 12-item short form health survey (SF-12), a questionnaire whose answers allow for the calculation of Physical Component Summary (PCS) and Mental Component Summary (MCS) scores [16]. We assessed intent to be vaccinated for SARS-CoV-2 using the question, "Have you been vaccinated against COVID-19?", followed by the response options "Yes" and "No". Participants who responded "No" were asked the following multiple-choice question: "What is preventing you from becoming vaccinated?". The response options were "Afraid of the side effects and/or poor efficacy", "Contraindications for the vaccine", "No perceived need for vaccination", "Waiting to be scheduled", and "Scheduling conflicts". Participants who responded "Yes" were asked the following open-ended question: "Do you have any concerns after vaccination? If so, please specify." To explore the role of physicians in encouraging vaccine acceptance, all participants were asked whether they had discussed the COVID-19 vaccine with their physicians. PLWH who were vaccinated with at least one dose were asked to provide the details of vaccination, including the manufacturer of their COVID-19 vaccine, the date of each dose, and any adverse reactions that occurred within 28 days after each dose. The safety of the COVID-19 vaccine was assessed by local (pain, swelling, redness, and itching) and systemic (fever, fatigue, diarrhea, muscle pain, nausea, headache, vomiting, cough, joint pain, and hypersensitivity) adverse events.
The adverse events reported were graded according to the China National Medical Products Administration guidelines [17]. The causal association between adverse events and vaccination was determined by the investigators. The survey items are shown in Table S1. We used ELISA kits (Wantai BioPharm, Beijing, China) to evaluate the spike receptor binding domain-protein specific IgG (S-RBD-IgG) antibody titers according to the manufacturer's protocol. Briefly, the plasma was inactivated at 56 °C for 30 min for safety considerations, diluted 11-fold, and then applied to 96-well plates coated with purified SARS-CoV-2 RBD protein for 30 min at 37 °C. After washing five times, antibody binding was revealed using an HRP-labeled anti-human IgG. Subsequently, the substrate solution and stop solution were added sequentially, and the plate absorbance was read at 450 nm and 630 nm after the reaction stopped. The optical density (OD) values were then converted into equivalent enzyme units (U/mL) using a standard curve derived from known concentrations of a SARS-CoV-2 IgG antibody standard. The maximum percentage of missing values did not exceed 5% (3.2%, n = 12) in the present study, and the missing values were excluded from the analysis [18]. The characteristics of the survey respondents were summarized using frequencies (percentages) or medians (interquartile ranges, IQR). We used cross-tabulations and chi-square tests to estimate the unadjusted associations between participant characteristics and the intent to become vaccinated. The Mann-Whitney U test was performed to compare continuous variables, and the chi-square test was chosen to test associations between vaccine willingness and categorical predictor variables. Paired continuous variables were compared using the Wilcoxon signed rank test. A p-value of 0.05 or lower was considered statistically significant. Statistical analysis was conducted using SPSS, version 26.0 (IBM Corp, Armonk, NY, USA), GraphPad Prism, version 8.0.1 (GraphPad Software, La Jolla, CA, USA) and R studio, version 4.0.3 (R Core Team, Vienna, Austria, 2020). Demographics, HIV Characteristics, and Health Status of PLWH A total of 383 PLWH were recruited, and 378 questionnaires were available (response rate of 98.7%). The participants consisted of 374 males (98.9%) and 4 females (1.1%), with a median age of 34 years (IQR 30-39, Table 1). The majority of participants (51.3%) were 31-40 years old. They had varied levels of educational attainment, with more than two thirds (67.2%) having a college or undergraduate diploma. Most participants were unmarried (70.0%). Business/service staff (35.1%) accounted for the largest occupational group, followed by professional and technical personnel (20.7%), public officials (9.8%), and farmers/workers (5.7%). All PLWH had an undetectable plasma VL (<50 copies/mL) for at least 6 months and had received efavirenz (EFV), tenofovir disoproxil fumarate (TDF), and lamivudine (3TC) for at least 1 year without interruption.
The median CD4+ T cell count and VL prior to ART initiation were 305 cells/µL (IQR 203-433) and 4.67 log10 copies/mL (IQR 4.16-5.01), respectively. The median duration of ART treatment was 4.3 years (IQR 2.8-6.0), and the median CD4+ T cell count 6 months earlier was 578 cells/µL (IQR 428-725). Men who have sex with men (MSM, 72.7%) were the major HIV transmission risk group in this cohort, followed by those with other or unknown transmission routes (16.1%). The median PCS and MCS scores in PLWH were 53 (IQR, 47-55) and 53 (IQR, 46-56), respectively. Detailed results are presented in Table 1. The vaccination status of the participants is summarized in Table 2. Next, we explored the reasons for not becoming vaccinated among the 157 unvaccinated patients (Table 3). Concern about the side effects and/or poor efficacy of the vaccine was the most common reason (56.0%), followed by waiting to be scheduled (19.5%), having contraindications for the vaccine (13.8%), no perceived need for vaccination (9.4%), and scheduling conflicts (1.9%). Patients who hesitated to become vaccinated are defined as those worried about safety and/or efficacy and those who described vaccination as unnecessary. Overall, 27.5% (104/378) of PLWH hesitated to receive a COVID-19 vaccine. The univariate analysis shows that age, marital status, educational background, occupation, duration of ART treatment, CD4+ T cell counts 6 months earlier, CD4+ T cell counts and VL prior to ART initiation, and SF-12 scores had no impact on vaccination willingness (all p > 0.05, Table 4), while the number of PLWH who had consulted their physicians about the COVID-19 vaccine was significantly lower among those with vaccine hesitancy (36.5% vs. 54.0%, p = 0.002). These results demonstrate that concerns about the safety and efficacy of the vaccine are major obstacles to COVID-19 vaccination, and that physicians play an important role in encouraging vaccine acceptance among PLWH. Safety and Impact of the COVID-19 Vaccine on ART Efficacy in PLWH In total, 215 PLWH had received at least one dose of an inactivated vaccine. Approximately one third (33.4%) reported at least one adverse reaction within 28 days after a dose. For the CoronaVac vaccine and the BBIBP-CorV vaccine, the incidence of adverse reactions was 35.1% and 31.2%, respectively. The most common local adverse reaction was injection site pain (25.1%, Figure 2A). The most common systemic adverse reaction was fatigue (13.5%), followed by fever (4.7%) and headache (3.3%). All adverse reactions were mild (grade 1 or grade 2) and self-limited. These results suggest that the COVID-19 vaccine had a good safety profile. In order to explore the impact of the COVID-19 vaccine on ART efficacy in PLWH, we compared the CD4+ T cell counts and VL from 6 months earlier with the results obtained during this visit. The median CD4+ T cell count of vaccinated patients was 580 (447-723) cells/µL before vaccination and significantly increased to 604 (452-752) cells/µL after vaccination (p = 0.035, Figure 2B), while the CD4+ T cell counts of unvaccinated people did not change markedly (578 (420-758) cells/µL vs. 562 (420-734) cells/µL, p = 0.752). No event of viral rebound (>50 copies/mL) was reported. Since residual viremia below 50 copies/mL has been associated with a higher risk of virologic failure in previous studies [19], we further confirmed that no significant difference was found in the proportion of VL remaining "target not detected" (TND) between the vaccinated and unvaccinated groups (91.9% vs. 94.3%, p = 0.412, Figure 2C), suggesting no negative impact of the COVID-19 vaccine on ART efficacy. The CoronaVac Vaccine Elicited Comparable Antibody Responses in PLWH Compared with HC To investigate the humoral responses to COVID-19 vaccines, we recruited 55 PLWH and 21 age- and sex-matched HC who had completed vaccination with two doses of the CoronaVac vaccine at least 2 weeks earlier (ranging from 2 to 18 weeks) and measured their plasma S-RBD-IgG antibody titers. As expected, PLWH had lower CD4+ T cell counts compared with HC (572 ± 203 cells/µL vs. 769 ± 262 cells/µL, p = 0.001, Table 5). All PLWH had a CD4+ T cell count above 200 cells/µL before vaccination. The median time interval between administration of the second dose and blood collection and the vaccination interval
94.3%, p = 0.412, Figure 2C), suggesting no negative impact of the COVID-19 vaccine on ART efficacy. The CoronaVac Vaccine Elicited Comparable Antibody Responses in PLWH Compared with HC To investigate the humoral responses to COVID-19 vaccines, we recruited 55 PLWH and 21 age- and sex-matched HC who had completed vaccination with two doses of the CoronaVac vaccine at least 2 weeks earlier (range, 2 to 18 weeks) and measured their plasma S-RBD-IgG antibody titers. As expected, PLWH had lower CD4+ T cell counts compared with HC (572 ± 203 cells/µL vs. 769 ± 262 cells/µL, p = 0.001, Table 5). All PLWH had a CD4+ T cell count above 200 cells/µL before vaccination. The median time interval between administration of the second dose and blood collection and the vaccination interval between the two doses were comparable between groups (p = 0.921 and p = 0.969, respectively). After the whole-course vaccination, the S-RBD-IgG titers were similar in PLWH and HC (15.8 U/mL (IQR,3) vs. 16 U/mL (IQR, 11.3-23.2), p = 0.839, Figure 2D), and the two groups had a similar dynamic curve for S-RBD-IgG titers (Figure 2E). Therefore, a similar immunogenicity of CoronaVac was noted in PLWH compared with HC. Poor Immunological Response Was Associated with Impaired Antibody Responses to CoronaVac in PLWH We further evaluated the immunogenicity of CoronaVac for different immune statuses. The definition of an immunological responder has been a confounding matter, in that different criteria are used by different researchers. In this study, PLWH with CD4+ T cell counts ≥ 350 cells/µL were defined as immunological responders. The results showed that the S-RBD-IgG titers of immunological responders (CD4+ T cell counts ≥ 350 cells/µL) were significantly higher than those of poor immunological responders (CD4+ T cell counts < 350 cells/µL) (22.4 U/mL (IQR, 17-24.4) vs. 
11.2 U/mL (IQR, 4.6-21.2), p = 0.023, Figure 2F,G), and two groups were well matched in age and time since whole-course vaccination (p = 0.346 and p = 0.235, respectively). Thus, the CoronaVac vaccine was more likely to elicit lower humoral immune responses in poor immunological responders. Discussion This study was conducted when the coronavirus outbreak in China was largely under control and the free vaccination policy was implemented. The results indicate that PLWH have more vaccine hesitancy. COVID-19 vaccine hesitancy was driven primarily by safety and efficacy concerns. The results of adverse effects revealed that COVID-19 vaccines led to a tolerable safety profile in PLWH. Our data also showed that PLWH have a comparable immune response following CoronaVac vaccinations compared with HC, but poor immunological response might be associated with impaired humoral response in PLWH. In the general Chinese population, the hesitancy rate of COVID-19 vaccination was 17.75% under the free vaccination policy [10], which is lower compared with the vaccine hesitancy rate of PLWH in the present study (27.5%). The vaccination rate with a first dose in our study was 57.9%, which is significantly lower than that of adult residents reported by Beijing Daily in the same period (94.5%) [20]. This situation is also observed in other vaccine inoculations, such as vaccines for influenza, human papillomavirus, and hepatitis B virus [21][22][23]. In the previous studies, the rates of vaccine hesitancy among PLWH towards the COVID-19 vaccine ranged from 28.7% to 54% [12,[24][25][26]. Individually, vaccine hesitancy rates in PLWH were highest in black Americans (54%) [25] and were lowest in the French PLWH (28.7%) [12]. Nevertheless, we should be cautious when comparing vaccine hesitancy rates across regions because the influence of the vaccine type available in a study setting and different definition of vaccine hesitancy should not be overlooked. We conducted univariate analyses for factors associated with vaccine hesitancy in PLWH. The demographic characteristics, HIV characteristics, and self-rated health status were not significantly associated with vaccine hesitancy. Of importance, PLWH with vaccine hesitancy were less likely to consult physicians than those without vaccine hesitancy. Evidence suggests that patients whose physicians recommend a vaccine are more likely to become vaccinated than patients who do not [27]. Most patients actively seek information about the vaccine and value their physician's opinion in this area. This finding has been also confirmed by our study, which underlines the role of physicians in encouraging vaccine acceptance among patients. Next, the perceived barriers against COVID-19 vaccination found in this study, namely concerns about safety and efficacy, have likewise been reported in other studies related to the introduction of a COVID-19 vaccine [12,28]. Feng et al. evaluated the safety of BBIBP-CorV inactivated vaccine in Chinese PLWH who are stable on ART with CD4 + T cell counts >200 cells/µL and their results were satisfactory [29]. The ChAdOx1 nCoV-19 (AZD1222) vaccine (an adenovirus-vectored vaccine) and the BNT162b2 mRNA vaccine also showed favorable safety among PLWH in South Africa and America, respectively [30,31]. To provide more evidence on the safety of inactivated COVID-19 vaccines, we evaluated the adverse reaction rates of two inactivated COVID-19 vaccines in PLWH that had favorable safety profiles in the general population [13,14]. 
Similarly, our data suggested that the adverse reactions were mild and self-limiting. No unexpected safety issues were found, and the adverse reaction profile observed was consistent with that previously reported for inactivated vaccines and other kinds of COVID-19 vaccines, such as the BNT162b2 mRNA COVID-19 vaccine [32]. All of the above results suggest that the safety of these two kinds of inactivated COVID-19 vaccines in PLWH is tolerable. Moreover, as specific indexes for evaluating the effect of ART, the impact of a COVID-19 vaccine on CD4+ T cell counts and VL is not conclusive. We further measured the changes in CD4+ T cell counts and VL before and after vaccination, and the results demonstrated that a COVID-19 vaccine had no negative impact on either CD4+ T cell counts or VL during the study period. Furthermore, the CD4+ T cell counts of vaccinated PLWH were significantly increased. Based on previous studies on other vaccines, we speculate that the proliferation of CD4+ T cells may be relevant to the generation of virus-specific neutralizing antibodies [33]. However, the exact underlying mechanism needs to be further investigated. We next examined whether the CoronaVac vaccine can elicit a similar humoral response in PLWH compared with HC. Our results supported recent reports that humoral responses elicited by COVID-19 vaccines are comparable in PLWH and HC within 4 weeks [23][24][25], and we further demonstrated a similar outcome over a longer period of time, supporting the current advice for PLWH to be immunized with COVID-19 vaccines. In previous studies, PLWH with CD4+ T cell counts <200 cells/µL have shown diminished SARS-CoV-2 antibody production after acute infection [34], as well as blunted immune responses to multiple vaccine types [35]. Our data showed that poor immunological response was associated with significantly lower S-RBD-IgG levels, suggesting that the impaired humoral response to the COVID-19 vaccine in PLWH is possibly related to CD4+ T cell counts. In line with previous studies, CD4+ T cells, especially T-follicular helper (Tfh) cells, are required for the induction of high-affinity antibody responses and the formation of long-lived B cell memory. The structural changes in the germinal center and the functionally altered Tfh cells driven by HIV replication, together with the consequent impaired interaction between Tfh cells and germinal center B cells, might contribute to the impaired immune response [36,37]. In addition, a third booster shot of a COVID-19 vaccine was reported to potentially provide more protection in the general population [38]. Whether adding additional doses for poor immunological responders is worthwhile needs to be further investigated. Our study has several limitations. First, this study was an observational study over a short period, and the sample size was small. Second, the single-center, on-the-spot survey introduced sampling bias; for example, most participants were male, highly educated, and adherent to ART, so the results might not be generalizable to a random population sample. Third, limited by the nature of a cross-sectional study, the data on adverse reactions after vaccination were collected from the patients' recall, which might introduce recall bias and needs to be verified in large prospective studies. Conclusions In summary, we found that the rate of COVID-19 vaccine hesitancy in adult PLWH on ART with virological suppression was higher than that in the general population. 
Evidence of the safety and efficacy of COVID-19 vaccines is key to enhancing the rates of vaccine coverage. Overall, an inactivated COVID-19 vaccine is safe and tolerable. It is not associated with HIV RNA rebound but might increase CD4+ T cell counts. Finally, our results add to a growing body of evidence that PLWH develop similar humoral immune responses to an inactivated COVID-19 vaccine compared with the general population, but poor immunological responders might need more effective vaccination strategies.
6,071.8
2021-12-01T00:00:00.000
[ "Medicine", "Biology" ]
Symmetric Diffeomorphic Image Registration with Multi-Label Segmentation Masks : Image registration aims to align two images through a spatial transformation. It plays a significant role in brain imaging analysis. In this research, we propose a symmetric diffeomorphic image registration model based on multi-label segmentation masks to solve the problems in brain MRI registration. We first introduce the similarity metric of the multi-label masks to the energy function, which improves the alignment of the brain region boundaries and the robustness to the noise. Next, we establish the model on the diffeomorphism group through the relaxation method and the inverse consistent constraint. The algorithm is designed through the local linearization and least-squares method. We then give spatially adaptive parameters to coordinate the descent of the energy function in different regions. The results show that our approach, compared with the mainstream methods, has better accuracy and noise resistance, and the transformations are more smooth and more reasonable. Introduction Image registration plays a crucial role in biomedical imaging applications, especially brain imaging analysis. It aims to find a spatial transformation to align datasets across subjects, modalities, or times geometrically. A variety of imaging processing approaches require registration as a preprocessing step. For example, a considerable amount of structural or functional information can be obtained from the brain atlases established from images of a large population [1]. However, it is a great challenge to find potential links between the images because they are of different people, ages, or modalities. We can settle these images through image registration into a standard space where shapes and structures are well aligned. Many subsequent analyses, such as the analysis of anatomical and connectivity patterns, can be performed after image registration [2]. Image registration can be divided into linear and non-linear according to the representation of the transformation. Linear registration methods compute affine transformations. Nevertheless, linear registration generally fails to meet the demands of processing. The reason is that the physiological movements of human bodies may lead to the organs' unregulated changes in position, volume, and shape. Therefore, scholars focus their eyes on non-linear registration because affine transformations cannot describe these changes [3]. Non-linear registration methods are classified into two categories: model-driven and data-driven methods [4]. Model-driven means establishing the explicit expressions of optimization models and then obtaining transformations through optimization methods. In contrast, data-driven approaches do not require explicit expressions. Their main task is to compute the mappings from the image pairs to the transformations [5]. We focus on model-driven methods in our study. The reason is that data-driven methods are prone to overfitting due to the high dimensionality of medical images and the small number of training samples. Under model-driven settings, the transformations can be expressed as simple parametric functions, such as B-spline functions [6][7][8], radial basis functions [9][10][11], and thin-plate spline functions [12][13][14]. The registration models are then turned into parametric optimization models by doing this. The non-parametric treatment is also currently popular. 
Non-parametric methods consider the registration models as variational problems in which Euler-Lagrange equations should be solved. There are several common non-parametric approaches, such as large deformation diffeomorphic metric mapping (LDDMM) [15][16][17][18], elastic registration [19][20][21], fluid registration [22][23][24], and diffusion registration [25][26][27]. Non-parametric methods are more suitable than parametric methods for our research topic on brain magnetic resonance imaging (MRI) registration. We have known a lot of effective methods for brain MRI registration [28][29][30][31], which shows that non-parametric methods perform better than parametric methods. The main reason is that non-parametric methods set independent transformation functions at every pixel. Therefore, they have far higher degrees of freedom than parametric methods and can describe the complex deformation of structures in the brain more easily. Moreover, non-parametric methods have many acceleration techniques [32][33][34], which save the computational time greatly. However, the existing methods for brain MRI registration have several disadvantages. These methods pay closer attention to the intensity-based local similarity of the images than to the boundary alignment of the brain regions. The complex features of the brain, such as the sulcus gyrus of the cerebral cortex [35], may pull the optimization into local minima. Furthermore, the terminal conditions of these models are based on the average level of the global energy function descent. It easily leads to the situation that some brain regions have not yet been aligned when the iteration stops. Additionally, there is noise in the image acquisition process. These models do not emphasize the robustness to the noise, which can affect the performance in practice. In this study, we propose a symmetric diffeomorphic image registration method based on multi-label segmentation masks to compensate for the above shortcomings. Firstly, we introduce the similarity metric of the multi-label segmentation masks, i.e., the segmentation result of large regions, into the energy function, which strengthens the alignment of the region boundaries and improves the noise resistance. The acquisition of the masks is not difficult because today's deep learning-based segmentation models [36] offer significant improvements in accuracy and computation time compared to traditional segmentation methods [37][38][39]. Secondly, we give a selection of spatially adaptive parameters based on the masks. It can prioritize the optimization of both the image similarity metric in the aligned regions and the mask similarity metric in the unaligned regions, so it coordinates the decline of the energy function spatially. Thirdly, we design an effective approximation algorithm through model relaxation, least-squares method, etc. We validate the effectiveness of our method in three different experiments. The results show that our method has better accuracy and robustness compared to the mainstream approaches, and meanwhile, the deformation field retains excellent reversibility, smoothness, and reasonableness. Mathematical Background Knowledge Let F, M : Ω → [0, 1] denote the fixed image and the moving image, respectively. The domain of the images is Ω ⊂ R n , where n = 2 for 2D images and n = 3 for 3D images. The task of the non-linear image registration is to compute a deformation field (transformation) ϕ : Ω → Ω to make the warped moving image M • ϕ as similar as possible to the fixed image F. 
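To make the problem statement above concrete, here is a minimal NumPy/SciPy sketch, assuming 2D images sampled on a regular grid and a transformation stored as a coordinate map, of warping the moving image M with φ and evaluating an SSD-type similarity. It is an illustration of the general setup, not code from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def identity_field(shape):
    """Coordinate map of the identity transform for an image of shape (H, W)."""
    rows, cols = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return np.stack([rows, cols]).astype(float)

def warp(moving, phi):
    """Compute M∘phi; phi has shape (2, H, W) holding row and column coordinates."""
    return map_coordinates(moving, phi, order=1, mode="nearest")

def ssd(fixed, moving, phi):
    """Sum of squared differences between F and the warped moving image M∘phi."""
    return float(np.sum((fixed - warp(moving, phi)) ** 2))
```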
In the setting of non-parametric registration, the deformation field is expressed as $\varphi = \mathrm{Id} - u$, where $\mathrm{Id}$ is the identity map and $u$ is called the displacement field of $\varphi$. The accuracy of the transformation is measured by a similarity metric $E_1$, which often takes the form of the sum of squared differences (SSD), $E_1(\varphi) = \| F - M \circ \varphi \|_{L^2}^2$. In addition, the transformation is required to have the presupposed properties, which are measured by a regularization term $E_2$. The most common property is global smoothness, i.e., the $L^2$ regularization expressed as $E_2(\varphi) = \| \nabla \varphi \|_{L^2}^2$. The registration problem is written as $\min_{\varphi} \frac{1}{\sigma^2} E_1(\varphi) + E_2(\varphi)$, where $1/\sigma^2$ is a positive number that controls the balance between $E_1$ and $E_2$. We omit the notation $L^2$ for simplicity. The solution space of this problem can be restricted to the diffeomorphism group $\mathrm{Diff}(\Omega) = \{ \varphi \mid \varphi^{-1} \text{ exists and } \varphi, \varphi^{-1} \in C^{\infty}(\Omega, \Omega) \}$, which is a Lie group. Any tangent vector $v \in V$ of $\varphi_0 = \mathrm{Id}$ is a vector field on $\Omega$, where $V = T_{\mathrm{Id}}\mathrm{Diff}(\Omega)$ is a Lie algebra. We often call $v$ a velocity field in the research fields of registration [4]. A diffeomorphism can be generated by the exponential map on $\mathrm{Diff}(\Omega)$, i.e., $\varphi = \exp(v)$. It provides the intrinsic update step [40,41] on the diffeomorphism group, $\varphi \leftarrow \varphi \circ \exp(v)$, where $v$ is the velocity field update, and $\circ$ is both the function composition and the multiplication on the Lie group. A practical method to compute the exponential map of vector fields, given by Arsigny et al. [42], is based on the idea of "scaling and squaring" [43]. For any integer $N$, it holds that $\exp(v) = \left(\exp(N^{-1} v)\right)^{N}$ because of the properties of the exponential map. Supposing $v$ is a small vector field, it is reasonable to use the first-order approximation of the exponential map. We denote $w = \exp(v)$ as the resulting transformation, and the algorithm of the exponential map is described as follows (Algorithm 1). Algorithm 1 The first-order algorithm for the exponential map of vector fields: Choose a proper $N$ such that $2^{-N} v$ is close enough to $0$, e.g., $\max_x \| 2^{-N} v(x) \| \le 0.5$. Implement the first-order integration $w(x) = x + 2^{-N} v(x)$ for every $x \in \Omega$. Do $N$ recursive squarings $w \leftarrow w \circ w$. Therefore, in the above setting, the velocity field update of each iteration is selected in $V$ discretely, and the resulting velocity field is regarded as a constant. We often call it a stationary velocity field (SVF). By contrast, the velocity field of LDDMM [15][16][17][18] varies over time, i.e., $v = v_t$ is a continuous curve in $V$. The SVF methods, such as Diffeomorphic Demons [27] and Log-domain Diffeomorphic Demons [44], do not optimize the global variational problem like LDDMM because they do not update the velocity field over the whole time flow. However, the SVF methods have less computational cost and can thus obtain a diffeomorphism quickly. The Reassignment of the Segmentation Masks We can improve the boundary alignment of brain regions and the robustness to the noise through the multi-label segmentation masks of large regions, which are the regions of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). The masks are easily available because there is an obvious difference in the intensity values (see Figure 1). We propose reassigning the intensity values on the masks. Supposing there are $m$ images participating in the registration, we set the intensity value of the $r$-th region to be $B_r = \frac{1}{m} \sum_{j=1}^{m} \frac{1}{|\Omega_r^j|} \int_{\Omega_r^j} I_j(x)\, dx$, where $B$ is the segmentation mask, $I_j$ is the $j$-th image, and $\Omega_r^j$ is the $r$-th region on $I_j$. That is, we first average the intensity values over the region of one image and then average among the different images. 
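A small sketch of the reassignment just described, assuming NumPy arrays with integer region labels (e.g., 1 = WM, 2 = GM, 3 = CSF); the function name and label coding are illustrative only, not the paper's implementation.

```python
import numpy as np

def reassign_masks(images, label_masks, labels=(1, 2, 3)):
    """Average the image intensity inside each labelled region of every image,
    average those means across images, and write the result back into the masks."""
    region_means = {}
    for r in labels:
        per_image = [img[mask == r].mean() for img, mask in zip(images, label_masks)]
        region_means[r] = float(np.mean(per_image))   # average of per-image averages
    reassigned = []
    for mask in label_masks:
        out = np.zeros_like(mask, dtype=float)
        for r in labels:
            out[mask == r] = region_means[r]
        reassigned.append(out)
    return reassigned, region_means
```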
For example, we assume to use only two images I 1 , I 2 , and there are two regions on the images. The average intensity values of the first region of We then obtain the average intensity value on the first region, i.e., B 1 = 1 2 B 1 1 + B 2 1 . Therefore, we calculate the intensity values of the segmentation masks once throughout the process. Significantly, we need histogram matching [45] on the images before the reassignment because we should avoid the gap of the intensity values in the same region among different images. We add the information of the average intensity values of the regions to the masks through the reassignment. Consequently, the intensity magnitude of the masks coincides with that of the images. The spatial information of the region boundaries is also retained. Furthermore, we can improve the robustness to the noise by the reassignment. The reason is that the intensity values of the contaminated pixels are pulled back towards the average values on the regions when we introduce the similarity metric of the masks to the energy function. Therefore, the influence of the noise is weakened, and we improve the anti-noise ability of the model by doing this. The Proposed Model Firstly, we take the SSD as the similarity metric and the L 2 smoothness as the regularization term. In the integral form, the energy function is We restrict the solution space to the diffeomorphism group Diff(Ω). Therefore, we introduce the energy of the inverse transformation to ensure reversibility. The energy function is then extended to a symmetric form with the transformation and the inverse transformation as variables. We denote ψ as the slack variable of ϕ −1 and reformulate the energy function, Equation (4), as Next, we penalize Equation (6) to the energy function Equation (5), i.e., we add the inverse consistent constraint [46,47] We tie ϕ and ψ together in the energy function by doing this. However, we should retain both of them because there is an error between ψ and ϕ −1 , which will be amplified during calculation. Moreover, keeping E 3 symmetric can reduce the difficulty of solving because we can divide them into two subproblems later. After that, we introduce the similarity metric of the multi-label segmentation masks. We denote B F , B M : Ω → [0, 1] as the segmentation masks of the fixed image F and the moving image M, respectively. The magnitude of B F and B M is consistent with that of the images because we have carried out the reassignment of the masks. We use the symmetric SSD form, i.e., Minimizing E 4 helps the alignment of the region boundaries. It should be noted that the segmentation masks are different from the images even if they both take the SSD form. The reason is that the intensity value of the segmentation masks is a constant within one region, leading to E 4 = 0. Therefore, there is little force to cause deformation in the overlap of B F and B M when E 1 is not involved in the registration. Numerical Implementation Firstly, we split the registration problem Equation (9) into two subproblems so that the alternating iteration strategy can be applied. where k is the number of external iterations. Next, we consider only the subproblem Equation (11) in the following steps because Equation (10) can be solved in the same way. We introduce a slack variable c to avoid hard point-to-point correspondences following the strategy of Cachier et al. [48]. Consequently, we turn the subproblem Equation (11) into min ϕ,c where 1 σ 2 x controls the spatial correspondence error. 
We can separate Equation (12) further into two new subproblems: where l is the number of internal iterations. In addition, we will use the approximation of in Equation (13), i.e., applying the fixed image F on it, which changes it to It can match the magnitude of this item with that of the images and the masks. After that, we use the first-order approximation of the intrinsic update step c = ϕ k,l • exp v k,l to simplify Equation (13), where v k,l is the update velocity field. The approximation is reasonable because v k,l is small. Therefore, Equation (13) becomes where ) is a row vector, and We can rewrite Equation (16) as the following least-squares problem: where I is the identity matrix of size n. Finally, we obtain the least-squares solution of Equation (20) through the Sherman-Morrison formula: where we omit the position x for simplicity. Similarly, we can solve the least-squares problem acquired from Equation (10) from the above steps: where s k,l is the update velocity field of ψ k,l , and As for the new subproblem Equation (14), we can obtain the closed-form solution with a Gaussian convolution, i.e., ϕ k,l+1 = K * ϕ k,l • exp v k,l , where K is a Gaussian kernel. This is because of the special form of the regularization term [49]. Consequently, the closed-form solution of the subproblem Equation (10) is also obtained: The algorithm of our proposed method is as follows (Algorithm 2). Algorithm 2 Symmetric diffeomorphic image registration with multi-label segmentation masks Initialize ϕ , ψ , v , s. repeat {Update the backward transform ψ} repeat Compute the velocity field s using Equation (22). Compute the velocity field v using Equation (21). The Selection of Spatially Adaptive Parameters In this section, we consider only the parameters in Equation (21) because we can acquire those in Equation (22) similarly. Firstly, we need to review the traditional selection of parameters based on statistics. We assume that {FM(x)| x ∈ Ω} is independent and identically distributed (IID) in a normal distribution N(0, σ 2 1 ). This setting is reasonable when Ω is huge because of the Lindberg-Lévy central limit theorem. We choose the zero mean because we hope that the warped moving image fully aligns with the fixed image. We write the likelihood function, based on the maximum likelihood estimation (MLE), as The optimal estimation of σ 2 1 is obtained through −∂ ln(W)/∂σ 1 = 0, i.e., where |Ω| is the area of Ω. In particular, we can regard Ω as the neighborhood of a specific position x, denoted as Ω x , where we can change the radius of it arbitrarily. The extreme condition is when the radius is zero, leading to σ 2 1 (x) = FM(x) 2 , which is the selection of Thirion [50]. This parameter not only adjusts the influence between the similarity metric and the regularization term dynamically but also controls the magnitude of E 1 by transferring the distribution of FM(x)/σ 1 to N(0, 1). Next, we evaluate the misalignment based on the multi-label segmentation masks. We treat B F B M (x) as a continuous random variable. The reason is that the activation functions [51], such as the Sigmoid function and the tanh function, can smooth the gap around the area boundaries, although B F B M (x) can take only some values. We denote 1 as the indicator function, which fulfills 1(x) = 1 if x > 0 and 1(x) = 0 if x = 0. We then introduce the probability of misalignment based on the masks: where N x is the neighborhood of x. 
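Because the displayed Equations (16)–(22) are not reproduced in this text, the following sketch should be read as a generic stand-in for one forward pass of Algorithm 2 rather than the paper's exact solution: it uses a classic demons-style point-wise velocity update in place of the least-squares/Sherman–Morrison form, computes exp(v) with the scaling-and-squaring scheme of Algorithm 1, composes φ ∘ exp(v), and applies the two Gaussian kernels (a "fluid" kernel on the velocity update and a "diffusion" kernel on the displacement). All parameter values are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def identity_grid(shape):
    rows, cols = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return np.stack([rows, cols]).astype(float)

def compose(a, b):
    """(a ∘ b)(x): sample the coordinate map a at the coordinates given by b."""
    return np.stack([map_coordinates(a[d], b, order=1, mode="nearest") for d in range(a.shape[0])])

def exp_svf(v, max_step=0.5):
    """Algorithm 1: scaling and squaring of a stationary velocity field v of shape (2, H, W)."""
    n = 0
    if np.abs(v).max() > max_step:
        n = int(np.ceil(np.log2(np.abs(v).max() / max_step)))
    w = identity_grid(v.shape[1:]) + v / (2 ** n)   # first-order integration of 2^-n * v
    for _ in range(n):                              # n recursive squarings: w <- w ∘ w
        w = compose(w, w)
    return w

def demons_like_step(F, M, phi, sigma_x=1.0, kf=2.0, kd=0.5):
    """One forward update: velocity estimate, K_fluid smoothing, phi ∘ exp(v), K_diff smoothing."""
    warped = map_coordinates(M, phi, order=1, mode="nearest")
    diff = F - warped
    gy, gx = np.gradient(F)                         # fixed-image gradient as a simple force model
    denom = gy ** 2 + gx ** 2 + (diff ** 2) / sigma_x ** 2
    v = np.stack([diff * gy, diff * gx]) / np.maximum(denom, 1e-12)
    v = np.stack([gaussian_filter(v[d], kf) for d in range(2)])                   # K_fluid
    phi = compose(phi, exp_svf(v))                                                # intrinsic update
    ident = identity_grid(F.shape)
    disp = np.stack([gaussian_filter(phi[d] - ident[d], kd) for d in range(2)])   # K_diff
    return ident + disp
```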
The indicator function helps distinguish whether the alignment improves or the low-intensity values occur when B F B M (y) declines. This probability p is especially a widely used overlap metric called the "target overlap" [28] when N x is large enough to cover Ω r After that, we give our selection of spatially adaptive parameters. We think that 1/σ 2 i , i = 1, 2, 3 are connected because the decline of E 1 and E 3 is not meaningful unless the alignment of region boundaries is good. Therefore, the parameters should satisfy two requirements: • When the alignment is poor (p approaches 1), we reduce the effect of E 1 and E 3 and increase that of E 4 . When the alignment is good (p approaches 0), we do the opposite. • The role of E 1 should be stronger than that of E 3 because the decline of E 3 cannot improve registration accuracy. Therefore, we denote the parameters as functions of p, i.e., 1/σ 2 i = q i (p). Specifically, we define where λ ∈ [0, 1], and c 1 and c 2 are the MLEs of 1/σ 2 1 and 1/σ 2 2 (see Equation (27)), respectively. We denote the radius of N x as R. The parameters, λ and R, are both determined by the user because the influence of E 3 on E 1 and the amount of computation vary in different tasks. It is worth noting that we need to add a small positive number to the denominator of c i when c i is not well defined. Finally, we validate the effectiveness of the parameters. When 0 < p < 1, we express the ratio of the importance between the similarity metric of the images and that of the masks, i.e., the ratio of E 1 to E 4 , as We know that N x FM(y) 2 dy is constant when x is fixed. Meanwhile, we infer that p and N x B F B M (y) 2 dy are positively related based on Equation (28). We thus conclude from Equation (32) that q 1 /q 2 decreases when p increases, which means E 4 plays a key role; q 1 /q 2 increases when p decreases, which means E 1 plays a key role. Therefore, the parameters vary according to the current position and the decline of the energy function, which coordinates the optimization process spatially. Results In this section, we evaluate the performance of our method with three experiments. The experimental images include synthetic 2D images, the OASIS-1 dataset [52], and the IBSR18 dataset [53]. All these experiments are implemented using C++ in a Ubuntu 16 system, with two Intel Xeon Silver 4216 @2.1GHz CPUs and 128GB 2666MHz memory. Implementation Details and Algorithmic Comparison Firstly, we regard our algorithm's user-determined parameters. We choose the radius of N x to be R = 1 considering the calculation time. We select λ = 0.5, which means the ratio of E 1 to E 3 is 2 : 1. In addition, the algorithm has two terminal conditions, i.e., the maximum iterations and the minimum magnitude of the update step. Next, we implement the algorithm based on the open-source library Insight ToolKit (ITK) 5.1 (https://itk.org, accessed on 31 May 2022). We use the combination of two itkPDEDeformableRegistrationFilters to iterate alternatively, as shown in Figure 2a. There are four modules in itkPDEDeformableRegistrationFilter, as shown in Figure 2b. We denote two Gaussian kernels as K fluid and K diff , where K fluid smooths the update field and K diff smooths the deformation field. The exponential map is computed through itkExponentialD-isplacementFieldImageFilter. All modules are set to compute parallelly by the multithreading technique [54]. After that, we use two mainstream approaches for comparison, i.e., Diffeomorphic Demons and SyN. Vercauteren et al. 
[27], who proposed Diffeomorphic Demons, applied the SVF framework to the classic Demons algorithm [55], enhancing the smoothness and deformability of the transformations. SyN is a symmetric diffeomorphic registration method, which uses the local correlation coefficient (LCC) as the similarity metric. Avants et al. [56] provided the SyN approach based on the LDDMM algorithm [15], strengthening the registration accuracy and reversibility. Diffeomorphic Demons and SyN are both widely used to compare with the brain registration methods [28]. In addtion, K fluid and K diff are also used in these two approaches. Finally, we introduce the algorithmic comparison and related notations. We denote our method as Ours, our method without the inverse consistent constraint as Ours-NId, and our method without the similarity metric of the masks as Ours-NSeg. We also denote Diffeomorphic Demons [27] as DiffDe and SyN [56] as SyN. In addition, we denote the number of iterations as numIt and denote two Gaussian kernels K fuild and K diff as Kfσ and Kdσ, respectively, where σ is the standard deviation (stDev). All methods are performed only once at the highest resolution of the images for fairness. Moreover, we introduce the following five evaluation metrics considering accuracy, reversibility, reasonability, and smoothness. • Accuracy: The Dice ratio (DR(Ω)) [28] is defined by where r is the index of the regions, and |·| is the volume. It is a measure of region overlap, which should be as high as possible. • Reversibility: The identity error (IdErr) is defined by 1 |Ω| Id − ϕ • ψ 2 . It reflects the difference between the identity map and the composition of the forward and backward transformation, which should be as low as possible. Furthermore, we denote the masks of WM, GM, and CSF segmentation as WGCS, which are the multi-label segmentation masks B F and B M . We denote the detailed masks of brain regions' segmentation as BRS, which are used for computing the Dice ratio. The Ablation Experiments on Synthetic 2D Data In this section, we conduct ablation experiments to verify the effectiveness of E 3 and E 4 , i.e., the inverse consistent constraint and the similarity metric of the masks. The synthetic 2D data, with the size of 100 × 100, is a simple simulation of the brain, as shown in the first row of Figure 3. We generate the fixed image first with three ellipses of the same center. Then, we set the asymptotic intensity values to avoid being the same as the masks, as shown in the second row of Figure 3. We produce the moving image last by changing the long and short axes, which mimics the possible deformation in the brain [58]. We also apply a shear transform towards the x-axis with 84 • to test the ability to recover simple linear transformations. We will carry out registration with numIt = 300, which makes the optimization process converged. In particular, we set 20 external iterations × 15 internal iterations in our method. Moreover, we select two kinds of smoothing parameters, i.e., Kf2 Kd0.5 and Kf1 Kd1. The reason to select Kf1 Kd1 is that it is the same as the classic DiffDe [27], and many methods verify its effectiveness. By contrast, Kf2 Kd0.5 is a compromise between the standard setting of DiffDe and SyN [59]. Since DiffDe cannot obtain the inverse transformation in one registration, it performs an additional backward registration, i.e., exchanging M and F. The qualitative results are shown in Figure 4, and the quantitative results are shown in Tables 1 and 2. 
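A sketch of the evaluation metrics defined above, assuming label volumes and transformations stored as NumPy coordinate maps (2D here for brevity): a standard per-region Dice ratio, the identity error IdErr = (1/|Ω|)·‖Id − φ∘ψ‖², and the fraction of pixels with a negative Jacobian determinant, which relates to the reasonability measures. The exact normalizations used in the paper's tables may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dice_ratio(labels_fixed, labels_warped, background=0):
    """Mean Dice overlap over all non-background labels."""
    regions = np.setdiff1d(np.unique(labels_fixed), [background])
    scores = []
    for r in regions:
        a, b = labels_fixed == r, labels_warped == r
        scores.append(2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1))
    return float(np.mean(scores))

def identity_error(phi, psi):
    """Mean squared deviation of phi∘psi from the identity map; phi, psi: (2, H, W)."""
    comp = np.stack([map_coordinates(phi[d], psi, order=1, mode="nearest") for d in range(2)])
    rows, cols = np.meshgrid(np.arange(phi.shape[1]), np.arange(phi.shape[2]), indexing="ij")
    ident = np.stack([rows, cols]).astype(float)
    return float(np.mean(np.sum((comp - ident) ** 2, axis=0)))

def negative_jacobian_fraction(phi):
    """Fraction of pixels where det(J_phi) < 0 for a 2D coordinate map phi."""
    d00, d01 = np.gradient(phi[0])   # d phi_row / d row, d phi_row / d col
    d10, d11 = np.gradient(phi[1])   # d phi_col / d row, d phi_col / d col
    det = d00 * d11 - d01 * d10
    return float(np.mean(det < 0))
```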
Firstly, we discuss, based on Figure 4, the effect of the smoothing parameters. The standard deviations of Kf and Kd determine the sharpness of the deformation, as shown in the magnitude images of the transformations. It reveals that Kd affects smoothness the most because the higher the stDev of Kd, the smoother the transformation. In addition, the results of Kf2 Kd0.5 are better than those of Kf1 Kd1, as shown in the warped moving images and the residual images. It suggests that low stDevs lead to more accurate results. Therefore, we should choose the smoothing parameters carefully in practice to balance accuracy and smoothness. Secondly, we discuss, based on Tables 1 and 2, the results of the ablation experiments. Ours-NSeg shows excellent results in the items of IdErr, M(DetJ), and SMErr, but the DR is relatively low. However, the behavior of Ours-NId is just the opposite. We conclude that the similarity metric of the masks improves the accuracy, and the inverse consistent constraint improves the reversibility, reasonability, and smoothness. Therefore, the good results of Ours shows it combines the advantages of both E 3 and E 4 . In each part, the three rows display the warped moving images, the deformation grids, and the residual images, respectively. Different columns are the results of different methods. The color of the deformation grids, whose scale bar ranges from −4.5 to 5.8 pixels, represents the magnitude of the transformations. The color of the residual images, whose scale bar ranges from −3.0 to 3.2, represents the difference between the intensity values of the fixed image and the warped moving images. Thirdly, we discuss the comparison among DiffDe, SyN, and Ours. As described in Tables 1 and 2, DiffDe shows poor results in the items of IdErr, M(DetJ), and SMErr, although the warped moving images of DiffDe are very similar to those of Ours. Moreover, SyN achieves the best reversibility. However, the warped moving images of SyN are not satisfactory because the shapes of the white ellipses do not maintain, as displayed in Figure 4. By contrast, the shapes of Ours are good, which means Ours can recover the deformation. Moreover, Ours has the best accuracy and reasonability, as revealed in Tables 1 and 2. The Anti-Noise Experiments on the OASIS-1 Dataset In this section, we show the robustness of our method to the noise through the antinoise experiments. The OASIS-1 dataset [52], which we use, contains 416 subjects aged from 18 to 96. This dataset is a part of the OASIS project to make neuroimaging data freely available (https://www.oasis-brains.org, accessed on 31 May 2022). It is used widely in the comparison of registration algorithms [60][61][62]. Each subject is equipped with a T1-weighted skull-stripped image, a BRS of 35 brain regions, and a WGCS, as shown in the first three images of Figure 5. All the masks are verified by experts. The size of the images is 160 × 192 × 224 voxels with a voxel size of 1 × 1× 1 mm 3 . We selected five subjects in the OASIS-1 dataset randomly to avoid the influence of the sort order in the dataset. Next, we designate one subject as the fixed image and leave the other four as moving images. Specifically, the fixed image is #161, and the moving images are #227, #287, #319, and #333, respectively. We then add Gaussian noise of four standard deviations to the images, i.e., σ n = 0, 0.01, 0.025, 0.05, where 0 means no noise. Therefore, each method performs 4 × 4 = 16 registration. 
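The preprocessing for the anti-noise experiments, namely additive Gaussian white noise at the σ_n levels just listed followed by the histogram matching mentioned in the mask-reassignment section, can be sketched as follows; scikit-image is used here for histogram matching as a convenience, which is an assumption rather than the authors' implementation, and intensities are assumed to lie in [0, 1].

```python
import numpy as np
from skimage.exposure import match_histograms

NOISE_LEVELS = (0.0, 0.01, 0.025, 0.05)   # standard deviations used in the experiments

def add_gaussian_noise(image, sigma_n, seed=0):
    """Additive Gaussian white noise, clipped back to the [0, 1] intensity range."""
    if sigma_n == 0:
        return image.copy()
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma_n, size=image.shape), 0.0, 1.0)

def preprocess_pair(fixed, moving, sigma_n):
    """Noise injection for both images, then match the moving histogram to the fixed one."""
    fixed_n = add_gaussian_noise(fixed, sigma_n, seed=0)
    moving_n = add_gaussian_noise(moving, sigma_n, seed=1)
    return fixed_n, match_histograms(moving_n, fixed_n)
```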
The higher the standard deviation, the stronger the noise is, as shown in the last three images of Figure 5. We use additive Gaussian white noise because it is common in nature. After that, we carry out histogram matching to reduce the uncertainty caused by the different distributions of the intensity values in the images. Meanwhile, we reassign the intensity value of the masks. Finally, we conduct registration with numIt = 150. In particular, we set 10 external iterations × 15 internal iterations in our method. The smoothing parameters are chosen to be Kf2 Kd0.5 for all methods. The reason is that Kf2 Kd0.5 can obtain high accuracy while maintaining good smoothness, which is suggested by the quantitative and qualitative results in Section 3.2. Firstly, we analyze an example in which #227 and #161 are the moving image and the fixed image, respectively. The registration of this example is performed with the noise of σ n = 0.05. We present the warped BRSs in Figure 6, which can show the accuracy is poor if a large difference between the warped BRSs and the fixed BRSs occurs. The results of DiffDe and SyN are not satisfactory, e.g., DiffDe's and SyN's volume of the left and right ventricles is larger than the fixed BRS's. By contrast, Ours has the best results in this example because the volume and position of the brain regions are very close to the fixed BRS. Secondly, we plot how the DR varies with the noise intensity in Figure 7. The high slope of the line segments indicates that the noise can affect the accuracy greatly. The stability of DiffDe is weak because the DR decreases rapidly when the noise is enhanced. By contrast, SyN's performance is better than DiffDe's, e.g., the line segments of 0.01 → 0.025 are almost flat. The results of Ours are similar to those of SyN, but the overall performance of Ours is better. Consequently, Ours has strong robustness to the noise. The Performance Experiments on the IBSR18 Dataset In this section, we demonstrate the excellent comprehensive capabilities of our approach through the performance experiments. The Internet Brain Segmentation Repository (IBSR) contains T1-weighted MRI brain images of 18 subjects. The original images were preprocessed by the Center for Morphometric Analysis, Massachusetts General Hospital in Boston, U.S. [28]. We use the IBSR18 v2.0 dataset (https://www.nitrc.org/projects/ibsr/, accessed on 31 May 2022), in which Rohlfing et al. [53] modified the IBSR18 dataset by removing non-brain regions, etc. We can see this dataset is commonly used in the field of brain image registration [28,[63][64][65]. Each subject is equipped with a skull-stripped image, a BRS of 84 brain regions, and a WGCS. The size of the images is 256 × 256 × 128 voxels with a voxel size of (0.837 ∼ 1) × (0.837 ∼ 1) × 1.5 mm 3 . We first crop the unnecessary area of the images, which changes the size into 166 × 161 × 128. We also unify the voxel size to 0.97 × 0.97 × 1 mm 3 . We then choose the same smoothing parameters as in Section 3.3, i.e., Kf2 Kd0.5. After that, we conduct registration with numIt = 150. In particular, we set 10 external iterations × 15 internal iterations in our method. Finally, we select the first ten subjects in the dataset. We make each subject the fixed image and the other nine subjects the moving images. Therefore, each method performs 10 × 9 = 90 registration. Firstly, we analyze an example in which #02 and #01 are the moving and fixed images, respectively. 
We display, in Figure 8, the moving image, the fixed image, and the warped moving images of different methods. We enlarge representative positions to analyze the difference in the results conveniently. For example, the first row of Figure 8 shows the axial sections. The result of Ours is the most similar to the fixed image considering the contour of the left ventricle and the left thalamus. Furthermore, we can also verify the excellent performance of Ours from the second and third rows of Figure 8, representing the coronal and sagittal sections of the images, respectively. DiffDe Ours SyN Moving Image Fixed Image Axial Sagittal Coronal Figure 8. The registration results of an example in Section 3.4 are shown. All images are plotted in the axial, coronal, and sagittal sections. The first column displays the moving image #02, and the second column displays the fixed image #01. The third to fifth columns represent the warped moving images of different methods, i.e., DiffDe, SyN, and Ours, respectively. Secondly, we can also compare the results from the volumetric plots and point cloud maps of the evaluation metrics based on the above example. We show the volumetric plots of the DR for each brain region after stripping the upper part of the brain, as displayed in the first row of Figure 9. We can see that Ours achieves the optimal result because its color is the closest to blue. In the second row of Figure 9, we plot the volume of the SMErr of the transformations over the entire brain. All three methods appear to be smooth overall, but SyN shows more large deformations because there is more volume of red. We last display in the third row of Figure 9 the point cloud maps where the negative Jacobian determinant occurs. In contrast, SyN has the most points, DiffDe has the second most, and Ours has the fewest. Therefore, Ours shows, in this example, wonderful results considering accuracy, smoothness, and reasonability. SMErr DetJ < 0 Thirdly, we organize the quantitative results of 90 registration in Table 3, which lists the mean and standard deviation of each evaluation metric. The results of DiffDe are relatively stable because its standard deviations are lower than other methods' in most items. It is worth noting that SyN achieves the best result of the IdErr, but SyN is not remarkable in the remaining items. By contrast, Ours is the best considering the DR, P(DetJ), M(DetJ), and SMErr, which means the results of Ours have excellent accuracy, reasonability, and smoothness. Discussion We verify the advantages of our method through the experimental results. The multilabel segmentation masks are added to the model as a priori information. The reassignment ensures the masks contain both the spatial information of the region boundaries and the mean intensity values in the regions. Therefore, the masks improve the alignment of the regions and robustness to the noise. Moreover, the symmetric form of the model strengthens the reversibility, reasonability, and smoothness without losing much accuracy. Furthermore, we need the masks of large regions only, which are accessible easily in practice. The masks can also be changed with different region definitions. Finally, our method has a very small computational cost and can thus obtain transformations quickly because of the SVF framework. Our method also has some disadvantages. Firstly, the method relies on additional segmentation methods to obtain the masks. Secondly, the accuracy can be reduced if the inaccurate masks of large regions are used. 
Thirdly, the amount of calculation increases exponentially with the radius of N x , which is (2R + 1) n specifically, where n is the image dimensionality and R is the radius. If the image size and dimensionality are small, enlarging the radius is a good choice, which improves the accuracy of the parameter estimation. However, selecting a large radius is not applicable for high-resolution 3D medical images. We plan to provide the masks for the model by establishing a relating segmentation method in the future. In addition, we will research to strengthen the convergence of reversibility. We will also apply the method to multi-modality registration. Conclusions In this study, we propose a symmetric diffeomorphic image registration model based on the multi-label segmentation masks to solve the problems in brain MRI registration. We tackle the issue that existing methods pay little attention to the alignment of the region boundaries by introducing the similarity metric of the multi-label segmentation masks. It also improves the robustness to the noise. We build the model on the diffeomorphism group, with the relaxation method and the inverse consistent constraint to strengthen the smoothness and reversibility. Moreover, we help the descent of the energy function in different regions through the spatially adaptive parameters. Finally, we verify the effectiveness of our method through three experiments. Compared with mainstream methods, the approach has better accuracy and noise resistance, and the transformations are more smooth and more reasonable.
8,544.2
2022-06-06T00:00:00.000
[ "Computer Science" ]
Clinical Network Systems Biology: Traversing the Cancer Multiverse In recent decades, cancer biology and medicine have ushered in a new age of precision medicine through high-throughput approaches that led to the development of novel targeted therapies and immunotherapies for different cancers. The availability of multifaceted high-throughput omics data has revealed that cancer, beyond its genomic heterogeneity, is a complex system of microenvironments, sub-clonal tumor populations, and a variety of other cell types that impinge on the genetic and non-genetic mechanisms underlying the disease. Thus, a systems approach to cancer biology has become instrumental in identifying the key components of tumor initiation, progression, and the eventual emergence of drug resistance. Through the union of clinical medicine and basic sciences, there has been a revolution in the development and approval of cancer therapeutic drug options including tyrosine kinase inhibitors, antibody–drug conjugates, and immunotherapy. This ‘Team Medicine’ approach within the cancer systems biology framework can be further improved upon through the development of high-throughput clinical trial models that utilize machine learning models, rapid sample processing to grow patient tumor cell cultures, test multiple therapeutic options and assign appropriate therapy to individual patients quickly and efficiently. The integration of systems biology into the clinical network would allow for rapid advances in personalized medicine that are often hindered by a lack of drug development and drug testing. Introduction Cancer is a complex disease that is caused by a dysfunction of normal cell biology through genetic and non-genetic changes including epigenetic changes that corrode the cell's ability to promote cell death, resulting in a process of dysregulated growth and proliferation. Every year, approximately over 1.9 million people are diagnosed with cancer and 609,820 die from cancer in the United States alone [1]. The discovery of new diagnostic tools, immunotherapy, and novel therapies has helped to reduce the cancer death rate by 33% since 1991, but despite this positive milestone, the improvement in outcomes has not been uniform across all tumor types [1]. This is largely in part due to the heterogeneity of cancer as a multi-modal disease that is driven by a collection of genetic and non-genetic mechanisms, which means tumors from a single tissue type do not respond to the same therapies despite similar histological profiles [2][3][4]. Therefore, considerable effort has been invested over the last 20 years to understand the biology of cancer and more importantly cancer within individual patients to decipher the heterogeneity of cancer types [3,[5][6][7]. The revolution in next-generation sequencing, liquid biopsy, single-cell sequencing, proteomics, and other novel diagnostic techniques has generated large libraries of whole genome, transcriptome, epigenetic, proteomic, and metabolomic data [3,[5][6][7][8]. However, the relationship between the individual gene and protein discoveries is not intrinsic in affecting tumor pathology, and often times, intricate cascade effects in transcriptional, translational and post-translational modification limit therapeutic efficacy [9,10]. 
In essence, effective cancer therapeutics cannot be achieved through understanding a cancer's individual parts but requires a systems biology approach where large cross-collaborations of multi-modal scientists, clinicians, and experts collaborate to understand the entirety of the oncogenic network. Systems biology at its foundation is comprehending that the whole is greater than the sum of its individual parts and is a heuristic process of collaboration, prediction, and discovery that has yielded several scientific discoveries in the last century [11,12]. Within a biological system, key processes are necessary for system-level insight and understanding including system structures, systems dynamics, the control method, and the design method as initially described by Kitano et al. [13]. Within the cancer systems biology paradigm, the system structures can be separated into five networks including the gene regulatory network, molecular network, cellular network, organ network, and clinical and research network. The systems dynamics process aims to understand how cancer as a complex system of abnormal cell growth behaves and changes over time from an initial set of conditions [14]. The cancer control method of systems biology relies on modulating the state of the cell to limit cancer growth or induce apoptosis to validate potential therapeutic options [15,16]. The highest level of cancer systems biology is a design method or design principles where multi-dimensional models, from in silico mathematical models to cell cultures to organoids to mouse PDXs, are constructed to mimic and mirror the oncogenic properties of individual patients or a cohort of patients so that therapies can be tested and applied based on the definitive initial conditions of the tumor [16][17][18]. Due to this multiscale and multi-modal persistence of cancer, we propose a novel highly adaptive approach of clinical network cancer systems biology that integrates basic science expertise and novel methodology with physician-level expertise and patient access to achieve the dream of personalized medicine. With the advent of modern technology, especially machine learning and artificial intelligence (AI), it is noticeably clear that cancer systems biology ought to take on an integrated approach where preclinical biology, patient translational specimens, and clinical care are all merged under a singular umbrella. Systems Biology in Cancer One of the primary challenges in cancer is that it cannot be understood through a simplistic lens due to the nonlinear nature of the disease process and its subsequent evolution. At the organ level, cancers exhibit differential patterns, and more evidence has shown that cancer metastasis may have a deterministic pattern to its chaotic process where certain genotypes show a preference toward target organs [19]. Furthermore, the tumor tissue and its tumor microenvironment (TME) vary by cancer type, and recent evidence shows that the TME may have an active role in the proliferation, migration, invasion, survival, angiogenesis, and EMT within the cancer cell network [20]. This is further complicated by protein signaling networks and biochemical signaling pathways involved in cancer progression that are difficult to predict and overcome therapeutically due to distinct perturbations in genotypes and phenotypes that drive their formation and interaction [21,22]. 
At the lower magnification, genomic instability in DNA repair and maintenance mechanisms, as well as the disruption of epigenetic regulators, has led to the discovery of several genomic alterations and chromatin modifications. This has unfortunately led to a high failure rate, with only 6.7% of therapies reaching the phase II trial phase with regulatory approval between 2009 and 2018 [23]. Ultimately, the issue of cancer drug discovery is two-fold: while, with the help of next-generation sequencing, large cohorts of patients have been identified with novel targeted therapeutic options, such as EGFR-mutated NSCLC or BRCA2-positive breast cancer, numerous cohorts of patients have also been found with genomic alterations that have no clinically proven drug options, such as TP53, ARID1A, or PIK3CA [24]. The discovery of novel therapeutics based on recent preclinical biological discoveries is an iterative process within cancer systems biology that can be represented as a life cycle of research that combines wet-lab and dry-lab efforts to arrive at validated therapeutics (Figure 1). While traditionally systems biology begins its life cycle at preclinical basic research, this is different in cancer in that there is a wide breadth of data publicly available from large cancer databases such as TCGA, together with publicly available results from individual large cohort studies. This makes the life cycle of cancer systems biology more fluid in that initial discoveries or drug targets can be made prior to any wet-lab experiments through bioinformatics analyses and in silico modeling. Nevertheless, wet-lab analytical modeling involving cell lines, 3D spheroids, tumoroids, and in vivo experiments is a required stepping stone toward verifying an underlying therapeutic hypothesis, regardless of whether the foundation of that hypothesis was based on previous preclinical or clinical knowledge. Subsequently, predictive modeling and translational research go hand in hand in validating the clinical efficacy and viability of any therapeutic approach. This is then followed by biomarker discovery and computational modeling, where potential therapeutics attempt to find the "best-fit niche" for their mechanism of action. However, it is important to underscore that the cancer systems biology life cycle is nonlinear, and each step may flow back into the previous step where further analytical modeling and predictive modeling work is required based on the computational and biomarker findings, which in turn may require new hypotheses to be made. This has further importance in clinical trials and personalized medicine, where initial findings for the therapeutic in a clinical population, such as toxicity or various omics profiles, may yield results that require further drug optimization or drug repurposing. The arrival of next-generation sequencing in the clinical setting has allowed for the further stratification of individual cancer types beyond their histology or tumor locale. However, as mentioned previously, cancer systems biology is complicated by the fact that individual components of data do not represent the entire network of the cancer system. 
While genomic data has been valuable in developing targeted therapies and stratifying patients by biomarkers, it is not uniform with actionable mutation rates in patients varying from 10.8% to 90.6% depending on cancer type [25]. This leaves large cohorts of patients without viable therapeutic options. A recent example is EGFR to SCLC transformation following osimertinib therapy, which underscores the importance of non-genetic mechanisms at play in cancer resistance [26]. The underlying challenge for this beyond identifying the possible drug candidates and novel therapeutic approaches is clinical trial cost and a lack of clinical trial integration into the oncology standard of care, which in turn further increases clinical trial costs [27][28][29]. This is in part due to the traditional clinical trial model where cohorts of patients at different sites especially in the community network are screened for individual trials separately to identify an individual with a biomarker that is possibly present in less than 1% of that cancer population [27][28][29]. The implementation of large umbrella trials such as the Lung-MAP, ALCHEMIST, or NCI-MATCH trials that aim to screen patients' biomarkers and match the patients to appropriate therapies have been successful in the academic setting [30]. However, it has been reported that 40% of patients were more than 60 min away from a clinical trial location, which is a central issue in increasing NCI-MATCH trial recruitment in the community network setting [31,32]. This is further complicated by the lack of access to the community practice patients from the trial and drug development perspective in that often, the complex community network patients do not have access to trials that address their biomarker [33]. We believe the solution to these issues is a novel approach that integrates cancer systems biology with a concept that we previously identified called "Team Medicine" [34,35]. Team Medicine is a cross-collaborative effort to integrate basic scientists with clinicians to drive forward rapid-pace translational research. The merging of Team Medicine and cancer systems biology would result in a new paradigm called Clinical Network Systems Biology where the academic site, the clinical community network, and basic scientists at a research center would integrate under one umbrella to discover, develop, and test novel therapeutics at a rapid pace to achieve more personalized medicine ( Figure 2). The framework embodies the four biological networks involved in cancer including the organ network, cellular network, molecular network, and gene regulatory network, and it combines it with the clinical and research network that encompasses the primary academic site and community practice network. In the subsequent sections, we will delve deeper into the two components that comprise this framework by looking at the individual parts of the biological network that drive the patients' cancer and the various strategies that can be utilized in the clinical network to enhance the basics of systems biology toward precision medicine. Biological Network in Cancer Systems Biology Clinical Network Systems Biology is analogous to the Matryoshka nesting dolls: a set of wooden dolls of increasing size placed one inside another. Thus, Matryoshka serves as a great metaphor for a complex system. Analytically speaking, the metaphor is especially well suited since it is likened to thinking in systems. 
Broadly speaking, a system may be defined as an interconnected set of components that are organized toward a specific function or purpose. Complex systems are systems within a system. Indeed, a Clinical Network may be thought of as a complex system itself. Here, the biological network may be perceived as comprising the inner (smallest) doll representing a single cell with its gene network, i.e., the gene regulatory network (GRN) together with the non-genetic protein interaction network (PIN), which is followed by the next (bigger) doll representing the cellular networks that form tissues and comprise the individual organs and, finally, a bigger doll representing a network of organs that constitutes an individual. Thus, it follows that a Clinical Network is a complex system comprising many systems which may interact with each other through dependencies, competitions, relationships, or other types of interactions such as feedback loops between their parts or between the system and its environment. Such interactive systems are traditionally called complex adaptive systems (CAS), in which the biological behavior of one component does not predict the behavior of the other components. CASs are capable of self-organization that adapts to environmental stimuli, which increases their chances of survival. Therefore, due to the unpredictable and temporal nature of these systems, they cannot be studied with traditional tools and require analysis using nonlinear dynamical models that can accurately predict emergent behaviors, cellular plasticity, and cellular heterogeneity. GRN (Gene Regulatory Network): At the most fundamental level, a GRN is a group of genes that are characterized by gene expression and linked to one another through target gene nodes that regulate a specific cell function. Such interactions are genetically "wired" to ensure transgenerational transfer with high fidelity. Regulators of gene expression include transcription factors (TFs) that typically bind specific DNA sequence motifs and transcriptional regulators that typically interact with the basic transcriptional machinery and specific transcription factors.
Both TFs and regulators can act as either activators of gene expression or as repressors that repress gene expression. Other molecules that may also play important roles in regulating gene expression include RNA-binding proteins and regulatory RNAs. Elucidating the intricate regulatory relationships between TFs, transcriptional regulators and their targets is essential to understand cellular functions such as cell growth and division, differentiation, and development. They can also help shed light on evolution, especially in the past half a billion years or so [36]. Furthermore, identifying GRNs can also aid in understanding how the dysregulation of gene expression contributes to complex heritable diseases as well as diseases such as cancer that have both genetic and non-genetic underpinnings [37,38]. PIN (Protein Interacting Network): The proteins that result from differential gene expression regulated by the GRNs interact with their cognate partners to form cellular PINs. While it was initially believed that PIN configurations occur randomly, Barabási and colleagues showed that PINs have a "scale-free" architecture in which the degree distribution P(k) expresses a power-law behavior as a function of the degree k [39,40]. A major advantage of scale-free networks is that they are largely resistant to random node failure, but they are vulnerable to critical hub failures [39]. Intrinsically disordered proteins (IDPs) are proteins that lack unique 3D structures and constitute a significant fraction of the proteome [41,42]. Because IDPs exist as conformational ensembles (are highly malleable), they can interact with multiple partners [43]. Consistent with their unique ability to interact with multiple partners, IDPs occupy hub positions in the scale-free network and play critical biological roles including transcriptional regulation [44][45][46]. Furthermore, they also regulate several key processes such as cell cycle regulation and facilitate phenotypic plasticity [47][48][49][50]. Nevertheless, IDP dysregulation of expression can often bring about non-specific interactions and generate phenotypic plasticity due to PIN modulation. This heuristic can often discover dormant pathways in the network and result in phenotypic variability. When the environmental stressors are removed, the IDPs are capable of reconfiguring the PIN to its original state, which suggests a non-genetic mechanism in phenotypic reversal. However, when the stressors persist, they can result in chronic network frustration through the acquisition of DNA mutations and other genetic alterations, which can result in permanent phenotypic alterations. This pinpoints the genetic/non-genetic duality in nature such as the evolution of drug resistance in tumor cells. This duality helps us understand how non-genetic mechanisms are involved in acquired resistance through irreversible genetic alterations at the single cell level. Cellular Network: Individual cells, both in normal healthy tissue as well as in diseased tissue such as cancer for example, do not exist as individuals: they live in communities with other cells be it in their natural tissue environment or the tumor microenvironment. Therefore, they exhibit group behavior which can significantly influence their fitness. Thus, it is imperative to gain a systems perspective to fully understand their group behavior, leading to the expected physiological output or how cancer cells exploit group behavior to evade the toxic effects of a drug to eventually develop drug resistance. 
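The scale-free behavior of PINs described above, namely a power-law degree distribution that makes the network robust to random node failure but fragile to targeted hub removal, can be illustrated on a toy network. The sketch below uses the networkx library and a synthetic Barabási–Albert graph standing in for a real interactome; it is an illustration, not an analysis of actual protein interaction data.

```python
import random
import networkx as nx

# Toy "interactome": a Barabási-Albert graph is scale-free by construction.
G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)

def largest_component_fraction(graph):
    """Fraction of the original 2000 nodes remaining in the largest connected component."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(graph)) / 2000

# Remove 5% of nodes either at random or hub-first (highest degree, 'IDP-like' hubs).
random.seed(0)
n_remove = 100
random_removed = G.copy()
random_removed.remove_nodes_from(random.sample(list(G.nodes()), n_remove))
hubs_removed = G.copy()
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]
hubs_removed.remove_nodes_from([node for node, _ in hubs])

print("after random failure :", largest_component_fraction(random_removed))
print("after hub removal    :", largest_component_fraction(hubs_removed))
# The giant component typically survives random failure largely intact but shrinks
# much more after targeted hub removal, mirroring the robustness/fragility above.
```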
Nonetheless, previous studies have not investigated drug resistance from such a systems-level perspective. Most studies employ a reductionist approach focusing on a gene target, its mutated version(s), a pathway, or a small molecule. Alternatively, they endeavor to develop "intermittent/adaptive" therapy by studying group behavior at the population level but do not consider the role of individual molecules or the associated pathways. Organ network: The human body is a complex interconnected organ system where individual organs have their own morphology and functional diversity, which leads to temporary, shifting, nonlinear output biological changes. This process is interlinked in that one organ in the system has a direct effect on the behavior of the other systems. The multi-component organ systems regularly interface with one another through continuous feedback mechanisms and throughout varying scales of space and time to arrive at a precise physiological output. The lack of such coordinated interactions and communications can lead to the malfunction of individual systems or the entire organism [51]. Thus, it follows that a systems perspective rather than a reductionist approach is required to gain an in-depth understanding of the integrated physiologic function, which is an emergent phenomenon resulting from interactions between the diverse organ systems. Indeed, in recent years, a new field called network physiology has emerged [52,53]. The goal of network physiology is to horizontally integrate physiological systems where individual structures and regulation mechanisms lead to biological behavior and unique physiological functions. There is a necessity to develop innovative analytical instruments and theoretical structures to address dynamical networks observed in physiological systems, which has further underscored the need for a highly interdisciplinary 'Team Medicine' approach to the problem. Clinical Network in Cancer Systems Biology A Clinical Network may be likened to a complex system comprising individual physicians and physician-scientists at both academic and community practice sites who enhance cancer systems biology through biomarker discovery, translational research, and clinical trial enrollment with a focus on cross-collaborative precision medicine. Precision medicine is the tool that drives cancer systems biology, where the technologies of precision medicine are utilized in tandem with the clinical network to study the distinct biological and environmental factors of each patient toward the development of new therapeutics [54]. Precision medicine has revolutionized the field of cancer over the last two decades from identifying new cancer biomarkers, genetic alterations, and treatments to improving patient outcomes [55][56][57]. Despite all the successes, there are several shortcomings of current precision medicine that need to be addressed such as its incorporation into the clinical cancer network, more consistent serial specimen collection, and increased collaboration between the researchers and clinicians to harness the research network in real time [58,59]. Here, we introduce the clinical network as a part of cancer systems biology and build upon our approach by proposing a model for an AI-driven drug-matching algorithm. One crucial issue that needs to be resolved for the further widespread adaptation of precision oncology is the consistent use of biomarker platforms at the community and independent oncology clinic level. 
The availability of biomarker testing among practicing oncologists differs based on their geographical location and practice type with reported rates varying from 0.1% to 100% in actionable biomarkers in community practices, indicating the need for further policies that ensure all cancer patients have access to precision oncology [32,60]. Limited resources at both the clinics and in the community are a few of the multiple factors that contribute to this disparity [61]. Furthermore, the utilization of multiplex biomarker tests in clinical practice varied significantly among oncologists, and since many reported mixed confidences in interpreting these results, evidence-based guidelines and deploying pathways with the combination of physician education efforts may combat this issue [62]. The implementation of large panel omics testing across the clinical network would improve biomarker discovery in cancer systems biology. A multifaceted approach is needed to encompass as many solutions as possible comprising a wide array of parameters to include infrastructure changes such as the expansion of academic centers to incorporate community clinics or geographical sites into one large oncology network, the use of clinical pathways, and the development of molecular tumor boards within those networks and at the patient level such as community engagement, education, and empowerment [61,[63][64][65]. Our previous work highlighted the importance of a strong integrated clinical and research network at both academic and community practice sites [29,[32][33][34]66,67]. Oncology pathways that guide physicians have been implemented across the City of Hope network, and applying such a strategy can ensure that patients are assigned appropriate therapies based on their biomarker profile both in the academic and community practice settings [66,67]. Most cancer patients start their cancer journey with a community oncologist, and the main reason they are referred to an academic site is to enroll in a clinical trial; nevertheless, cooperation and communication between sites needs to increase [68]. The complete incorporation and cross-collaboration of clinical trial systems from the lowest levels (e.g., community sites) to the highest levels (e.g., national networks) is critical in expanding access to clinical trials, which are specifically biomarker-driven [30,67]. The decentralization of clinical trials conducted in the clinical network would address disparities of care, access to care, and raise trial accrual rates that will accelerate the cancer systems biology drug discovery pipeline [69]. We have previously designed a pyramidal decision support framework that leverages this cross-collaboration through four distinct levels including a clinical pathway program, network and academic clinician consultations, disease team tumor boards, and complex oncology case discussions [33]. This would allow for a better examination of rarer cancer-type populations such as Nuclear protein of the Testis (NUT) carcinomas or narrow targets for traditionally hard-to-treat cancers such as pancreatic cancer [30,[70][71][72]. 
Additionally, collaboration among multidisciplinary cancer teams and sub-specialties is a vital component of clinical network systems biology, where knowledge and expertise need to be diversified beyond individual cancer specialists, for example through the involvement of pathologists, radiologists, and others, to improve patient outcomes, particularly in complex cases and through the utilization of Precision Oncology Tumor Boards [33]. Baseline and serial sample collections need to be improved across the network. The use of technologies such as liquid biopsies and single-cell sequencing can help determine the early signs of possible recurrence of early-stage cancers, monitor treatment response, and follow the evolutionary heterogeneity between cancer clones [73][74][75]. Yet, despite the prevalence and importance of biobanking protocols at institutions, many fail to capture the necessary specimens and data to accelerate their adoption network-wide [76][77][78]. The previously mentioned obstacles to precision medicine, and more recently personalized medicine, have largely been due to the high cost of various sequencing techniques as well as the cost of drug development or repurposing, and they have been compounded by a lack of high-throughput drug screening. With the advent of liquid biopsies, it is now possible to study circulating tumor cells and detect protein expression from standard blood as well as cerebrospinal fluid (CSF) in patients with leptomeningeal metastases [74,79]. Advances in microbiome analysis have resulted in the identification of temporal changes in microbiome composition as a potential marker for immunotherapy response [80]. Microbiome discoveries have also led to novel techniques such as fecal microbiota transplants, which have been shown in advanced melanoma to help immunotherapy-resistant patients overcome anti-PD-1 resistance [81]. Novel biopsy analysis techniques to detect and study circulating cancer cells, epigenetic modifications, point mutations, translocations, amplifications, deletions, chromosomal abnormalities, protein expression, and phosphorylation are now more readily used for liquid and tissue samples. Alongside this, the development of rapid 3D cell cultures and tumor organoids allows for high-throughput drug screening [82][83][84]. The recent developments in artificial intelligence, specifically machine learning, can further enhance personalized drug screening, match patients quickly with appropriate therapies, and discover new therapeutics or candidates for drug repurposing [82,85,86]. Taken altogether, harnessing the clinical data and specimens and the research network, we have designed and proposed a novel real-time AI-driven drug-matching algorithm that could be utilized to enhance future personalized medicine (Figure 3). Additionally, the hope is that this technology ultimately assists in drug discovery and the development of novel therapies by taking advantage of retrospective samples leading to clinical trials.
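The drug-matching pipeline of Figure 3 is described here only at a conceptual level. The snippet below should therefore be read as a deliberately simplified, hypothetical illustration of the kind of rule-based biomarker-to-therapy matching that an AI-driven system would extend and learn from data; the biomarker names are real, but the rule table, therapies, and evidence scores are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class MatchRule:
    biomarker: str        # e.g. an alteration reported by an NGS panel
    therapy: str          # candidate therapy or trial arm (hypothetical)
    evidence_level: int   # smaller = stronger evidence (hypothetical scale)

# Hypothetical, hard-coded knowledge base; a real system would curate or learn this.
RULES = [
    MatchRule("EGFR_L858R", "osimertinib", 1),
    MatchRule("BRCA2_mut", "PARP inhibitor", 1),
    MatchRule("PIK3CA_mut", "PI3K inhibitor trial", 2),
    MatchRule("ARID1A_mut", "ATR inhibitor trial", 3),
]

def match_patient(biomarkers):
    """Return candidate therapies sorted by (hypothetical) evidence level."""
    hits = [r for r in RULES if r.biomarker in biomarkers]
    return sorted(hits, key=lambda r: r.evidence_level)

patient_profile = {"TP53_mut", "PIK3CA_mut"}
for rule in match_patient(patient_profile):
    print(rule.therapy, "(evidence level", rule.evidence_level, ")")
```

Note that the TP53 alteration in this toy profile matches no rule, which mirrors the point made earlier that many discovered alterations still lack clinically proven drug options.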
Conclusions Cancer systems biology has been instrumental in the recent discoveries of precision medicine. Furthermore, the integration of traditional basic science and clinical cancer researchers with a multidisciplinary team of scientists from other fields has allowed for the study of cancer at multiple scales with a deeper understanding of its biology and evolution. While sequencing cost previously remained a barrier for clinical research, novel technologies have made it possible to quantitate tumor samples beyond genomic sequencing toward understanding protein expression and phosphorylation, epigenetic alterations, chromosomal abnormalities, and other non-genetic mechanisms in real-world clinical samples. Furthermore, adaptive therapy (also known as intermittent therapy) based on the principles of ecology and evolution may help address the issue of drug resistance, which is almost inevitable [87,88]. This has allowed for the study of cancer biology at multiple scales enhanced by the traditional experimental and computational models. However, further cross-collaboration and integration between individual academic sites, national cancer networks, and community practices is required to achieve truly personalized medicine. The implementation of these ideas, powered by recent advances in artificial intelligence and machine learning, would in the future allow for personalized high-throughput drug screenings that would yield faster drug discoveries and approved therapeutics.
6,725.6
2023-07-01T00:00:00.000
[ "Biology" ]
First measurement of quarkonium polarization in nuclear collisions at the LHC (ALICE Collaboration, Physics Letters B 815 (2021) 136146, https://doi.org/10.1016/j.physletb.2021.136146) Introduction Quarkonia, bound states of charm (c) and anticharm (c̄) or bottom (b) and antibottom (b̄) quarks, represent an important tool to test our understanding of quantum chromodynamics (QCD), since their production process involves both perturbative and nonperturbative aspects. At high energy, the creation of the heavy quark-antiquark pair is a process that can be described using a perturbative QCD approach, due to the large value of the charm and bottom quark masses (m_c ∼ 1.3 GeV/c², m_b ∼ 4.2 GeV/c² [1]). However, the subsequent formation of the bound state is a nonperturbative process that can be described only by empirical models or effective field theory approaches. Among those, models based on Non-Relativistic QCD (NRQCD) [2] give the most successful description of the production cross section, as measured at high-energy hadron colliders (Tevatron, RHIC, LHC) [3][4][5][6][7][8][9][10][11][12][13][14]. In the NRQCD approach, the non-perturbative aspects are parameterized via long-distance matrix elements (LDME), corresponding to the possible intermediate color, spin and angular momentum states of the evolving quark-antiquark pair. The values of the LDMEs need to be fitted on a subset of the available measurements and can then be considered as universal quantities, in the sense that they can be used in the calculation of production cross sections and other observables corresponding, for example, to different collision systems and energies. Other theory approaches, such as the Color Singlet Model [15], the Color Evaporation Model [16] and the k_T-factorization approach [17], are also used to describe the quarkonium production process. Among the various charmonium states, the J/ψ meson, with quantum numbers J^PC = 1^−−, was the first to be discovered. It is surely the most studied, also due to the sizeable decay branching ratio to dilepton pairs ((5.961 ± 0.033)% for the μ+μ− channel [1]) that represents an excellent experimental signature. While the J/ψ production cross sections are well reproduced by NRQCD-based models, it was soon realized that describing the measured polarization of this state represents a much more difficult problem [18]. The polarization, corresponding to the orientation of the particle spin with respect to a chosen axis, can be accessed via a study of the polar (θ) and azimuthal (φ) production angles, relative to that axis, of the two-body decay products in the quarkonium rest frame. Their angular distribution W(θ, φ) is parameterized as W(θ, φ) ∝ 1/(3 + λ_θ) × [1 + λ_θ cos²θ + λ_φ sin²θ cos 2φ + λ_θφ sin 2θ cos φ] (1), with the polarization parameters λ_θ, λ_φ and λ_θφ corresponding to various combinations of the elements of the spin density matrix of J/ψ production [19].
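As a quick numerical cross-check of Eq. (1), the function below evaluates the angular distribution, up to a global normalization, for a given set of polarization parameters; the special cases λ_θ = ±1 discussed next can be plugged in directly. This is an illustrative sketch, not code from the analysis.

```python
import numpy as np

def W(theta, phi, lam_theta, lam_phi, lam_thetaphi):
    """Dilepton angular distribution of Eq. (1), up to a global normalization."""
    return (1.0 / (3.0 + lam_theta)) * (
        1.0
        + lam_theta * np.cos(theta) ** 2
        + lam_phi * np.sin(theta) ** 2 * np.cos(2.0 * phi)
        + lam_thetaphi * np.sin(2.0 * theta) * np.cos(phi)
    )

theta = np.linspace(0.0, np.pi, 5)
print("transverse   (lam_theta = +1):", W(theta, 0.0, +1.0, 0.0, 0.0))
print("longitudinal (lam_theta = -1):", W(theta, 0.0, -1.0, 0.0, 0.0))
# lam_theta = +1 peaks at cos(theta) = +/-1, while lam_theta = -1 peaks at cos(theta) = 0.
```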
In particular, the two cases (λ_θ = 1, λ_φ = 0, λ_θφ = 0) and (λ_θ = −1, λ_φ = 0, λ_θφ = 0) correspond to fully transverse and fully longitudinal polarization, respectively. At leading order, the high-p_T production is dominated by gluon fragmentation and therefore the J/ψ would be expected to be transversely polarized [18]. However, the results from the CDF experiment at the Tevatron showed that the J/ψ exhibits a very small polarization [20,21], an observation which was impossible to reconcile with the NRQCD prediction. As of today, on the experimental side, accurate results on inclusive and prompt (i.e., removing contributions from b-quark decays) J/ψ polarization have become available at LHC energies [22][23][24][25]. They confirm that this state shows little or no polarization in a wide rapidity (up to y = 4.5) and transverse momentum region (from 2 to 70 GeV/c), with the exception of the LHCb measurements at √s = 7 TeV [24], where the value λ_θ = −0.145 ± 0.027, corresponding to a weak longitudinal polarization, was obtained in the interval 2 < p_T < 15 GeV/c and 2 < y < 4.5, in the helicity frame (its definition will be given later in Sec. 3). On the theory side, a huge effort was pursued in order to move to a complete next-to-leading order (NLO) description of the J/ψ production process [26,27], and to the calculation of the polarization variables [28,29]. Further important progress includes a quantitative evaluation of the contribution of feed-down processes (J/ψ coming from the decay of χ_c and ψ(2S) states) on the polarization observables [30]. It was shown that at NLO there are rather large cancellations between contributions corresponding to the different possible combinations of the spin and angular momentum of the intermediate cc̄ states, reaching a more satisfactory description of the absence of polarization observed in the data [31]. However, those descriptions usually require the inclusion of both cross section and polarization results in the fit of the LDMEs, leading to a more limited predictive power on the polarization observables and to large variations in the extracted LDME values, depending on the set of data used for their determination. Finally, the description of J/ψ production in the NRQCD framework was recently extended to the low-p_T region, and the polarization parameters were studied in a color glass condensate (CGC) + NRQCD formalism, obtaining a fair agreement with LHC data at forward rapidity [32]. Measurements of the polarization parameters are also available for several bottomonium states, and in particular for the ϒ(1S), ϒ(2S) and ϒ(3S) resonances, which were shown to exhibit little or no polarization at LHC energies [33][34][35]. Approaches similar to that adopted for charmonium, which also need to take into account the rather complex feed-down decay structure for these states, lead to a fair agreement with the experimental results [36]. In this Letter, we move a step forward by presenting the first measurement of J/ψ and ϒ(1S) polarization in ultrarelativistic heavy-ion interactions, performed by the ALICE Collaboration by studying Pb-Pb collisions at √s_NN = 5.02 TeV. Such collisions represent an important source of information for the investigation of the phase diagram of QCD [37], and in particular for the study of the properties of the quark-gluon plasma (QGP), a state of matter where quarks and gluons are not confined inside hadrons [38].
Among the experimental observables studied in heavy-ion collisions the suppression of heavy quarkonium production is a fundamental signal, since QGP formation prevents the binding of the heavy-quark pair due to the screening of the color charge [39] and, more generally, has strong effects on the spectral functions [40]. At LHC energies, another mechanism, corresponding to the (re)generation of charmonium states in the QGP and/or when the system hadronizes, becomes relevant [41,42], in particular at low p T , due to the large charm-quark multiplicity (> 100 pairs in a central Pb-Pb collision). The presence of a deconfined system may in principle affect also the polarization of quarkonium states. In Ref. [43] the observation of a partial transverse polarization for the J/ψ was predicted in case of QGP formation, due to a modification of the non-perturbative effects in the high energy-density phase. More generally, the observed prompt J/ψ are known to be a mixture of direct production and decay products from higher-mass charmonium states (ψ(2S), χ c ). In nuclear collisions, since suppression effects are expected to affect more strongly the less bound states, the relative contribution of direct and feed-down production would change with respect to that in pp collisions, and the overall measured polarization may be different according to the potentially different polarization of the various states [44,45]. On the other hand, the contribution of the regeneration mechanism in the J/ψ formation process by recombination of uncorrelated cc pairs is likely to give rise to unpolarized production at low p T . Finally, the possible presence of polarization is known to strongly affect the acceptance for J/ψ detection in the dilepton decay (up to 20-30% in ALICE [22]), and its measurement is an important requisite for an unbiased evaluation of the absolute yields in nuclear collisions. A first measurement of ϒ(1S) polarization in Pb-Pb collisions is also presented in this Letter, even if the corresponding candidate sample is smaller by a factor ∼30, leading to larger uncertainties. For such a state, considerations similar to those discussed for the J/ψ should hold, except that the contribution of the regeneration mechanism should be negligible due to the much lower multiplicity of bottom quarks with respect to charm. The next sections of the Letter are organized as follows. Section 2 contains a short description of the experimental apparatus and some details on the data sample used in this analysis. The analysis procedure and the evaluation of systematic uncertainties are presented in Sec. 3, while the results on the J/ψ and ϒ(1S) polarization parameters λ θ , λ φ and λ θφ are shown in Sec. 4. The conclusions are presented in Sec. 5. Experimental setup and data sample The measurement described in this Letter is performed with the ALICE detector [46,47], whose main components are a central barrel and a forward muon spectrometer. The latter covers the pseudorapidity region −4 < η < −2.5 and is used to detect muon pairs from quarkonium decays [48]. The muon spectrometer includes a hadron absorber made of concrete, carbon and steel with a thickness of 10 interaction lengths, followed by five tracking stations (cathode-pad chambers), with the central one embedded inside a dipole magnet with a 3 T·m field integral. 
Downstream of the tracking system, an iron wall filters out the remaining hadrons as well as low-momentum muons originating from pion and kaon decays, and is followed by two trigger stations (resistive plate chambers). Another forward detector, the V0 [49], composed of two scintillator arrays located at opposite sides of the interaction point (IP) and covering the pseudorapidity intervals −3.7 < η < −1.7 and 2.8 < η < 5.1, provides the minimum bias (MB) trigger which is given by a coincidence of signals from the two sides. Among the central barrel detectors, the two layers of the Silicon Pixel Detector (SPD), with |η| < 2 and |η| < 1.4 coverage, and corresponding to the inner part of the ALICE Inner Tracking System (ITS) [50], are used to determine the position of the interaction vertex. Finally, the Zero Degree Calorimeters (ZDC) [51], located on either side of the IP at ± 112.5 m along the beam axis, detect spectator nucleons emitted at zero degrees with respect to the LHC beam axis and are used to reject electromagnetic Pb-Pb interactions. The analysis is based on events where, in addition to the MB condition, two opposite-sign tracks are detected in the triggering system of the muon spectrometer (dimuon trigger). The dimuon trigger selects tracks each having a transverse momentum above a threshold nominally set at p μ T = 1 GeV/c, corresponding to the value for which the single-muon trigger efficiency reaches 50% [52]. The single-muon trigger efficiency reaches a plateau value of 98% at ∼ 2.5 GeV/c. The events are further characterized according to their centrality, i.e., the degree of geometric overlap of the colliding nuclei. It is estimated by means of a Glauber model fit to the V0 signal am- plitude distribution [53,54], with more central events leading to a larger signal in the V0. In this analysis, events corresponding to the most central 90% of the inelastic Pb-Pb cross section are selected, as for these events the MB trigger is fully efficient and the residual contamination from electromagnetic processes is negligible. The results of the analysis are obtained using the √ s NN = 5.02 TeV Pb-Pb data samples collected by the ALICE experiment during the years 2015 and 2018, corresponding to an integrated luminosity L int ∼ 750 μb −1 . Data analysis The J/ψ and ϒ(1S) candidates are formed by combining opposite-sign muons reconstructed using the tracking algorithm described in Ref. [48]. In order to reject tracks at the edge of the spectrometer acceptance, the condition −4 < η μ < −2.5 is required. In addition, tracks must have a radial transverse position at the end of the absorber in the range 17.6 < R abs < 88.9 cm. This selection is applied to remove tracks passing through the inner and denser part of the absorber, which are strongly affected by multiple scattering. For each muon candidate, a match between tracks reconstructed in the tracking system and track segments in the muon trigger system is required. The J/ψ polarization parameters λ θ , λ φ and λ θφ are studied as a function of transverse momentum in the intervals 2 < p T < 4, 4 < p T < 6 and 6 < p T < 10 GeV/c. For each p T interval, a twodimensional (2D) grid of dimuon invariant-mass spectra is created, corresponding to intervals in cos θ and φ, where θ and φ are the polar and azimuthal emission angles, respectively, of the decay products in the J/ψ rest frame, with respect to the reference axis. 
More in detail, the 2D grid covers the fiducial region −0.8 < cos θ < 0.8 (17 intervals) and 0.5 < φ < π − 0.5 rad (8 intervals, assuming a symmetric distribution around φ = π), with the choice of the boundaries as well as the width of the intervals dictated by acceptance considerations. The analysis is performed choosing two different reference systems for the determination of the angular variables. In the Collins-Soper (CS) frame the z-axis is defined as the bisector of the angle between the direction of one beam and the opposite of the direction of the other one in the rest frame of the decaying particle, allowing therefore an evaluation of the polarization parameters with respect to the direction of motion of the colliding hadrons. In the helicity (HE) reference frame the z-axis is given by the direction of the decaying particle in the center-of-mass frame of the collision, and therefore the polarization can be evaluated with respect to the momentum direction of the J/ψ itself. The φ = 0 plane is the one containing the two beams in the J/ψ rest frame. For each dimuon invariant-mass spectrum, the J/ψ raw yield is obtained by means of a binned maximum likelihood fit in the interval 2.1 < m_μμ < 4.9 GeV/c². The background continuum is parameterized with a Gaussian distribution whose width varies linearly with the mass or, alternatively, with a fourth degree polynomial function times an exponential. The J/ψ signal is modeled with a pseudo-Gaussian function or with a Crystal Ball function with asymmetric tails on both sides of the peak [55]. The J/ψ mass is kept free in the fits, while the width for each (cos θ, φ) interval is fixed, i.e., obtained by scaling the resonance width extracted from Monte Carlo (MC) simulations for that interval (σ_J/ψ^(i,j,MC)) by the ratio between the width obtained by fitting the angle-integrated spectrum in data (σ_J/ψ) and in MC (σ_J/ψ^MC) for the p_T interval under consideration. The parameters of the non-Gaussian tails of the resonance are kept fixed to the MC values. The ψ(2S) contribution, although comparatively negligible, is also taken into account in the fits, with its mass and width fixed in each fit relative to those of the J/ψ, using the Particle Data Group (PDG) masses taken from Ref. [1]. In Fig. 1 (left) an example of a fit to the invariant-mass spectrum in the J/ψ mass region is shown. Due to the stability of the extracted J/ψ parameters (mass, width), the fits were carried out directly on the sum of the 2015 and 2018 invariant-mass spectra. The J/ψ raw yields as a function of the angular variables are then corrected by the product of the acceptance and detector efficiency (A × ε), which is evaluated as a function of cos θ and φ on a 2D grid via MC simulations. The J/ψ are generated according to p_T and y distributions directly tuned on data [56] via an iterative procedure [57], and their decay muons are propagated inside a realistic description of the ALICE setup, based on GEANT 3.21 [58]. The misalignment of the detection elements and the time-dependent status of each electronic channel during the data taking period are taken into account as well. In the J/ψ generation an isotropic distribution of decay products, corresponding to the assumption of no polarization, is adopted. Due to the choice of relatively small (cos θ, φ) intervals, the A × ε values for each interval are quite insensitive to the specific angular distribution assumed in the generation.
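A compact way to see the two bookkeeping steps described above, anchoring the per-interval signal width to the angle-integrated data/MC ratio and correcting the raw yields by A × ε on the (cos θ, φ) grid, is sketched below with made-up numbers on a 17 × 8 grid. This is an illustration of the arithmetic, not the analysis code.

```python
import numpy as np

# Width anchoring: sigma_ij = sigma_ij_MC * (sigma_data / sigma_MC) per (cos_theta, phi) cell.
sigma_data_integrated = 0.072   # GeV/c^2, hypothetical fit to the angle-integrated data spectrum
sigma_mc_integrated   = 0.066   # GeV/c^2, hypothetical MC value for the same p_T interval
sigma_mc_grid = np.full((17, 8), 0.065)               # per-cell MC widths (hypothetical)
sigma_grid = sigma_mc_grid * (sigma_data_integrated / sigma_mc_integrated)

# Acceptance x efficiency correction of raw yields on the same grid.
raw_yields = np.random.poisson(lam=400, size=(17, 8)).astype(float)   # toy raw J/psi yields
acc_eff    = np.clip(np.random.normal(0.25, 0.05, size=(17, 8)), 0.05, None)
corrected  = raw_yields / acc_eff
corrected_err = np.sqrt(raw_yields) / acc_eff          # Poisson errors propagated through A x eps

print("anchored width in one cell :", round(sigma_grid[0, 0], 4), "GeV/c^2")
print("corrected yield in one cell:", round(corrected[0, 0], 1))
```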
The three polarization parameters λ θ , λ φ and λ θφ are obtained through χ 2 -minimization fits of the 2D J/ψ distributions, cor- rected for acceptance and efficiency, according to Eq. (1). For each combination of signal and background shape used in the fit to the dimuon invariant-mass spectra, a separate evaluation of the polarization parameters is carried out and their average is taken as the best estimate. The statistical uncertainty is given by the average of the statistical uncertainties of the 2D fits, while the root mean square of the results provides the systematic uncertainty on the signal extraction, with the absolute values ranging between 0.002 and 0.039. The overall procedure described above was checked beforehand with a MC closure test. The 2D fits on the (cos θ , φ) distributions only allow a determination of the absolute value of λ θφ , due to the presence of sin 2θ in the corresponding term that induces an ambiguity in its sign. It is checked that the values of λ θ and λ φ are stable against the choice of the sign of the λ θφ term. In the following the λ θφ values corresponding to the choice of a positive sign are quoted. Fig. 2 illustrates an example of the fit to the angular distributions. For better visibility, both the distribution and the fitted function are projected along one dimension. In addition to the systematic uncertainty related to the choice of the mass shapes for signal and background, several other sources are taken into account. First, an alternative procedure for extracting the J/ψ signal is carried out, by keeping its width as a free parameter in the invariant-mass fits. The corresponding results for the polarization parameters are then obtained and the averages of the values corresponding to fixing the width or not are taken as the central values for λ θ , λ φ and λ θφ . Half the difference between the results obtained with free or MC-anchored widths is then considered as a further systematic uncertainty related to the signal extraction. This uncertainty is found to be the leading contribution to the total absolute systematic uncertainty on the polarization parameters, and ranges between 0.001 and 0.063, the latter value corresponding to the uncertainty on λ HE θ for 2 < p T < 4 GeV/c. Another source of systematic uncertainty is related to the evaluation of the trigger efficiency. The muon trigger response function as a function of the single muon transverse momentum p μ T can be obtained via MC or with a procedure based on data [59]. Small deviations are found for p μ t < 2 GeV/c which induce an effect on A × ε for the J/ψ . Therefore, the polarization parameters are recalculated with A × ε values weighted in such a way to account for the deviations. The variation of the polarization parameters between the different trigger efficiency estimates is taken as the related systematic uncertainty, with values ranging from 0.001 to 0.043, the highest values being found for λ HE θ in 2 < p T < 4 GeV/c. The systematic uncertainty related to the evaluation of the muon tracking efficiency is found to be negligible for this analysis, allow-ing a significant reduction of the total systematic uncertainty with respect to previous pp analyses [23]. Indeed, although the difference between efficiencies calculated via MC or from data [59] is of the order of 2%, a detailed investigation has shown no dependence on the angular variables and therefore no effect on the polarization parameters. 
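The χ²-minimization step itself can be sketched with a least-squares fit of Eq. (1) to a toy corrected distribution: generate pseudo-data with known parameters, then recover λ_θ, λ_φ and λ_θφ. This is an assumption-laden illustration (binning, uncertainties, and normalization are invented) rather than the ALICE fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, norm, lam_theta, lam_phi, lam_thetaphi):
    """Eq. (1) with a free normalization, evaluated on flattened (cos_theta, phi) bin centers."""
    cos_t, phi = X
    sin2 = 1.0 - cos_t ** 2
    sin_2t = 2.0 * cos_t * np.sqrt(sin2)
    return norm / (3.0 + lam_theta) * (
        1.0 + lam_theta * cos_t ** 2 + lam_phi * sin2 * np.cos(2 * phi)
        + lam_thetaphi * sin_2t * np.cos(phi)
    )

# Toy binned, acceptance-corrected yields generated with known parameters.
cos_centers = np.linspace(-0.75, 0.75, 17)
phi_centers = np.linspace(0.7, np.pi - 0.7, 8)
CT, PH = np.meshgrid(cos_centers, phi_centers, indexing="ij")
X = (CT.ravel(), PH.ravel())
truth = (1.0e4, 0.15, -0.05, 0.03)
errors = 0.03 * model(X, *truth)                  # hypothetical 3% relative uncertainty
rng = np.random.default_rng(1)
yields = model(X, *truth) + rng.normal(0.0, errors)

popt, pcov = curve_fit(model, X, yields, sigma=errors, absolute_sigma=True,
                       p0=(1.0e4, 0.0, 0.0, 0.0))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["norm", "lam_theta", "lam_phi", "lam_thetaphi"], popt, perr):
    print(f"{name:13s} = {val:8.3f} +/- {err:.3f}")
```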
Finally, the systematic uncertainty induced by the choice of the p_T and y distributions used as an input for the calculation of A × ε is evaluated by testing alternative p_T and y parameterizations, which are obtained by varying within their uncertainties the default distributions directly tuned on Pb-Pb data. The polarization parameters extracted with the modified values of A × ε are compared with those obtained with the default input shapes, and the corresponding systematic uncertainty extracted in this way is found to range between 0.001 and 0.030, with the largest value assigned to λ_θ^HE for 2 < p_T < 4 GeV/c. The influence of the choice of the angular distributions of the J/ψ decay products for the A × ε calculation is also investigated by means of an iterative procedure on these input distributions. The effect is found to be negligible, also due to the fact that the 2D correction procedure on the angular variables is by definition relatively insensitive to the specific choice of the corresponding distributions. A summary of the values of all the absolute systematic uncertainties, which are considered as uncorrelated as a function of p_T, is reported in Table 1 (Table 1 caption: Summary of the absolute systematic uncertainties on the evaluation of the J/ψ polarization parameters, given separately for the helicity and Collins-Soper frames; all the uncertainties are considered as uncorrelated as a function of p_T). The total systematic uncertainties are obtained, for each parameter and p_T interval, as the quadratic sum of the values. A similar procedure is followed for the extraction of the ϒ(1S) polarization parameters. Due to the smaller candidate sample, integrated values over the kinematic interval 2.5 < y < 4, p_T < 15 GeV/c are obtained. The main difference with respect to the 2D approach followed for the J/ψ is the use of a simultaneous fit to the 1D angular distributions [23], after integration over the other variables. The requirement p_T^μ > 2 GeV/c, which helps reducing the combinatorial background, is included [60]. The ϒ(1S) signal extraction in the various cos θ and φ intervals is performed by means of invariant-mass fits (see the right panel of Fig. 1 for an example). The functions chosen for the resonances are the same as in the J/ψ analysis (pseudo-Gaussian or Crystal Ball), the mass value is fixed to that obtained from a fit to the integrated invariant-mass distribution, while the width for each angular interval is fixed to the MC value scaled by the ratio of the widths between data and MC for the angle-integrated distributions. The tail parameters are fixed to MC values. The small contribution from ϒ(2S) is also included in the fits [60]. The background continuum is parameterized with a Gaussian distribution whose width varies linearly with the mass or, alternatively, with a second degree polynomial function times an exponential. The systematic uncertainty on the signal extraction is calculated with the same procedure adopted for the J/ψ. An uncertainty related to the choice of the signal width has also been considered, taken as the half-difference between the results obtained with the prescription described above and using as an alternative prescription the pure MC values. The uncertainty on the trigger efficiency is negligible, due to the additional requirement on the single-muon transverse momentum which selects a p_T region where the trigger efficiency is very high and its evaluation via data and MC is consistent.
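The "quadratic sum" used for the totals above is simply addition in quadrature of the individual, p_T-uncorrelated contributions; a small helper with invented per-source values makes the bookkeeping explicit.

```python
import math

def total_systematic(contributions):
    """Combine independent absolute systematic uncertainties in quadrature."""
    return math.sqrt(sum(c * c for c in contributions))

# Hypothetical per-source absolute uncertainties on lam_theta in one p_T interval:
# signal extraction, width prescription, trigger efficiency, MC input shapes.
print(total_systematic([0.039, 0.063, 0.043, 0.030]))   # about 0.09
```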
Finally, the procedure for the determination of the uncertainty related to the ϒ(1S) kinematic distributions used in the MC is the same as for the J/ψ. The total systematic uncertainties for the ϒ(1S) analysis are reported in Table 3, together with the results. Results The polarization parameters for J/ψ inclusive production in Pb-Pb collisions at √s_NN = 5.02 TeV in the helicity and Collins-Soper reference frames are shown in Fig. 3 as a function of p_T, together with the LHCb results obtained in pp collisions [24] (Fig. 3 caption: the LHCb markers are shifted horizontally by +0.3 GeV/c for better visibility and refer to the rapidity interval 3 < y < 3.5; the error bars represent the total uncertainties for the pp results, while for Pb-Pb statistical and systematic uncertainties are plotted separately as a vertical bar and a shaded box, respectively; the left part of the plot reports the polarization parameters in the helicity reference frame, the right part those for the Collins-Soper frame). The difference with respect to the pp results reaches 3.3σ in the interval 2 < p_T < 4 GeV/c in the helicity reference frame, where pp data [24] indicate a small but significant degree of longitudinal polarization, while the Pb-Pb results favor a slightly transverse polarization. In Pb-Pb collisions at LHC energies, a significant fraction of the detected J/ψ originates from the recombination of cc̄ pairs in the QGP phase or when the system hadronizes. Moreover, the contribution from higher-mass charmonium states decaying to J/ψ could vary between pp and Pb-Pb due to different suppression effects for each state in nuclear collisions. Therefore, the observed hint for a different polarization in pp and Pb-Pb might be a reflection of the different production and suppression mechanisms in the two systems, but more precise data, along with quantitative theory estimates, are needed for a definite conclusion. It should also be noted that the ALICE results refer to inclusive production, while LHCb has measured prompt J/ψ. However, as discussed in Ref. [22], the size of the non-prompt component is small in the covered p_T region (of the order of 15% at high p_T) and its polarization was also measured to be small by CDF (λ_θ^HE ∼ −0.1 [21]), implying that the net effect of this source on inclusive J/ψ polarization should be negligible. In Table 3 the values of the ϒ(1S) polarization parameters are shown.
Table 3: ϒ(1S) polarization parameters in the helicity and Collins-Soper reference frames measured in Pb-Pb collisions at √s_NN = 5.02 TeV in the rapidity interval 2.5 < y < 4 and for transverse momentum p_T < 15 GeV/c. The first uncertainty is statistical and the second systematic.
Parameter | Helicity | Collins-Soper
λ_θ | −0.090 ± 0.395 ± 0.101 | 0.418 ± 0.526 ± 0.178
λ_φ | −0.094 ± 0.072 ± 0.020 | −0.141 ± 0.087 ± 0.033
λ_θφ | −0.074 ± 0.099 ± 0.020 | 0.017 ± 0.113 ± 0.024
The λ_θ values are consistent with zero, with large uncertainties that prevent a firm conclusion on the absence of polarization in nuclear collisions. The λ_φ and λ_θφ values are also consistent with zero. The relatively smaller uncertainties for these parameters are related to a more uniform acceptance distribution as a function of the azimuthal angular variable. Conclusions The first measurement of the polarization parameters for J/ψ production in nuclear collisions at LHC energies was carried out by the ALICE Collaboration in Pb-Pb interactions at √s_NN = 5.02 TeV. The λ_θ, λ_φ and λ_θφ parameters were evaluated in the helicity and Collins-Soper reference frames in the rapidity interval 2.5 < y < 4 and in the transverse momentum interval 2 < p_T < 10 GeV/c.
All the parameter values are close to zero, with a ∼ 2.1σ indication for a small transverse polarization in the helicity frame at low p T , and a corresponding indication for a small longitudinal polarization in the Collins-Soper frame (∼ 2.1σ effect). When comparing these results with pp data taken at higher energy at the LHC, an interesting feature is a significant difference in λ HE θ with respect to the LHCb results which showed instead a small longitudinal polarization in a similar kinematic domain. This first result obtained for J/ψ in nuclear collisions and described in this Letter represents therefore a starting point for future studies connecting such features with the known differences in the production mechanisms between pp and nucleus-nucleus collisions. Results were also obtained for the first time for the ϒ(1S) polarization, integrated over p T and y, showing, within the large uncertainties of the measurement, values compatible with the absence of polarization. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgements The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support pro- [1]
6,756.4
2021-04-10T00:00:00.000
[ "Physics" ]
Faults and Novel Countermeasures for Optical Fiber Connections in Fiber-To-The-Home Networks Introduction The number of subscribers to broadband services in Japan now exceeds 34 million, and about 20 million were using fiber-to-the-home (FTTH) services in December 2011 [1]. The number of optical fiber cables continues to increase as the number of FTTH subscribers increases; however, unexpected faults have occurred along with this increase. One such fault is damage caused by wildlife including rodents, insects, and birds [2], and another is that caused by defective optical fiber connectors [3]. It is very important to detect and investigate the causes of these faults and to apply correct countermeasures. The Technical Assistance and Support Center (TASC), Nippon Telegraph and Telephone (NTT) East Corporation is engaged in technical consultation and the analysis of optical fiber network faults for the NTT group in Japan and is contributing to eliminating the causes and reducing the number of faults in the optical fiber facilities of FTTH networks. The TASC has investigated and reported faults in various fiber connections using refractive index matching material with wide gaps between fiber ends and faults in fiber connectors with imperfect physical contact [4][5][6]. This chapter describes some of the faults with optical fiber connections in FTTH networks that the TASC has investigated. In addition, it introduces novel countermeasures for dealing with the faults. The various faults and countermeasures described in this chapter are shown in Fig. 1. First, section 2 briefly reviews a typical FTTH network and various fiber connections in Japan. Then section 3.1 reports faults with fiber connections that employ refractive index matching material. These faults have two major causes: one is a wide gap between fiber ends and the other is incorrectly cleaved fiber ends. Next, section 3.2 describes faults with fiber connections that employ physical contact (PC). This fault has the potential to occur when connector endfaces are contaminated. The characteristics of these faults are outlined. Novel countermeasures against the above-mentioned faults are introduced in section 4.
In section 4.1, a new connection method using solid refractive index matching material is proposed as a countermeasure against faults caused by a wide gap between fiber ends. In section 4.2, a fiber optic Fabry-Perot interferometer based sensor is introduced as a way of detecting faults caused by incorrectly cleaved fiber ends. The sensor mainly uses laser diodes, an optical power meter, a 3-dB coupler, and an XY lateral adjustment fiber stage. In section 4.3, a novel tool for inspecting optical fiber ends is proposed as a countermeasure designed to detect faults caused both by incorrectly cleaved fiber ends and contaminated connector endfaces. The proposed tool has a simple structure and does not require focal adjustment. It can be used to inspect a fiber and clearly determine whether it has been cleaved correctly and whether the connector endfaces are contaminated or scratched. This chapter is summarized in section 5. Fiber-to-the-home network and various fiber connections Figure 2 shows the configuration of a typical FTTH network in Japan, which is mainly composed of an optical line terminal (OLT) in a central office, underground and aerial optical fiber cables, and an optical network unit (ONU) inside a customer's home. The network requires various fiber connections at the central office, outdoors, and in homes. With the aerial and home-sited fiber connections in particular, field installable connectors or mechanical splices are used to make it possible to employ the most suitable wiring for the aerial condition and room arrangement. Field assembly (FA) termination connectors and field assembly small (FAS) connectors are types of field installable connectors [7][8]. In contrast, manufactured physical contact (PC)-type connectors, such as miniature-unit coupling optical fiber (MU) and single fiber coupling optical fiber (SC) connectors [9][10], are used in central offices and homes. These connectors require more frequent reconnection than field installable connectors. Figure 3(a) shows the structure of a PC-type connector, 3(b) shows that of a mechanical splice, and 3(c) shows that of a field installable connector. With PC-type connectors, two ferrules are aligned in an alignment sleeve and connected using compressive force. Normally, two fiber ends in ferrules are connected without a gap and without offset or tilt misalignment. A mechanical splice is suitable for joining optical fibers simply in the field. It consists of a base with a V-groove guide, three coupling plates, and a clamp spring. When a wedge is inserted between the plates and the base, optical fibers can be inserted through the V-groove guide to connect and fix them in position by releasing the wedge between the plates and base [11]. Refractive index matching material is used to reduce Fresnel reflection. This connection procedure requires no electricity.
A field installable connector is composed of three main parts, a polished ferrule containing a short optical fiber (built-in optical fiber), a mechanical splicer, and a clamp.This connector holds the optical fiber drop cable or indoor cable sheath.To assemble the connection, the optical fiber end is cleaved and connected to the built-in optical fiber using a mechanical splice, and the cable sheath is fixed in the clamp.The structure allows connection to another optical fiber connector in the field.In addition, the field installable connector is fabricated based on the above-mentioned mechanical splice technique; therefore, the connection can be assembled without the use of special tools or electricity.Both the mechanical splice and the field installable connector use the same fiber end preparation process before fiber installation.Figure 4 shows the fiber end preparation procedures.The coating of a fiber is stripped.Then the stripped fiber (bare fiber) is cleaned with alcohol, cut with a cleaver, inserted into the mechanical splice or the splicer inside a field installable connector, and joined to the opposite fiber or built-in fiber.Finally, the inserted fibers are fixed in position with a clamp.Stripping, cleaning, and cutting are important for successful fiber connection (to provide good performance) in the field.If any of these procedures are not conducted correctly, the performance of the fiber connection might deteriorate. Faults with optical fiber connections This section reports some of the faults with optical fiber connections in FTTH networks that the TASC has investigated.First, faults related to fiber connection using refractive index matching material are reported in section 3.1.Faults involving PC fiber connection are described in section 3.2. Fiber connection with refractive index matching material There are two major causes of faults related to fiber connection using refractive index matching material: one is a wide gap between fiber ends and the other is an incorrectly cleaved fiber end.Figure 5 shows three connection models using refractive index matching material; (a) shows the normal connection state with a narrow gap between flat fiber ends, (b) shows an abnormal connection state with a wide gap between flat fiber ends, and (c) shows an abnormal state with an incorrectly cleaved (uneven) fiber end.With the normal connection (a), there is a very narrow (sub-micron) gap between the fiber ends because a normal fiber end is not completely flat.The very narrow gap is filled with silicone oil compound, which is used as the refractive index matching material in a normal connection.In the abnormal connection state (b) there is a very wide gap between the flat fiber ends, and the gap is not filled with refractive index matching material but is a mixed state consisting of refractive index matching material and air.In the abnormal connection state (c) there is a wide gap between flat and incorrectly cleaved (uneven) fiber ends.However, the gap between the fiber ends is filled with matching material. 
The optical performance of various fiber connections using refractive index matching material was investigated experimentally.Wide gaps were formed between flat fiber ends by using MT connectors [12] and feeler gauges.A feeler gauge (thickness gauge tape) was installed and fixed in place between the two MT ferrules of a connector with a certain gap size by using a clamp spring.By changing the thickness of the feeler gauge, gaps of various sizes were obtained [13].In contrast, incorrectly cleaved fiber ends were intentionally formed by adjusting the fiber cleaver so that the bend radius would be too small [14].The cracks in these incorrectly cleaved fiber ends were from 30 to 200 μm in the axial direction.Using these incorrectly cleaved fiber ends, we fabricated field installable connectors as experimental samples.The fabricated MT connector with a feeler gauge and field installable connector samples were subjected to a heat-cycle test in accordance with IEC 61300-2-22 (-40 to 70°C, 10 cycles, 6 h/cycle) to simulate conditions in the field.The insertion and return losses were measured.The insertion and return losses of an abnormal connection sample with a wide gap between flat fiber ends are shown in Fig. 6.The optical performance changed and was unstable.The insertion loss was initially 2.7 dB and then varied when the temperature changed.The maximum insertion loss exceeded 30 dB.The return losses also varied from 20 dB to more than 60 dB.This performance deterioration is thought to be caused by the mixture of refractive index matching material and air-filled gaps between the fiber ends in the MT connector sample. Current Developments in Optical Fiber Technology Refractive index matching material moved in the gap when the temperature changed, and the mixed state change of the refractive index matching material and the air between the fiber ends induced the change in optical performance.When there is a mixed state consisting of refractive index matching material and air between the fiber ends, the boundary between the refractive index matching material and air could be uneven.In this state, the transmitted light spread randomly in every direction at the boundary.Therefore, the insertion loss increased to more than 30 dB.Consequently, the optical performance of fiber connections with a wide gap between flat fiber ends might be extremely unstable and vary widely.Therefore, it is important to prevent the gap from becoming wider and avoid mixing air with the refractive index matching material in the gap between fiber ends for these fiber connections.The insertion and return losses of an abnormal connection sample with an incorrectly cleaved (uneven) fiber end also changed greatly and were unstable.Figure 7 shows a scatter diagram plotted from the measured insertion and return loss values to enable the values to be easily and simultaneously understood.The horizontal lines indicate insertion loss and the vertical lines indicate return loss.The scatter diagram plots minute insertion and return losses that occurred during the heat cycle test.There are both huge vertical and horizontal fluctuations in the plotted data in Fig. 
7.The insertion and return loss values changed periodically during temperature cycles.The initial insertion loss was low at about 1 dB and the initial return loss was high at more than 40 dB.The insertion loss increased greatly and then the return loss decreased as the temperature changed.At worst, the insertion loss changed to 43 dB and the return loss changed to 28 dB.The great changes in the insertion and return losses are also attributed to a partially air-filled gap.The gap was not completely filled with refractive index matching material and thus consisted of a mixed state of refractive index matching material and air because of the incorrectly cleaved fiber ends.The boundary between the refractive index matching material and air could be uneven.The transmitted light in this state spread randomly in every direction at the boundary.Therefore, the insertion and return losses became much worse.When the gap was filled with refractive index matching material and there was no air, the optical performance was not so bad.When the gap was a mixed state of refractive index matching material and air, the optical performance deteriorated.The connection state is thought to vary with temperature.These results suggest that the insertion and return losses of fiber connections using incorrectly cleaved fiber ends might change to, at worst, more than 40 dB for the former and less than 30 dB for the latter.Consequently, it is important to prevent gaps between the correctly and incorrectly cleaved ends of fiber connections from becoming wider, and air from mixing with the refractive index matching material in the gaps.Therefore, incorrectly cleaved fiber ends must not be used.An effective countermeasure is to check the fiber ends cleaved Current Developments in Optical Fiber Technology with fiber cleavers.Reference [6] is recommended to those readers requiring a more detailed analysis of these abnormal connection states. Physical Contact (PC) type connector This section discusses the deterioration in optical performance caused by the contamination of manufactured physical contact (PC)-type connectors.It has been reported that contamination on a PC-type connector may significantly degrade the performance of mated connectors [15][16][17].In this report, contamination was found on the connector endface and the sides of the connector ferrule.To study the effect of contamination on the optical performance of mated connectors, various connection conditions for PC-type connectors in abnormal states are discussed.The abnormal connection conditions are shown in Fig. 
8.With PC-type connectors, two ferrules are aligned in an alignment sleeve and connected using compressive force.Normally, two fiber ends in ferrules are connected without a gap and without offset or tilt misalignment.However, if contamination is present, the connection state might become abnormal.An abnormal state can be induced by four conditions: (A) light-blocking caused by contamination on the fiber core, (B) an air-filled gap caused by contamination, (C) tilt misalignment caused by contamination, and (D) offset misalignment caused by contamination.Conditions (A) to (C) are caused by contamination on the ferrule endface.Conditions (C) and (D) are caused by contamination on the side of the ferrule.The performance deterioration caused by contamination (abnormal state) is calculated using the ratio of core contamination coverage and the Marcuse equations [18].Figure 9 shows the individual calculated insertion losses for the four abnormal conditions.In condition (A), as the core contamination coverage ratio increases, the insertion loss increases.When the ratios are 0.5 and 0.8, the insertion losses are 3 and 7 dB, respectively.This connection condition could degrade the return loss due to the difference between the refractive indices of the fiber core and contamination.Condition (B) may be caused by contamination on the ferrule endface or fiber cladding.As the gap width becomes larger, the calculated insertion loss increases.The insertion loss caused by an air-filled gap is dependent on wavelength.When the wavelengths are 1.31 and 1.55 μm, the insertion losses of a 50-μm gap are 1.0 and 0.4 dB, respectively.This connection condition could also degrade the return loss caused by the difference in the refractive index between the fiber core and air [19].Condition (C) may be caused by contamination on the edge of the ferrule endface and on the side of the ferrule.As the tilt angle increases, the calculated insertion loss increases.The insertion loss caused by tilt misalignment is dependent on wavelength.When the wavelengths are 1.31 and 1.55 μm, the insertion losses for a 3º misalignment angle are 1.4 and 1.3 dB, respectively.This connection condition might also have a detrimental effect on the return loss due to the difference between the refractive indices of the fiber core and air.Condition (D) may be caused by contamination on the side of the ferrule.When the offset is larger, the calculated insertion loss is higher.The insertion loss caused by offset misalignment is also dependent on wavelength.When the wavelengths are 1.31 and 1.55 μm, the insertion losses of a 3-μm offset are 1.9 and 1.5 dB, respectively.Current PC-type connectors usually have a small clearance between the outer diameter of the ferrule and inner diameter of the alignment sleeve.Therefore, the offset and tilt angle cannot be so large that the insertion losses become low.Conditions (A) and (B), which are caused by contamination on the fiber and ferrule endfaces, are thought to mainly affect the optical performance of connectors.Faults with PC-type connectors caused by contamination were investigated experimentally.Figure 10 shows examples of the investigated connectors.Figure 10 (a) is a normal sample (no contamination on the connector ferrule endface), and (b) to (e) are samples with contamination on the connector ferrule endface.The insertion losses at 1.31 and 1.55 μm were both 0.1 dB and the return loss at 1.55 μm was 58 dB for the uncontaminated sample.This optical performance is good and satisfies the required 
specifications for an SC connector.However, with the contaminated ferrule endfaces of samples (b) to (d), the insertion losses varied and exceeded 0.5 dB.The return losses were less than 40 dB.This optical performance does not satisfy the specifications for an SC connector.The losses with samples (b) and (c) are thought to be due to condition (A), and the loss with sample (d) is thought to be due to condition (B).With contamination sample (e), the optical performance was not bad and satisfied the SC connector specifications.Consequently, if there is contamination on a PC-type connector, the performance might deteriorate.An effective countermeasure against the loss increase caused by contamination is to inspect the PC-type connector endface prior to connection.When the connector endface is contaminated it must be cleaned with a special cleaner [20].The countermeasures against connector endface contamination and incorrect cleaving are effective in reducing connector faults. Novel countermeasures This section introduces novel countermeasures designed to deal with the faults described above.In section 4.1, a new connection method using solid refractive index matching material is proposed as a way of dealing faults caused by wide gaps between fiber ends.In section 4.2, a fiber optic Fabry-Perot interferometer based sensor is introduced as a countermeasure designed to prevent faults caused by incorrectly cleaved fiber ends.In section 4.3, a novel tool for inspecting optical fiber ends is proposed as a technique for detecting both faults caused by incorrectly cleaved fiber ends and those caused by contaminated connector endfaces. New connection method using solid refractive index matching material The optical performance of fiber connections with a wide gap between flat fiber ends might be extremely unstable and vary widely.This performance deterioration may not occur immediately after installation but intermittently over time.In the event of an unusual fault, it is difficult to find the defective connection, and it takes long time to repair the fault.Therefore, it is important to prevent the gap between fiber ends from becoming wider in joints that employ refractive index matching material.A novel optical fiber connection method that uses a solid resin as refractive index matching material has been proposed [21].The new connection method provides a high insertion loss that exceeds the loss budget between network devices when there is a wide gap between fiber ends (defective connection) and a suitable low insertion loss when the gap is less than a par-Current Developments in Optical Fiber Technology ticular width (normal connection).The experimental optical performance of the proposed connection method is also discussed in this section. The following two points are important as regards the new refractive index matching material. i. An elastic solid resin must be used that has almost the same refractive index as the fiber core. ii. Refractive index matching material with a particular width should be inserted between fiber ends (A and B) and tilted at a special tilt angle to the optical axis of the fiber. 
The refractive index matching material must maintain its shape; therefore, a solid resin is used since the connection state cannot be easily changed.Figure 11(a) and (b) show the principles of this connection method.The incident light is refracted at the boundary surface of the refractive index matching material when the fiber ends do not touch it (there is a wide gap between the fiber ends, as shown in Fig. 11(a)).In this case, there is a high insertion loss because of the offset misalignment.In contrast, the incident light will travel straight into the refractive index matching material when it is touched by both fiber ends (the gap between the fiber ends is less than a particular width, as shown in Fig. 11(b)).The insertion loss in Fig. 11(b) is much lower than that in Fig. 11(a).A connection method using solid refractive index matching material based on the abovementioned considerations was designed and used in the following procedure. First, a target low insertion loss was set when the gap was narrower than a particular width d and then the particular width of the solid matching material on the optical axis of the fiber was determined. Then the target high insertion loss was set when the gap was a wider than a particular width d and a special tilt angle θ was determined for the solid refractive index matching material.In step 1, the insertion loss caused by the gap between the fiber ends was calculated by using a Marcuse equation [18].The insertion loss should be less than 0.5 dB to satisfy the mechanical splice specifications.However, when the insertion loss was 0.5 dB, d was 60 μm, which was too small to handle the refractive index matching material.Therefore, the target d was doubled to 120 μm.The insertion loss then became 2 dB. In step 2, the insertion loss caused by the misalignment of the offset was calculated by using another Marcuse equation [3].Another target insertion loss of 20 dB was determined in order to exceed the loss budget between network devices.The insertion loss became 20 dB when θ was 16°. A sample made of the solid refractive index matching material (silicone resin) was fabricated based on the above parameters.Experiments were carried out with mechanical splices and samples of solid matching material.A groove was dug with the same shape as the sample, and the sample was tilted at 16° to the optical axis of the fiber, as shown in Fig. 12(a).A state was maintained whereby fiber end B always touched the sample, and fiber end A gradually moved toward the sample (Fig. 12(b)-(d)).The insertion and return losses were measured for different gap widths.Figure 12(b) shows the state in which fiber end A did not touch the sample.Figure 12(c) shows the state where fiber end A just touched the sample, and fiber end A was close to fiber end B, and Fig. 12(d) shows the state where the very narrow gap between the fiber ends was filled by the sample. 
Figure 13(a) and (b) show the insertion and return loss results at wavelengths of 1.31 and 1.55 μm, respectively.When fiber end A did not touch the sample, the insertion loss always exceeded 20 dB.Moreover, the return losses were constant at 15 dB.When fiber end A just touched the sample, the insertion losses decreased to 2.5 and 2.3 dB, and the return losses increased to 51.7 and 48.6 dB at wavelengths of 1.31 and 1.55 μm, respectively.In addition, Current Developments in Optical Fiber Technology when fiber end A was close to fiber end B and the very narrow gap between fiber ends was filled by the sample, both insertion losses decreased to around 0.1 dB, and the return losses were 53.4 and 45.5 dB at wavelengths of 1.31 and 1.55 μm, respectively.These experimental results were consistent with the target values based on the design.If there is a defective connection that has a wide gap, the insertion loss can always be extremely high.In this case, communication services may be immobilized.With the connection method, engineers can detect the defective connection immediately after installation. Consequently, a new connection method using solid refractive index matching material is proposed as a countermeasure against faults caused by a wide gap between fiber ends.This connection method can provide insertion losses of more than 20 dB or less than 2 dB, respectively, when the gap between the fiber ends is more or less than 120 μm. Fiber optic fabry-perot interferometer based sensor Field installable connections that have incorrectly cleaved fiber ends might lead to insertion losses of more than 40 dB induced by temperature changes, which may eventually result in faults in the optical networks.Therefore, it is important to use correctly cleaved fiber ends to prevent network failures caused by improper optical fiber connections.This means that we need a technique for inspecting cleaved optical fiber ends. Cleaved optical fiber ends are usually inspected before fusion splicing with a CCD camera and a video monitor installed in fusion splice machines [22].On the other hand, cleaved optical fiber ends are not usually inspected when mechanical splices and field installable connectors are assembled.These connections are easy to assemble and does not require electric power. Therefore, an inspection method is needed for these connections.A fiber optic Fabry-Perot interferometer based sensor for inspecting cleaved optical fiber ends has been proposed [23][24]. The basic concept of the proposed sensor for inspecting cleaved optical fiber ends is shown in Fig. 14. Figure 14(a) and (b), respectively, show fiber connections in which a fiber with a flattened end for detection is used in the inspection of incorrectly cleaved (uneven) and correctly cleaved (flat) fiber ends.The ratio of the reflected light power (P r or P r ') to the incident light power (P i or P i ') within each connection is measured.Two optical fibers are connected with an air gap S remaining between them.Misalignments of the offset and tilt between the fibers and the mode field mismatch are not taken into account.Under both conditions, Fresnel reflections occur at the fiber ends because of refractive discontinuity.In Fig. 14(a), the reflected light from the uneven end spreads in every direction, and the back-reflection efficiency ratio, P r /P i , is determined using the Fresnel reflection at the fiber end for detection in air.The Fresnel reflection R 0 is defined by the following equation. 
Here n 1 and n denote the refractive indices of the fiber core and air, respectively. In Fig. 14(b), some of the incident light is multiply reflected in the gap.The phase of the multiply reflected light changes whenever it is reflected, which interferes with the back-reflected light at the optical fiber connection.These multiple reflections between fiber ends are considered to behave like a Fabry-Perot interferometer [25][26][27].Two flat fiber ends make up a Fabry-Perot interferometer.Based on the model, the returned efficiency R (= P r '/P i ') is defined by the following equation.The return losses in dB are derived by multiplying the log of the reflection functions by -10. Here S and λ denote the gap size and wavelength, respectively.According to Equation ( 2), the return loss depends on S and λ. Figure 15 shows the calculated return losses from the uneven (incorrectly cleaved) and flat (correctly cleaved) fiber ends.The dashed and solid lines in the figure represent the calculations for the uneven and flat ends based on Equations ( 1) and ( 2), respectively.Here, the refractive indices n 1 and n were 1.454 and 1.0, and the gap size used for Equation ( 2) was 10 μm.The return losses of the uneven end were independent of wavelength and had a constant value of 14.7 dB.The return losses of the flat end varied greatly and periodically and resulted in a worst value of ~8.7 dB because of the Fabry-Perot interference. The return loss values at wavelengths of 1.31 and 1.55 μm were 11.2 and 18.9 dB, respectively.Even if the gap size and wavelength period were changed, the return losses varied as greatly as the values at a 10-μm gap [28].These results indicate that an inspected fiber end can be considered uneven or flat depending on whether or not the measured return losses from the fiber end at two wavelengths are both ~14.7 dB.The dimensions of the fabricated fiber stage are 90 x 100 x 110 mm, which is small enough to be portable in the field.It is also suitable for operation in an outside environment because it does not require a power source.Manual driving was adopted for moving the fiber ends.Two V-grooves for the alignment of two fiber ends were used to create a Fabry-Perot interferometer.These two V-grooves were originally one V-groove that was divided into two.By using the same V-groove for alignment, any tilting of the two fibers along their Z-axes can be reduced. The X-and Y-axes for the scanning direction of the fiber ends were chosen from several alternatives, the direction of the radius, spirally, or with one stroke, due to the streamlining of the fiber stage mechanism.The minimum distance the V-groove can move was designed to be 10 μm along both the X-and Y-axes.The stage was designed to move along both the X-and Y-axes to a maximum distance of 250 μm to cover the entire end of 125-μm-diameter fibers. Current Developments in Optical Fiber Technology Two levers are provided for manually operating only the left V-groove.The left lever moves the left V-groove along the Y-axis at 10 μm per pitch up to a maximum distance of 250 μm.Similarly, the right lever moves the left V-groove along the X-axis at 10 μm per pitch up to a maximum distance of 250 μm.In the experiments, the gap between the fiber for detection and the fiber under test was set at 40 μm, and each scanning distance was set at 10 μm.Typical experimental results are shown in Fig. 
18.In the figure, (a) and (c) show the flat parts of the inspected fiber end found using the proposed inspection sensor and (b) and (d) show SEM images of the flat end.The fiber ends seen in Fig. 18(a) and (c) were found to be correctly and incorrectly cleaved, respectively.The experimental image with a correctly cleaved fiber end shows that the flat parts form a circle with a diameter of about 140 μm, which is slightly larger than the actual 125-μm-diameter fiber end.This is because the mode field area of light may radiate from the fiber end for detection.In contrast, the experimental results for the incorrectly cleaved fiber end show that half the fiber end parts are flat and half are uneven.The results obtained with the proposed inspection method and those obtained by SEM observation are in good agreement. The above results show that the proposed sensor made it possible to determine accurately whether the fiber ends were correctly or incorrectly cleaved for all the samples examined.Since the proposed sensor for cleaved optical fiber ends is based on the Fabry-Perot interferometer and a new fiber stage, it allows us to determine whether 10 x 10 μm areas of a cleaved optical fiber end are flat or uneven.The measured results of the inspected flat and uneven fiber ends were in good agreement with those obtained using an SEM. Simple tool for inspecting optical fiber ends The conventional inspection method for a cleaved fiber end involves checking it regularly (about once a week) to ensure good cleaving quality by using a CCD camera and the video monitor of a fusion splicer.If the cleaved fiber end is imperfect, first the fiber cleaver blade is replaced.If no improvements result from this countermeasure, the fiber cleaver itself must be repaired by the manufacturer.In contrast, the conventional inspection method for optical fiber connector endfaces is to check the surface before connecting the mated connector.This method uses a CCD camera and the video monitor of a specialized piece of inspection equipment [29].If the connector endface is contaminated, it must be cleaned with a special cleaner.These methods using a CCD camera and a video monitor are expensive and unsuitable for use with straightforward fiber connections in the field.Therefore, a simple and economical inspection tool for cleaved fiber ends and connector endfaces suitable for use in the field have been proposed [30][31].There are three important requirements for an inspection tool, namely it must provide a clear view, be portable, and easy to operate.We took these requirements into consideration when Current Developments in Optical Fiber Technology developing the tool.For the clear view requirement, the fiber ends or connector endfaces under test should be viewable with both the naked eye and a camera.Naked-eye inspection is easily applicable and effective during fiber end preparation and assembly procedures.Camera inspection is effective because it allows us to photograph an inspected cleaved fiber end or connector endface.To meet the portability requirement, the tool must be compact and easy to carry to any location including aerial sites.For the ease of operation requirement, the tool should not require any focal adjustment of a microscope, and the tool must be as easy as possible to handle to prevent the need for complex operations in the field. 
Several concrete specifications were determined on the basis of these requirements, as listed in Table 1.The tool must be small enough to carry with one hand.Its total weight should be less than 500 g.It should include a microscope that has a lens with a magnification power of a few hundred times.The target fiber is a 125-μm bare/250-μm coated fiber, which is placed in the FA holder used in field installable connectors or a holder for mechanical splicing.The target connectors are SC, MU, FA, and FAS connectors.The tool uses a cell phone equipped with a CCD camera and small video monitor.This enables the inspected fiber end to be photographed and sent over a cell phone network.LED light sources are used to allow visibility in dark places.A rechargeable battery is used for the LED light sources. Table 1. Specifications of new inspection tool The tool is designed to inspect both cleaved fiber ends and connector endfaces.Schematic views of the inspection method for a cleaved fiber end and an optical connector endace are shown in Fig. 19(a) to (c).The fundamental optical microscope system for the tool is shown in Fig. 19(a).The microscope system is composed of an objective lens, an eyepiece lens for a cell phone camera or a naked eye, a sample that can be inspected, and an LED light source.Their components must be arranged in a line at designated lengths.In this figure, S ob , L a , S ey , f ob , and f ey indicate the distance from the objective lens to the object point, the distance between the objective and eyepiece lenses, the distance from the eyepiece lens to the viewpoint for a cell phone camera or the naked eye, the focal distance of the objective lens, and the focal distance of the eyepiece lens, respectively.Here, S ob is designed to be slightly larger than f ob , and S ey is designed to be slightly larger than f ey .The figure also shows the light path.An LED light source emitting an almost parallel light beam, is used in this microscope system.After passing through the inspected sample, the light is focused at the back focal plane of the objective lens.It then proceeds to and is magnified by the eyepiece lens before passing into a cell phone camera or a naked eye.The magnified image of the inspected sample can be observed with the cell phone monitor or with the naked eye by using appropriate lenses and by designating appropriate distances; S ob , L a , and S ey .With normal optical microscopes, the inspected sample is placed on the stage and must be adjusted to S ob and aligned at the object point while L a and S ey are designated as constants.By contrast, with this microscope system, the inspected sample, which in placed in a special holder, can always be positioned at the object point without active alignment, i.e., without focal length adjustment.This special holder is described in detail in the following section.For the cleaved fiber inspection shown in Fig. 19(b), the side of the cleaved fiber end is designed to be viewed through the objective lens of the microscope system with the use of the LED light source.The distance between the fiber end and the objective lens a is designed to be equal to S ob .The fiber end, LED, and lens are designed to align passively and to set at each designated distance and not require focal adjustment.However, for the optical-fiber connector inspection in Fig. 
19(c), the endface of the connector is designed to be viewed through the objective lens by using a half-mirror and another LED light source.The summation of the distance between the connector endface and the half-mirror b and that between the half-mirror and the object lens c is designed to be equal to S ob .The connector end, LED, half-mirror, and lens are also designed to align passively and to set at each designated distance and not require focal adjustment. On the basis of the described specifications and design, we developed a simple, mobile and cost-effective tool.The outer components of the proposed inspection tool and the internal makeup of the optical microscope system are shown in Fig. 20.It is composed of three main parts: a body that includes a microscope that has objective and eyepiece lenses and LED light sources, a cell phone and its attachment, and special holders for cleaved fiber ends or connector endfaces.The cell phone is equipped with a CCD camera and a small video monitor.This inspection tool is simple and light, and weighs about 500 g including the cell phone.The optical microscope system is also shown in this figure.The eyepiece lens, objective lens, and object point of the cleaved fiber end are aligned in the body of the tool.The two LEDs for the cleaved fiber end and connector endface are also installed in the body.The half-mirror is aligned in the special holder for the connector endface.The inspection procedure is as follows. i. The cleaved optical fiber or the optical connector to be inspected is placed in the appropriate special holder. ii. The special holder is installed at the center of the body. iii. The attachment with the cell phone is installed on top of the body. The special holders and body are designed to automatically align the inspected cleaved fiber end or connector end at each of the object points after step (ii).The attachment for a cell phone is also designed to automatically align the camera in the cell phone at the viewpoint after step (iii).This structure and procedure result in the inspection tool not requiring focal adjustment. 
Current Developments in Optical Fiber Technology The fiber ends or connector endfaces under test can be viewed through the top of the body (step ii) using the cell phone monitor (step iii).The conventional fiber end preparation procedure for an FAS connector has six steps: (1) cut the support wire of the dropped cable, (2) strip the cable coating, (3) place the fiber in the FA holder, (4) strip the fiber coating, (5) clean the stripped fiber (bare fiber) with alcohol, and ( 6) cut the bare optical fiber with a fiber cleaver.The assembly procedure comprises the next three steps: (7) insert the properly prepared bare optical fiber into the mechanical splice part in the FAS connector, (8) join it to the built-in optical fiber, and (9) fix the position of the bare optical fiber.The inspection procedure for the proposed inspection tool (i)-(iii) for a cleaved fiber end can be conducted between the fiber end preparation and assembly procedures, i.e., between steps (6) and (7).This indicates that the proposed inspection tool can work well with the conventional fiber end preparation and assembly procedures.For conventional FAS connector procedures, the fiber end preparation and assembly procedures take 72 and 28% of the total installation time, respectively.With the proposed tool, inspection took 11% longer than with the conventional procedure.These results indicate that using the inspection tool may result in a slight increase of 11% in operation time compared with that required with conventional fiber end preparation and assembly procedures. The fabricated inspection tool is compact, highly portable, and can inspect a fiber and clearly determine whether it has been cleaved correctly and whether contamination or scratches can be found on the connector endfaces.Thus, this tool will be highly practical for field use. Conclusion This chapter reported example faults and novel countermeasures with optical fiber connectors and mechanical splices in FTTH networks. After a brief introduction (section 1), section 2 described the FTTH network and optical fiber connectors and mechanical splices used in Japan, and section 3 reported example faults with these optical connections in FTTH networks.First, the faults with fiber connection using refractive-index matching material were reported in section 3.1.There are two major causes of these faults: one is a wide gap between fiber ends and the other is incorrectly cleaved fiber ends.Next, faults with fiber connection using physical contact were explained in section 3.2.This fault might occur when the connector endfaces are contaminated.The characteristics of these faults were outlined. Novel countermeasures against these above-mentioned faults were introduced in section 4. In section 4.1, a new connection method using solid refractive index matching material was proposed as a countermeasure against faults caused by the wide gap between fiber ends.This connection method can provide an insertion loss of more than 20 dB or less than 2 dB when the gap between the fiber ends is wider or narrower than 120 μm, respectively.If there is a defective connection that has a wide gap, the insertion loss will always be extremely high.In such cases, communication services may be immobilized.With the connection method, engineers undertaking detection work can notice the defective connection immediately after installation. 
In section 4.2, a fiber optic Fabry-Perot interferometer-based sensor was introduced as a countermeasure for detecting faults caused by incorrectly cleaved fiber ends.The sensor mainly uses laser diodes, an optical power meter, a 3-dB coupler, and an XY lateral adjustment fiber stage.Experimentally obtained fiber end images were in good agreement with scanning electron microscope observation images of incorrectly cleaved fiber ends. In section 4.3, a novel tool for inspecting optical fiber ends was proposed as a countermeasure for detecting faults caused both by incorrectly cleaved fiber ends and by contaminated connector endfaces.The proposed tool has a simple structure and does not require focal adjustment.It can be used to inspect a fiber and clearly determine whether it has been cleaved correctly and whether contamination or scratches are present on the connector endfaces.The tool requires a slight increase of 11% in operation time compared with conventional fiber end preparation and assembly procedures.The proposed tool provides a simple and cost-effective way of inspecting cleaved fiber ends and connector endfaces and is suitable for field use. These results support the practical use of optical fiber connections in the construction and operation of optical network systems such as FTTH. Figure 1 . Figure 1.Various faults and their countermeasures dealt with in this chapter Figure 2 . Figure 2. Typical FTTH network and various fiber connections Figure 3 ( Figure3(a) shows the basic structure of a PC-type connector,3(b) shows that of a mechanical splice and 3(c) shows that of a field installable connector.With PC-type connectors, two ferrules are aligned in an alignment sleeve and connected using compressive force.Normally, two fiber ends in ferrules are connected without a gap and without offset or tilt misalignment.A mechanical splice is suitable for joining optical fibers simply in the field.It consists of a base with a V-groove guide, three coupling plates, and a clamp spring.When a wedge is inserted between the plates and the base, optical fibers can be inserted though the V-groove guide to connect and fix them in position by releasing the wedge between the plates and base[11].Refractive index matching material is used to reduce Fresnel reflection.This connection procedure requires no electricity. Figure 3 . Figure 3. Basic structures of physical contact type connector, (b) mechanical splice, and (c) field installable connector Figure 4 . Figure 4. Optical fiber end preparation procedure Figure 5 . Figure 5. Fiber connection models using refractive index matching material: (a) normal connection with narrow gap between flat fiber ends, (b) abnormal connection with wide gap between flat fiber ends, and (c) abnormal connection with an incorrectly cleaved (uneven) fiber end Figure 6 . Figure 6.Heat-cycle test results for fiber connection with wide gap between flat fiber ends Figure 7 . Figure 7. Scatter diagrams of results from heat cycle test for fiber connection with an incorrectly cleaved (uneven) fiber end Figure 8 . Figure 8. Abnormal connection states for PC type connector with contamination Figure 9 . Figure 9. Calculated insertion loss, (A) cover ratio of contamination to fiber core, (B) caused by air-filled gap, (C) caused by tilt, and (D) caused by offset Figure 10 . Figure 10.Examples of contamination on connector endface, (a) uncontaminated connector endface, and (b-e) different contaminations on connector endface Figure 11 . 
Figure 11.Proposed connection method: (a) fiber ends do not touch matching material, and (b) fiber ends touch matching material Figure 12 . Figure 12.Composition of experimental conditions: (a) V-grooved substrate and sample, (b) fiber A does not touch sample, (c) fiber A just touches sample, and (d) fiber A is close to fiber B and very narrow gap is filled with sample Figure 13 .Figure 14 . Figure 13.Results of (a) insertion loss and (b) return loss Figure 15 . Figure 15.Return losses from uneven (dashed line) and flat (solid line) fiber ends Figure 16 . Figure 16.Experimental setup including fiber stageBased on the above principle, we have designed the inspection sensor shown in Fig.16.This sensor is composed of two light sources emitting at different wavelengths, an optical power meter, an optical coupler, and a fiber stage.In this proposed sensor, one light source is turned on and the other is turned off.The return loss values are measured separately at two wavelengths.The fiber stage is the most important component because a Fabry-Perot interferometer must be created in it by the fiber for detection and the fiber under test.The other equipment can be adapted from commercially available devices.Therefore, we fabricated a new fiber stage with the following characteristics to implement the proposed technique, as shown in Fig.17.The dimensions of the fabricated fiber stage are 90 x 100 x 110 mm, which is small enough to be portable in the field.It is also suitable for operation in an outside environment because it does not require a power source.Manual driving was adopted for moving the fiber ends.Two V-grooves for the alignment of two fiber ends were used to create a Fabry-Perot interferometer.These two V-grooves were originally one V-groove that was divided into two.By using the same V-groove for alignment, any tilting of the two fibers along their Z-axes can be reduced.The X-and Y-axes for the scanning direction of the fiber ends were chosen from several alternatives, the direction of the radius, spirally, or with one stroke, due to the streamlining of the fiber stage mechanism.The minimum distance the V-groove can move was designed to be 10 μm along both the X-and Y-axes.The stage was designed to move along both the X-and Y-axes to a maximum distance of 250 μm to cover the entire end of 125-μm-diameter fibers. . Figure 18 . Figure 18.Experimental results of correctly cleaved fiber end: (a) result with proposed sensor and (b) result of SEM observation, and experimental results for incorrectly cleaved fiber end: (c) result with proposed sensor and (d) result of SEM observation Figure 19 . Figure 19.Basic concept of inspection method with developed tool: (a) fundamental optical microscope system and inspecting (b) cleaved fiber end (side) and (c) connector endface (front) Figure 20 .Figure 21 . Figure 20.Outer components of fabricated inspection tool and internal makeup of optical microscope system
10,915.8
2013-06-13T00:00:00.000
[ "Engineering", "Environmental Science" ]
Mitigating the Effects of Mobility and Synchronization Error in OFDM Based Cooperative Communication Systems An Orthogonal Frequency Division Multiplexing based mobile wireless network with a sender, a destination and a third station acting as a cooperating node is modelled and analyzed. The length of cyclic prefix in the orthogonal frequency division multiplexed symbols is made to vary depending on the channel conditions and maximum likelihood estimator is used at the receiver in order to compensate for the carrier frequency offset that occurs during transmission. Simulation results show that maximum likelihood estimator has better performance than self-cancellation estimations. The channels between the source, the cooperating node and the destination are modelled containing thermal noise, Rayleigh fading, Rician fading and path loss. Amplify-and-Forward cooperation protocol is used at the cooperating node when the system is in cooperation mode. For a relatively short distance between the cooperating nodes, when compared to the distance between them and the base station, amplify and forward cooperation protocol has a better performance than decode-and forward protocol, unless an error correcting code is simulated. The cooperating node turns its cooperation mode switch ON or OFF depending on the channel state between the source and the cooperating nodes.  The performance of different combination protocols at the receiver is simulated and maximum ratio combining is found to have better performance. However, for immobile wireless sensor networks Extended SNR (ESNR) combiner has also better performance. The system has also showed that with any kind of combination protocol at the receiver it is possible to achieve second order diversity when there is only one cooperating node in the system. 
 Abstract-An Orthogonal Frequency Division Multiplexing based mobile wireless network with a sender, a destination and a third station acting as a cooperating node is modelled and analyzed.The length of cyclic prefix in the orthogonal frequency division multiplexed symbols is made to vary depending on the channel conditions and maximum likelihood estimator is used at the receiver in order to compensate for the carrier frequency offset that occurs during transmission.Simulation results show that maximum likelihood estimator has better performance than self-cancellation estimations.The channels between the source, the cooperating node and the destination are modelled containing thermal noise, Rayleigh fading, Rician fading and path loss.Amplify-and-Forward cooperation protocol is used at the cooperating node when the system is in cooperation mode.For a relatively short distance between the cooperating nodes, when compared to the distance between them and the base station, amplify and forward cooperation protocol has a better performance than decode-and forward protocol, unless an error correcting code is simulated.The cooperating node turns its cooperation mode switch ON or OFF depending on the channel state between the source and the cooperating nodes.The performance of different combination protocols at the receiver is simulated and maximum ratio combining is found to have better performance.However, for immobile wireless sensor networks Extended SNR (ESNR) combiner has also better performance.The system has also showed that with any kind of combination protocol at the receiver it is possible to achieve second order diversity when there is only one cooperating node in the system.Keywords-OFDM, CFO, ICI, cyclic prefix, cooperative communication, BER, SNR, maximum ratio combiner, amplifyand forward, decode-and-forward, subcarriers, maximum likelihood. I. INTRODUCTION During the last two decades, the wireless communications have experienced a huge growth in both capacity and variety.This growth has been possible to achieve due to some advancement and new discoveries of communication technologies, techniques and protocols.The mere motivation that stimulated intense interest to the advancement of Manuscript received September 10, 2014, revised October 26, 2014.Yetera Bereket, Department of Electrical Engineering, Pan African University of Science Technology and Innovation(Correspondence: berryboyy@yahoo.com)K. Langat, Department of Telecommunication and Information Systems, Jomo Kenyatta University of Agriculture and Technology Edward K. Ndungu, Department of Telecommunication and Information Systems, Jomo Kenyatta University of Agriculture and Technology existing technologies in wireless communication systems is an increased demand for services that need higher data rate and higher capacity.It has been seen and still expected that the wireless communication systems of the near future will require data rates up to few hundreds of mega bits per second (Mbps), which are able to deliver bandwidth hungry applications such as online gaming, virtual class room, and video streaming.The required data rate of the next generation wireless communication systems will be achieved by efficiently increasing the amount of the allocated bandwidth and using more advanced technologies, both in hardware and software. 
One of the major themes in today's broadband systems is the use of orthogonal frequency division multiplexing (OFDM).Orthogonal frequency division multiplexing is a modulation scheme suitable for frequency selective channels and for providing high speed data transmission, which makes it one of the promising solutions for the next generation wireless communications.OFDM mitigates the effect of multipath channel by essentially dividing the source spectrum into many narrow sub-bands that are transmitted simultaneously.In OFDM system, the source bit-stream to be transmitted over the air link is split into N parallel streams, which are later going to be modulated using N subcarriers.Because of using many sub-carriers, the symbol duration T s becomes N times larger.This reduces and even totally averts the effect of inter symbol interference (ISI) in multipath channels, and thereby reduces the equalization complexity.However, there is a need for more developments of OFDM systems in terms of complexity reduction and adaptation, therefore reconfigurable solutions are needed to achieve the user requirements.This is necessary because the end users require lightweight, compact size and power efficient devices besides the high bit rate capabilities. Furthermore, combining OFDM transmission technique with the new techniques such as multiple-input-multipleoutput (MIMO) or cooperative communication can also enhance the capacity and the bit rate of the emerging wireless communication systems.Also MIMO transmissions have been extensively studied as a means to improve spectral efficiency in wireless networks.While MIMO techniques offer tremendous advantages, its performance strongly depends on the number of antenna elements, spatial fading correlations between antennas, the presence of line of sight component, etc. Especially, multiple antennas at small handsets/cellular phones are unattractive for the achievement of transmit/receive diversity due to the limitation on size, power, hardware and price.The advantages of MIMO techniques can be achieved via cooperative communication.However, carrier frequency offset plays an important role when OFDM is integrated into cooperative communication systems [1].Carrier frequency offset (CFO) arises either due to mobility that results in different Doppler shifts for each relay, or due to oscillator instabilities that result in slightly different carrier frequencies for every relay.This CFO leads to inter carrier interference, which is the leakage of signal power, in any OFDM based communication system. 
In the context of addressing CFO issues surrounding cooperative communication systems based on OFDM, much of the work to date has focused on cooperative relaying techniques that utilize orthogonal space-time codes.For instance, in [1], [2] the authors design appropriate receivers to handle frequency offsets.In [1], the authors develop a frequency synchronization algorithm that exploits the structure of the cooperation protocol.In [2], the authors utilize a long cyclic prefix to mitigate the impact of carrier frequency offsets.This technique reduces the transmission bit rate.One mechanism of simplifying the receiver and transmitter design is to assign orthogonal subcarriers to each relay.The design of frequency offset estimation algorithms were considered in [3] from a multiuser perspective whereby each user is assigned a unique set of subcarriers.The other widely known carrier frequency offset mitigation technique is the self-cancellation (SC) technique [4].The main idea in self-cancellation is to modulate one data symbol onto a group of subcarriers with predefined weighting coefficients to minimize the average carrier to interference ratio (CIR).This is the main drawback of this method because it utilizes half of the available subcarriers for CFO estimation and hence, inter carrier interference reduction. Cooperative communication networks [5, 6 and references therein] are created via the help of cooperating terminals which are willing to help the communication of any sourcedestination pair.End to end spectral efficiency of a wireless network can be increased with the aid of cooperative strategies.The concept of node cooperation brings a new form of diversity.Transmit and/or receive diversity can be achieved even with single antenna terminals [7].By this way, the need for costly multiple transceiver circuitry diminishes.Furthermore, spatial fading correlation of a cooperative diversity scheme is expected to be much less than spatial fading correlation of multi-antenna arrays colocated at a terminal. In a cooperative communication system, each wireless user is assumed to transmit data as well as act as a cooperative agent for another user.One of the key components in such a cooperative relay network is a forwarding method used by a relay terminal.Amplify-and-Forward (AAF) and Decode-and-Forward (DAF) are the main forwarding protocols which can be used in the cooperative relay networks [7].A relay terminal using the AAF scheme amplifies and forwards the signal received from its immediate predecessor (the source node) in the network.A relay terminal using the DAF protocol decodes, re-encodes and forwards the signal received from its immediate predecessor in the network.The use of either AAF or DAF at a relay terminal achieves different performance results under given Signal to Noise Ratio (SNR) conditions.But when the cooperating node gets the line-of-sight signal from the source node, AAF cooperation protocol is preferable than the DAF protocol because of its simplicity and it also doesn't incur system losses in terms of introducing processing delay. 
With an intent to integrate the benefits of both cooperation as well as OFDM, OFDM based cooperative communication networks have been intensely investigated [8]- [13].In [8], [9] Gui and Cimini Jr devise bit loading algorithms for cooperative OFDM systems with decode-andforward (DF) cooperation protocol, considering a single source-destination pair and multiple cooperating nodes.In [10], the authors proposed an OFDM-based selective relaying scheme in a multihop cooperative network, where the relay selection at each hop is performed on a persubcarrier basis and joint selection is adopted at the last two hops.In [11], Jamshidi et al. derive exact expression and tight lower bound for the outage probability of spacefrequency coded cooperative OFDM system.Effect of carrier frequency offsets on the relay-to-destination links in cooperative OFD is investigated in [12].In [13], interference mitigation techniques to alleviate the effect of inter-symbol interference (ISI) and inter-carrier interference (ICI) caused due to frequency selectivity of the channel and violation of 'quasi-static' assumption in space-frequency block coded cooperative OFDM are presented and analyzed for amplifyand-forward (AAF) and decode-and-forward (DAF) cooperation protocols. In this paper an OFDM based cooperative communication system with AAF protocol is developed.In order to completely avert the effect of ISI from occurring some redundant information are added into the OFDM symbols before transmission.This added redundant information is called cyclic prefix (CP).Maximum likelihood (ML) estimation is used to estimate the CFO and compensate for its effect that occurs due to the Doppler shift and transmitterreceiver carrier frequency offset.ML estimation is compared with the self-cancellation (SC) estimation series and is found to be better in many aspects.The cooperating node is assumed to get line-of-sight signal from the source and hence, the channel between the cooperating nodes is assumed to have Rician fading characteristics.The channels between the source node and the receiver and the cooperating node and the receiver as well are assumed to be frequency selective and Rayleigh fading.We also assumed the cooperating node to be only a few meters apart (up to 10 meters) from the source node.This is due to the fact that the cooperating nodes need to be a few wavelengths apart in order to create a virtual MIMO system through spatial diversity.In most cases, if the distance between the source and cooperating nodes is longer than 10 meter the level of the noise added to the transmitted signal will become pronounced and eventually surpass the threshold value.In addition to this, by limiting the distance between the two cooperating nodes to 10 meter, the channel characteristics of source node to destination and cooperating node to destination will be nearly the same.Hence, the carrier frequency offset estimated at the receiver upon the arrival of a signal from the source node will be used to compensate for the effects of CFO on both channels, the channel between the source node and the destination and the channel between the cooperating node and the destination.Meaning, there is no need to estimate the CFO that occurs over the cooperating node to destination channel.This minimizes computational time and complexity at the receiver side. 
Our contributions are three-fold. First, we show through simulation results that cooperative communication systems with the amplify-and-forward cooperation protocol are not affected by the carrier frequency offset of the relaying nodes when the distance between the cooperating nodes is limited to about 10 meters. That is, the final received signal at the destination is affected only by the carrier offset between the source and destination, much like a relay-less system. This is a significant finding: from the point of view of the destination, a receiver built for conventional multiple-antenna transmissions can be used without employing a multiuser-like front end to handle non-coherent transmissions from multiple nodes.

The second contribution is that, when the distance between the cooperating nodes is limited to the 10-meter range, there is no need to deal with the issues of imperfect timing synchronization. Hence, in these types of cooperative communication systems, perfect timing synchronization can be assumed between the cooperating nodes. And, since the cooperating node is located far enough from the source node, there is no spatial fading correlation between the channels from the source to the destination and from the cooperating node to the destination. These assumptions reduce the computational complexity at the cooperating node and at the intended destination.

Lastly, we analyze and build a fully functional amplify-and-forward cooperative communication system based on an OFDM transmission technique. We introduce a new threshold value at the cooperating node, based on which the cooperating node decides whether to cooperate with the source node. In most OFDM cooperative networks to date, the cooperating node amplifies and retransmits the received signal together with the noise added to it; in such cases the original message signal can be overwhelmed by noise by the time it reaches the intended destination and becomes unusable. Through the simulation results, we also show how the performance of such systems improves when the cooperating node receives the line-of-sight signal from the source node.

Overall, the solutions developed in this paper to mitigate the effects of CFO in a cooperative OFDM system improve system performance in terms of bit error rate (BER) versus signal-to-noise ratio (SNR). To validate the achieved results, Section VI compares the performance of the system developed in this paper with the widely known and used cooperative OFDM system with a self-cancellation CFO estimator.
II. SIGNAL MODEL

The signal to be transmitted over the wireless channel must first be converted into OFDM symbols. In an OFDM system with N subcarriers, N information symbols are used to construct one OFDM symbol: each of the N symbols modulates a subcarrier, and the N modulated subcarriers are added together to form the OFDM symbol. Orthogonality among subcarriers is achieved by carefully selecting the carrier frequencies such that each OFDM symbol interval contains an integer number of periods for all subcarriers. Using a discrete-time baseband signal model, one of the most commonly used schemes is the IDFT-DFT based OFDM system [14]. A guard time, cyclically extended to maintain inter-carrier orthogonality, is inserted and is assumed longer than the maximum delay spread so as to totally eliminate inter-symbol interference [15]. In the presence of virtual carriers, only M out of the N carriers are used to modulate information symbols. Without loss of generality, we assume that the first M carriers carry information symbols, while the last N − M carriers are virtual carriers. With symbol-rate sampling, the discrete-time OFDM model is

x(n) = (1/√N) Σ_{k=0}^{M−1} d_k e^{j2πnk/N}, n = 0, 1, …, N − 1,

where each d_k is used to modulate the subcarrier e^{j2πnk/N}. Written in matrix form, we have

x = W d,

where W consists of the first M columns of the IDFT matrix and d = [d_0, …, d_{M−1}]^T is the symbol vector. In the presence of a time-dispersive channel, additive noise, and a carrier frequency offset Δω, the OFDM signal at the receiver is

y(n) = e^{jΔω n T_s} (1/√N) Σ_{k=0}^{M−1} H(k) d_k e^{j2πnk/N} + z(n),

where H(k) is the channel frequency response corresponding to subcarrier k, z(n) is additive complex Gaussian noise, and T_s = T/N is the sample interval, with T being the IDFT interval (or OFDM symbol interval, excluding the guard time, as often termed in the literature). Here the initial phase due to the frequency offset is assumed to be zero (equivalently, the initial phase can be absorbed into H(k)). Notice that if we define φ = Δω·T_s, then φ and the frequency offset Δω differ only by a constant scalar; hence estimation of Δω is equivalent to estimation of the normalized phase shift φ.

III. OVERALL SYSTEM MODEL

Consider a three-node OFDM-based cooperative communication network with a source node (SN), a cooperating node (CN) and a receiving node (RN). The SN broadcasts its signal over the fading wireless communication channel, and both the CN and the RN receive the signal. The CN amplifies and re-transmits the received signal depending on the state of the link between itself and the SN, and provided its cooperation switch is in the ON state by the time the signal arrives. It is assumed that the source and cooperating nodes are always near each other and that the cooperating node receives the LOS signal from the source node; hence, the channel between the source node and the cooperating node is assumed to have Rician fading characteristics. The channels between the source node and the receiving node and between the cooperating node and the receiving node are assumed to be frequency selective and Rayleigh fading.
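As an illustration of the signal model above, the following Python/NumPy sketch builds one OFDM symbol x = W d from M data symbols and applies a carrier frequency offset as the phase ramp e^{jΔω n T_s}; all numerical values and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
N = 64            # IDFT size / total subcarriers
M = 52            # used subcarriers; the last N - M are virtual carriers
Ts = 1 / 20e6     # sample interval T_s = T / N for a 20 MHz sampling rate

# QPSK data symbols d_k on the first M subcarriers
d = (np.random.choice([-1, 1], M) + 1j * np.random.choice([-1, 1], M)) / np.sqrt(2)

# x = W d, where W holds the first M columns of the IDFT matrix
n_idx = np.arange(N)
W = np.exp(2j * np.pi * np.outer(n_idx, np.arange(M)) / N) / np.sqrt(N)
x = W @ d

# Receiver-side effect of a CFO of delta_f Hz: phase ramp e^{j 2*pi*delta_f*n*Ts}
delta_f = 300.0                                   # illustrative offset in Hz
y = x * np.exp(2j * np.pi * delta_f * Ts * n_idx)  # flat channel H(k) = 1 for simplicity
y += np.sqrt(0.005 / 2) * (np.random.randn(N) + 1j * np.random.randn(N))  # AWGN z(n)
```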
Consider N OFDM subcarriers. In the first time slot, the SN transmits one OFDM frame of duration (N + N_g)T_s, where T_s is one sample duration and N_g is the cyclic prefix (CP) length. The transmitted OFDM frame, consisting of data symbols X[k], k = 0, 1, …, N − 1, is given by

x[n] = (1/√N) Σ_{k=0}^{N−1} X[k] e^{j2πnk/N}, n = −N_g, …, N − 1.

Let h_sc[n], h_sr[n] and h_cr[n] denote the channel impulse responses (CIR) of the source-to-cooperating-node, source-to-receiving-node and cooperating-node-to-receiving-node links, respectively. The OFDM symbols received at the cooperating and receiving nodes during the first phase (time slot) are

y_sc[n] = x[n] ⊛ h_sc[n] + n_sc[n],
y_sr[n] = x[n] ⊛ h_sr[n] + n_sr[n],

where ⊛ indicates linear convolution, n_sc[n] is the white Gaussian noise at the cooperating node and n_sr[n] is the white Gaussian noise at the receiving node, both with variance N_o; y_sc[n] is the signal received at the cooperating node from the source node, and y_sr[n] is the signal received at the destination from the source node. During the second time slot (or cooperation phase), the cooperating node amplifies the received signal by a gain β and re-transmits it to the receiving node. Hence, the signal received at the receiving node in this second phase is

y_cr[n] = β y_sc[n] ⊛ h_cr[n] + n_cr[n],

where n_cr[n] is the white Gaussian noise on the cooperating-node-to-receiving-node link, with variance N_o. The magnitude of the amplifying factor is determined from the transmitted signal energy and the received signal energy. Let E_x denote the energy of the transmitted frame; then the energy of the signal received at the cooperating node is

E_sc = Σ_n |y_sc[n]|².

To re-transmit the data with the same power as the source node, the cooperating node needs to amplify the received signal by a factor of

β = √(E_x / E_sc).

The system developed in this paper supports cooperation if and only if the received signal at the cooperating node passes the threshold quality for re-transmission; otherwise, the cooperating node turns its cooperation-mode switch to the OFF state. Therefore, the signal received at the receiving node over the two time slots is

y_sr[n] = x[n] ⊛ h_sr[n] + n_sr[n],
y_cr[n] = β x[n] ⊛ h_sc[n] ⊛ h_cr[n] + β n_sc[n] ⊛ h_cr[n] + n_cr[n],     (11)

and the output on the k-th subcarrier is obtained by performing the discrete Fourier transform of equation (11).
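The cooperation-phase processing described above can be summarized in a short sketch; the helper below is hypothetical (the paper defines no code), the energy expressions follow the reconstruction given here, and the numerical cooperation threshold is a placeholder since its value is not stated in this section.

```python
import numpy as np

def aaf_relay(y_sc, E_x, h_cr, noise_var, energy_threshold):
    """Cooperation-phase processing at the cooperating node (illustrative sketch).

    y_sc: frame received from the source; E_x: energy of the transmitted frame;
    h_cr: relay-to-destination channel impulse response (known here only for
    simulation purposes); energy_threshold: placeholder cooperation threshold.
    """
    E_sc = np.sum(np.abs(y_sc) ** 2)            # energy of the received frame
    if E_sc < energy_threshold:                 # cooperation switch turned OFF
        return None
    beta = np.sqrt(E_x / E_sc)                  # re-transmit with the source power
    out_len = len(y_sc) + len(h_cr) - 1
    n_cr = np.sqrt(noise_var / 2) * (np.random.randn(out_len)
                                     + 1j * np.random.randn(out_len))
    return np.convolve(beta * y_sc, h_cr) + n_cr   # y_cr[n] observed at the destination
```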
The schematic diagram in Fig. 1 shows the overall system, followed by its description. The transmitting node first performs signal modulation whenever it has a signal to transmit, so the bit stream is converted into symbols. The source node then performs serial-to-parallel conversion; the number of parallel symbols must coincide with the number of available subcarriers. After the conversion, the IFFT is performed to obtain the OFDM symbol. The cyclic prefix (CP) is added last, before transmission, to completely avert inter-symbol interference (ISI) and to minimize inter-carrier interference (ICI) at the receiving end. The transmitter in the source node then converts the parallel data into serial data and transmits it over the air link to the cooperating node and the receiving node. In this first phase, the cooperating node amplifies the received signal for re-transmission if the received signal passes the threshold quality for re-transmission; the cooperating node decides to cooperate depending on the state of the channel between the source and cooperating nodes. In the second phase, the receiver receives another copy of the signal from the cooperating node and combines it with the signal received directly from the source node using a maximum ratio combiner (MRC). In this paper we have checked and tested different combiners under different conditions. After obtaining the combined signal from the direct path and the cooperating-node path, the process undertaken at the transmitting node is reversed to obtain the decoded data: the CP is removed to recover the data in the discrete time domain, which is then processed with the FFT for data recovery. Since the wireless channel is fading and introduces a Doppler shift, the ML estimator placed immediately after the FFT block is used to perform CFO estimation. Using the estimated CFO values, the ICI that occurs during transmission is compensated. Finally, the symbols pass through the demapper (demodulation) in order to regenerate the received bit stream.

IV. MAXIMUM LIKELIHOOD ESTIMATION FOR CFO AT THE RECEIVER

In this paper, the source node and the cooperating node are assumed to be separated by only a few meters (up to 10 meters), and the cooperating node receives the LOS signal from the source node. Hence, it is also assumed that the channel between the cooperating node and the receiver has the same characteristics as the channel between the source node and the receiver, except that a lower Doppler frequency is used in the simulations for the cooperating-node-to-receiver channel, since only a node with lower speed will be chosen for cooperation. Therefore, the CFO estimated at the receiver for the signal coming from the source node is used to compensate both the signal from the source node and the signal from the cooperating node; that is, there is no need to perform CFO estimation for the signal arriving at the receiver from the cooperating node.
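As a side note to the combining step described above, a per-subcarrier maximum ratio combiner can be sketched as follows; the function is illustrative and assumes equal noise power on the direct and relayed branches, whereas a strict MRC would additionally weight each branch by its noise variance (the relayed-branch noise is scaled by β and colored by h_cr).

```python
import numpy as np

def mrc_combine(Y_sr, Y_cr, H_sr, H_eq_cr):
    """Per-subcarrier maximum ratio combining of the direct and relayed branches.

    Y_sr, Y_cr: FFT outputs of the two received frames; H_sr: source-to-destination
    frequency response; H_eq_cr: equivalent response of the relayed path
    (beta * H_sc * H_cr in the model above). Equal branch noise power is assumed.
    """
    num = np.conj(H_sr) * Y_sr + np.conj(H_eq_cr) * Y_cr
    den = np.abs(H_sr) ** 2 + np.abs(H_eq_cr) ** 2
    return num / den
```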
The ML frequency-offset estimation method has been presented in several papers in slightly varying forms [16,17]. The training information required is at least two consecutive repeated symbols; the IEEE 802.11a preamble satisfies this requirement for both the short and long training sequences. Let the transmitted baseband signal be s_n; then the complex baseband model of the passband signal y_n is

y_n = s_n e^{j2π f_tx n T_s},

where f_tx is the transmitter carrier frequency and T_s is the sampling interval. After the receiver down-converts the signal with a carrier frequency f_rx, the received complex baseband signal r_n is

r_n = s_n e^{j2π Δf n T_s} + w_n,

where Δf = f_tx − f_rx is the carrier frequency offset and w_n is white Gaussian noise with variance N_o.

Let D denote the delay between the identical samples of the two repeated symbols. The frequency offset estimator is then developed as follows. The cross-correlation of the two consecutive symbols is computed as

z = Σ_{n=0}^{D−1} r_n* r_{n+D}.

Since w_n is additive white Gaussian noise with mean zero and variance σ², the above expression reduces (in expectation) to

z = e^{j2π Δf D T_s} Σ_{n=0}^{D−1} |s_n|².

Hence, the maximum likelihood estimate of the frequency offset is

Δf̂ = (1/(2π D T_s)) ∠( Σ_{n=0}^{D−1} r_n* r_{n+D} ).     (17)

The ML estimation of the frequency offset can also be derived after the discrete Fourier transform (DFT) processing, i.e., in the frequency domain. The received signal during the two repeated symbols is (ignoring noise for convenience)

r_n = (1/K) Σ_{k=0}^{K−1} X_k H_k e^{j2π n (k + f_r)/K},

where the X_k are the transmitted data symbols, H_k is the channel frequency response for the k-th subcarrier, K is the total number of subcarriers, and f_r is the frequency offset relative to the subcarrier spacing. The DFT of the first symbol for the k-th subcarrier is

R_{1,k} = Σ_{n=0}^{K−1} r_n e^{−j2π n k/K},

and the DFT of the second symbol is

R_{2,k} = Σ_{n=0}^{K−1} r_{n+D} e^{−j2π n k/K} = R_{1,k} e^{j2π f_r D/K}.

This shows that every subcarrier experiences the same phase shift, proportional to the frequency offset. The cross-correlation of the two subcarriers is obtained as

Σ_{k=0}^{K−1} R_{1,k}* R_{2,k} = e^{j2π f_r D/K} Σ_{k=0}^{K−1} |R_{1,k}|².

Thus, the frequency offset estimator governing equation is

f̂_r = (K/(2π D)) ∠( Σ_{k=0}^{K−1} R_{1,k}* R_{2,k} ),

which is quite similar in form to the time-domain version of the ML estimation in (17). This CFO estimator is used at the receiver to estimate the carrier frequency offset that occurs due to the Doppler shift and the frequency synchronization error between transmitter and receiver. The estimated CFO values are then used to compensate both the signal from the source node and the signal from the cooperating node at the receiving node.
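A direct implementation of the time-domain estimator in (17) is compact; the sketch below (hypothetical helper names) correlates the two repeated training symbols and converts the angle of the correlation into a frequency estimate, with the usual acquisition-range limitation |Δf| < 1/(2DT_s).

```python
import numpy as np

def ml_cfo_estimate(r, D, Ts):
    """Time-domain ML CFO estimate from two repeated training symbols, Eq. (17) style.

    r: received samples containing two identical symbols separated by D samples;
    Ts: sampling interval. Returns the estimated CFO in Hz.
    """
    z = np.sum(np.conj(r[:D]) * r[D:2 * D])    # cross-correlation of the repeated parts
    return np.angle(z) / (2 * np.pi * D * Ts)  # phase-to-frequency conversion

def cfo_compensate(r, f_hat, Ts):
    """Remove the estimated CFO by applying the opposite phase ramp."""
    return r * np.exp(-2j * np.pi * f_hat * Ts * np.arange(len(r)))
```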
V. SIMULATION RESULTS AND DISCUSSION

For the simulations, a quadrature amplitude modulation (QAM) scheme with M = 4 is chosen and the total number of subcarriers is set to 64. These parameters are chosen because current wireless communication systems, 3G and beyond, are based on them. The fading channel between the source node and the receiver and the fading channel between the cooperating node and the receiver are set to have six signal propagation paths, with Doppler frequencies of 100 Hz and 80 Hz, respectively. The 100 Hz Doppler shift between the source node and the base station represents a worst case for a wireless communication system. The reason we chose an 80 Hz Doppler frequency between the cooperating node and the base station is that the source node is always assumed to select a cooperating node with relatively lower speed. Figure 2 shows the transmitted and received signal. Figure 3 shows the performance of the system when there is no cooperation, i.e., no diversity. The broken red curve shows the system performance when there is no CFO, while the broken green curve shows the effect of a CFO value of 0.2 on the system performance. At 40 dB SNR, for instance, the bit error rate is below 10^-4 when the system is CFO-free, but it deteriorates to nearly 10^-1 for a CFO value of 0.2. Figure 3 thus shows the bit error rate (BER) deteriorating as more carrier frequency offset is introduced into the system; the performance should improve once a carrier frequency offset estimation technique is incorporated at the receiver side.

Figure 4a shows the signal received at the cooperating node from the source node, and Figure 4b shows the performance of an ideal OFDM-based cooperative communication system, in which there is no CFO and hence no need to mitigate the effects of mobility and frequency synchronization error. From the simulation result in Figure 4b, the BER of the system with cooperation improves as the SNR increases, starting from 5 dB. Hence, it can be concluded that at higher SNR levels cooperation yields a greatly improved system performance in terms of BER.

Figure 5 shows the effects of carrier frequency offset (CFO) deteriorating the performance of the OFDM-based second-order-diversity cooperative communication system; the simulation result shows how the system performance deteriorates for CFO = 0.2. The maximum likelihood estimation developed in Section IV is therefore used to mitigate the severity of the CFO in the system. Figure 6 shows how the integration of ML estimation into the system improves the system performance. The simulation result compares the performance of the system with and without cooperation for CFO values of 0 and 0.2. The performance of the system with AAF cooperation, when a carrier frequency offset of 0.2 is introduced, is nearly the same as that of the system without cooperation and with zero carrier frequency offset. This result shows how cooperation in wireless communication systems can significantly improve the performance in terms of BER against SNR.
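For orientation, the qualitative behaviour of the no-cooperation curves in Figure 3 can be reproduced with a Monte-Carlo skeleton of the following kind; it is an illustrative sketch over an AWGN channel only (no fading, cyclic prefix, relaying or CFO compensation) and is not the authors' simulator.

```python
import numpy as np

def qam4_mod(bits):
    """Gray-mapped 4-QAM with unit average symbol energy."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qam4_demod(symbols):
    bits = np.empty((symbols.size, 2), dtype=int)
    bits[:, 0] = symbols.real < 0
    bits[:, 1] = symbols.imag < 0
    return bits.ravel()

def ber_no_cooperation(snr_db, cfo_norm, n_frames=2000, n_sc=64):
    """BER of a plain OFDM link over AWGN with a normalized CFO (fraction of the
    subcarrier spacing, e.g. 0.2). Fading, CP and the relay path are omitted."""
    errors = total = 0
    noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
    n = np.arange(n_sc)
    for _ in range(n_frames):
        bits = np.random.randint(0, 2, 2 * n_sc)
        x = np.fft.ifft(qam4_mod(bits)) * np.sqrt(n_sc)          # unit-power OFDM symbol
        r = x * np.exp(2j * np.pi * cfo_norm * n / n_sc)         # CFO phase ramp
        r = r + noise_std * (np.random.randn(n_sc) + 1j * np.random.randn(n_sc))
        rx = np.fft.fft(r) / np.sqrt(n_sc)
        errors += np.count_nonzero(bits != qam4_demod(rx))
        total += bits.size
    return errors / total
```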
VI. VALIDATION OF THE SIMULATION RESULTS

The simulation results in Figure 7 and Figure 8 show how the performance of the developed cooperative communication system with the ML estimation technique exceeds that of the cooperative communication system with the self-cancellation (SC) estimation technique. The comparison between the ML and SC estimation techniques is made for CFO values of 0.15 and 0.3 and for QAM-4 and QAM-16 modulation. In all of the simulation scenarios, the ML estimation and compensation technique shows better performance; the maximum likelihood method gives the best overall results. Figure 7 shows the simulation result for the cooperative communication systems based on the ML and SC methods for a carrier frequency offset of 0.15 and 4-QAM modulation. The purple curve represents the performance of the cooperative system with the self-cancellation CFO estimator, and the green curve represents the performance of the cooperative system with the maximum likelihood CFO estimator. For this carrier frequency offset value and modulation type, the two methods show nearly the same performance for SNR values of up to 10 dB, but beyond that point the ML method yields better performance in terms of BER. For a CFO value of 0.3 and a QAM-64 modulation scheme, the ML-based cooperative system shows better performance over the entire range of SNR values, as shown in Figure 8. Hence, the system developed in this paper performs better than the widely studied and previously developed OFDM-based cooperative communication systems.

VII. CONCLUSION

The paper considered the most practical scenario in cooperative communication systems, in which the cooperating node receives a line-of-sight signal from the source node. It also investigated the effects of carrier frequency offset in an amplify-and-forward cooperative communication system and how these effects are mitigated by using multi-carrier transmission along with maximum-likelihood estimation of the CFO. The simulation results of the developed structure have also shown that a better BER can be achieved if the cooperating node is restricted to a limited range of distances from the source node.

Figure 2: Transmitted and received signal when there is no cooperation.
Figure 3: System performance of the OFDM system with no cooperation.
Figure 4: (a) Signal received at the cooperating node from the source node at 20 dB SNR; (b) performance of an ideal OFDM-based cooperative communication system (Rician environment between source and relay; AAF cooperation vs. no cooperation).
7,112
2014-11-04T00:00:00.000
[ "Computer Science", "Business" ]
Peculiarities of Acoustooptic Transformation of Bessel Light Beams in Gyrotropic Crystals (*)

The peculiarities of acoustooptic (AO) diffraction of quasi-nondiffracting vector Bessel light beams (BLB) on ultrasound waves in optically gyrotropic cubic crystals have been studied. The system of coupled equations describing the process of acoustooptic interaction is solved, and the diffraction efficiency is calculated. A mathematical description of the AO interaction is developed that differs from the corresponding description for plane optical waves by requiring two types of synchronism. It is shown that, besides the usual longitudinal synchronism realized at the equality of the phase velocities of the transmitted and diffracted waves, Bessel beams also require the so-called transverse synchronism. This is related to the fact that Bessel beams with differing cone angles have different spatial structures and, consequently, different values of the overlap integral with the input beam. The possibility of transforming the order of the phase dislocation of the Bessel beam wave front by AO diffraction has been investigated. It is proposed to use acoustooptic diffraction in gyrotropic cubic crystals as a method for dynamic manipulation of the polarization state of the output Bessel beam, in particular for transformation of left to right (and vice versa) polarization states.

Introduction

An important area of Bessel light beam (BLB) research is the elaboration of methods for transforming the phase dislocation order of their wave front. The acousto-optic (AO) interaction is promising for these purposes because it allows one to control the transformation process dynamically, unlike, for example, the well-known methods for transforming the spatial structure and order of BLBs developed in [1][2][3]. It should be noted that whereas the acoustooptic diffraction of light fields in the plane-wave approximation, or of Gaussian light beams, has been studied rather well [4][5][6], there are only a few papers on the AO transformation of BLBs [7,8]. As with plane waves, Bessel beams can be represented as rigorous solutions of the Maxwell equations [7,8]. This is important for studying vector AO interactions because it provides exact knowledge of the polarization states of such beams [9,10]. Among the various polarization states of BLBs, well-known ones are the radial (ρ-) and azimuthal (ϕ-) polarizations. Beams with such polarization states are more effective in some applications than beams with linear or circular polarization; for example, a number of papers are devoted to applications of such beams in laser technology (see, e.g., [11][12][13]). Due to their non-diffractive nature and narrow dark central region, high-order Bessel beams can be used for atom guiding over extended distances [14][15][16], as well as for focusing cold atoms [13]. In [14][15] the orbital angular momentum of a BLB is calculated, and the transfer of orbital angular momentum to a low-index particle trapped in optical tweezers with the help of a high-order Bessel beam is demonstrated. Some properties of interfering high-order Bessel beams and of BLBs with z-dependent cone angle were examined in [17][18][19][20], where the use of such beams for controlling the rotation of microscopic particles in optical tweezers and rotators is demonstrated.
The self-healing properties of interfering Bessel beams allow the simultaneous manipulation and rotation of particles in spatially separated sample cells [21]. Thus, the development of methods for generating and transforming Bessel vortices is of both scientific and practical interest. The use of the acousto-optic interaction for separating the transverse electric (TE-) and transverse magnetic (TH-) polarized components of Bessel beams in non-gyrotropic crystals has been proposed in [7,8]. In [8] the theory of AO diffraction of Bessel light beams on a plane acoustic wave in optically anisotropic, non-gyrotropic crystals is developed. In this paper the process of transformation of the Bessel beam order under AO interaction in optically gyrotropic crystals is investigated. It should be noted that for AO interaction in uniaxial or biaxial crystals optical gyrotropy is essential only for directions of light propagation in the vicinity of the optical axis, but in isotropic media and in cubic crystals optical gyrotropy should be taken into account for any direction of light propagation. In spite of the relatively low value of the gyrotropy parameter, taking it into account is important for a correct description of the AO interaction. The specific character of the mathematical description of this form of AO interaction lies in the necessity of fulfilling two types of phase-matching. Besides the usual longitudinal phase-matching, realized at the equality of the phase velocities of the transmitted and diffracted waves, BLBs also require the so-called transverse phase-matching. The latter is related to the fact that BLBs with different cone angles also have different spatial structures and, consequently, different values of the overlap integral with the diffracted beam. As a result, the AO interaction can be realized effectively only at the maximum of the overlap integral. The corresponding integrals should therefore be calculated, and the conditions under which they are maximal should be determined.

The paper is structured as follows. In Section 2 the geometry of the AO interaction of vector Bessel light beams in a gyrotropic medium is considered. In Section 3 the tensor of the dielectric permittivity in the presence of the AO interaction is presented in cylindrical coordinates. In Section 4 the equations for the slowly varying amplitudes of the interacting Bessel beams are considered. The analysis of the overlap integrals and the peculiarities of the AO interaction of Bessel beams in gyrotropic cubic crystals of different symmetry are considered in Section 5. A conclusion is given in Section 6.

Bessel Beams in a Gyrotropic Medium

Let us consider the geometry of AO interaction in which a TH-polarized Bessel beam is incident from an isotropic medium with refractive index n_1 onto the boundary with a gyrotropic medium (an optically gyrotropic crystal of a cubic symmetry class) along a crystallographic axis z (a 4-fold, 2-fold, or 3-fold axis) of the cubic crystal (Fig. 1). It follows from the boundary conditions that two Bessel beams with different phase velocities and polarization states will propagate in the crystal. They will be denoted below as right (+) and left (−) [22]. The cones of the wave vectors of these beams are circular. The wave vectors k_+ and k_−, belonging to these cones and lying within the plane (x, z), are shown in Fig. 1.
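For orientation, a scalar m-th order Bessel beam propagating along z has the familiar form below; the vector mode functions of the right (+) and left (−) beams used in the paper are richer (as noted below, they mix Bessel functions of orders m and m + 1 and include a longitudinal component), but they share the same transverse structure.

```latex
% Scalar m-th order Bessel beam (illustrative form): q is the transverse wavenumber,
% gamma the cone angle, k the wavenumber in the medium.
E_m(\rho,\varphi,z) \;\propto\; J_m(q\rho)\, e^{i m \varphi}\, e^{i k_z z},
\qquad q = k\sin\gamma, \qquad k_z = k\cos\gamma .
```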
Thus, at incidence of a TH-polarized Bessel beam on the boundary with a gyrotropic medium, two BLBs, right (+) and left (−), with different phase velocities propagate [22]. The electric field vectors of these beams are written as Eqs. (1), (2), whose vector mode functions involve the unit vectors of the cylindrical coordinate system, the specific polarization rotation, the cone angle γ_0 of the refracted BLB in the absence of gyrotropy, the gyrotropy parameter α of the medium, the integer m, and the cylindrical coordinates ρ, ϕ, z. The amplitudes of the two beams depend on the transverse coordinate in the same way: as follows from Eqs. (1), for an m-order BLB this dependence is described by Bessel functions of orders m and m + 1, respectively. Moreover, the electric fields contain a longitudinal component proportional to the m-order Bessel function. Note that the vector electric fields (1), (2) are rigorous solutions of the Maxwell (or Helmholtz) equations.

The Tensor of Dielectric Permittivity at the AO Interaction

In the presence of the AO transformation the interacting fields are described by the Helmholtz equation. In the case studied here, of field propagation along the crystallographic axes of a cubic crystal, the problem is axially symmetric; the AO transformation problem is therefore conveniently solved in cylindrical coordinates. The undisturbed tensor of the dielectric permittivity of a cubic crystal has the well-known diagonal form in Cartesian coordinates, with components ε_xx = ε_yy = ε_zz = ε. In cylindrical coordinates with the z-axis parallel to the beam optical axis, this tensor is also diagonal and has the same components. The tensor Δε_ij depends on the polarization state of the acoustic wave. We consider the case when the acoustic wave is transversely polarized along the y-axis and propagates along the z-axis, with wavenumber K and angular frequency Ω. In this case the diagonal components of the dielectric tensor do not change, but nondiagonal components arise; for gyrotropic crystals belonging to the symmetry classes 23 and 432 the nondiagonal components are given by Eqs. (4), (5). From Eqs. (4), (5) it follows that the photo-elasticity caused by a plane acoustic wave essentially changes the effective tensor of dielectric permittivity ε. In particular, when the acoustic wave is transversely polarized along the y-axis, the tensor Δε has azimuthally dependent non-diagonal components.

Equations for Slowly Varying Amplitudes (SVA)

It is assumed that the AO interaction of Bessel beams, similarly to that of plane waves, leads first of all to a z-modulation of the scalar amplitudes of the right (+) and left (−) beams, whereas their vector mode functions are considered unchanged. Such a regime of AO transformation means the absence of transformation of the spatial structure of the BLBs in the process of energy exchange, and it can be explained physically. Firstly, due to the linearity of the AO process, its efficiency does not depend on the local intensity of the beams, and without locally inhomogeneous disturbances the BLBs preserve their transverse profile due to their known nondiffracting property. Secondly, all plane-wave components of the BLBs are transformed under identical conditions of longitudinal and transverse phase-matching, owing to the cylindrical symmetry of the problem that results from the propagation of the beams along the crystallographic axis.
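The SVA equations derived in the next section describe a two-mode coupling; as an orientation only, a generic textbook form of such a system, for a coupling coefficient χ and longitudinal mismatch Δk_z (not the paper's exact Eqs. (6)-(13)), is

```latex
% Generic two-mode coupled-amplitude system (illustrative textbook form, not the
% paper's exact equations); chi: coupling coefficient, \Delta k_z: phase mismatch,
% boundary condition A_-(0) = 0.
\frac{dA_{+}}{dz} = i\chi\, A_{-}\, e^{\,i\Delta k_z z}, \qquad
\frac{dA_{-}}{dz} = i\chi^{*}\, A_{+}\, e^{-i\Delta k_z z},
\qquad
\eta(z) \equiv \frac{|A_{-}(z)|^{2}}{|A_{+}(0)|^{2}}
       = \frac{|\chi|^{2}}{p^{2}}\,\sin^{2}(p z),
\qquad p = \sqrt{|\chi|^{2} + \left(\Delta k_z/2\right)^{2}} .
```

In this generic form the diffraction efficiency oscillates as sin²(pz), and the full-transfer length for Δk_z = 0 scales as π/(2|χ|), which is consistent with the oscillating, reversible energy exchange described in the next section.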
To derive the SVA equations for the AO interaction of Bessel beams, the solutions are represented in the form of Eqs. (1), (2) with slowly varying amplitudes. As is seen from Eq. (6), the tensors of AO scattering into the m ± 1 channels have both imaginary and real components. These results are sufficient to derive the SVA equations. The solution of Eqs. (7) and (9) for the diffracted BLB, corresponding to the appropriate boundary condition, describes an oscillating process of reversible energy transfer between the left (−) and right (+) Bessel beams. Eq. (12) also shows that the efficiency of energy transfer into this channel is determined by the corresponding parameter χ^{−,+}_{m,n}.

Analysis of Overlap Integrals

We have performed a numerical simulation of the AO interaction for parameters providing good AO efficiency [4,5]. The results are presented in Fig. 2. From Fig. 2 it follows that the maximal value of the AO coupling parameters is realized at approximate equality of the transverse wavenumbers of the incident (q_0) and scattered (q) BLBs, q ≈ q_0. From Fig. 2b it is also seen that the condition of spatial synchronism does not depend on the order m of the incident BLB. In addition, the width of the main maximum decreases with increasing Bessel beam radius (Fig. 2a). The given plots allow one to calculate the length z_0 of energy transfer from the incident beam to the diffracted one. As is seen from Eq. (12), the transfer length is determined by the parameter p (Eq. (13)), which for Δk_z = 0 yields a transfer length of approximately 10 mm. Consequently, the BLB is still preserved within the transfer length, which means that the described regime of BLB transformation at AO interaction can be realized in practice.

Conclusions

The paper develops a theory of the AO interaction of Bessel beams with plane acoustic waves in optically gyrotropic crystals. The geometry considered is one in which the incident and diffracted Bessel beams and the acoustic wave propagate along the crystallographic axis of a gyrotropic cubic crystal. By going over to cylindrical coordinates it is possible to describe correctly the different types of AO interactions of vector Bessel beams in this geometry. The mathematical description of the AO interaction is carried out taking into account the influence on the interaction efficiency of two types of phase matching: i) the usual longitudinal synchronism, realized at the equality of the phase velocities of the transmitted and diffracted waves, and ii) the transverse phase synchronism, corresponding to the maximal value of the overlap integral. A numerical calculation of the overlap integrals for crystals with cubic symmetry has been carried out. As a result, the AO interaction is realized effectively only in the vicinity of the maximum of the overlap integral. The calculations of the specified integrals allow one to determine the angular width of the diffracted light beams. The advantage of the proposed method is the possibility of controlling the efficiency of AO diffraction, owing to which various polarization states of the output Bessel beam can be generated. By changing the power or frequency of the acoustic wave, it is possible to switch between the right (+) and left (−) Bessel modes, i.e., to obtain arbitrarily polarized Bessel light beams at the output face of the crystal plate and to manipulate the polarization state of the output optical field in time.
The acoustooptic process studied here can also be used as a method for the formation of radially and azimuthally polarized Bessel beams.
3,102.4
2015-01-01T00:00:00.000
[ "Physics" ]
E-Learning Financing Models in Russia for Sustainable Development

E-learning brings new dimensions to traditional education. This especially affects countries that, due to many factors, have historically been considered the "talent pool" for the world community. In this study, a model for financing e-education has been developed that is applicable to Russian realities. The model was built around the balance between demand (global politics, economics, and principles of sustainable development) and supply (sources of direct financing). As a result, a key challenge of improving the e-learning financing methodology and models, specifically the efficiency of government spending and private investing, demands the use of new approaches and mechanisms. To improve e-learning financing, a clear understanding of the applied purpose of public and private means is required. Responsibilities for the e-learning outcomes of institutions that receive financing are linked to their status. An unclear understanding of these issues is more likely associated with the issue of transparency of financing than with inefficiency. The proposed model allows transforming the "standards" of financing both in the field of e-education and in Russian education in general, and presents a new vision of participants' interaction in the educational process, taking into account a set of restrictions and market features.

Introduction

Presently, e-learning development is driven by advances in information technology (IT), and e-learning itself connects many students across the world, enabling them to experience education at any educational institution without leaving home. There are many e-learning management systems that ensure a complete educational process through modern Internet technologies [1][2][3][4]. E-learning for sustainable development is generally aimed at (1) promoting and improving the quality of continuous education; (2) ensuring the acquisition of knowledge, skills and values necessary to reach sustainability; and (3) redefining curricula to increase public awareness through a better understanding of sustainable development. Ways to redefine the educational framework include the rethinking, integration, reform and greening of education [1,[3][4][5][6]. The science of sustainable development is a separate research field that has capacities, scientific and technical skills, methodologies and competencies of its own. Yet, it associates knowledge with actions. An optimal e-learning funding model for sustainable development is a predictor of the generation of leaders: such a model may enable the emergence of effective learning drivers for students of all ages and help move towards sustainable social models. For this reason, the study presents a model of optimal financing of online education, created in view of the current situation in the Russian market and its immense demand for e-learning [33,34]. Even though the Russian market is characterized by insufficient development of the corresponding infrastructure [35] and strong geopolitical and macroeconomic pressure (primarily sanctions and embargoes) [36], Russia remains an integral part of the world community due to its large population and high level of HDI. Therefore, the formation of an effective mechanism for financing the Russian e-learning system is relevant not only as a local task; it is also crucial for the theoretical improvement of personnel's knowledge, since it can affect the sustainability of the global economy's development [37].
Research Design

E-learning for sustainable development is closely linked to [38,39] sustainability in politics, planning and management; financing of courses and curricula; offline research; outreach and services; and assessment and reporting. The emphasis is normally laid on the promotion of interdisciplinary thinking and analysis, which is the basis of sustainable development, by teaching more complex connections between economic, social and environmental concepts (Figure 1).

Figure 1. A graphical representation of e-learning for sustainable development and its dimensions, adapted from [39].

The pace and quality of e-learning for sustainable development are significantly influenced by administrative, technological, financial and intellectual components, as well as by efficiency assessment and control. The unequal access of a country's inhabitants to the Internet is among the challenges of educational strategy development. Progress in e-learning for sustainable development tends to slow down due to the relatively high cost of broadband Internet services and infrastructure, causing the telecommunication gap between different cities and regions to widen. It should be noted that the broadband Internet tariff policy is built around the broadband infrastructure rather than the financial strength of the regions, especially in Russia, where the country's enormous size causes broadband infrastructure expenditures to grow. Therefore, the geographical location of the country is among the decisive factors influencing infrastructure spending and, accordingly, the information product [40]. Telecommunication inequality between metropolitan and local (distant) consumers can be eliminated through rapid scaling with modern information technologies. Although broadband Internet tariffs in Russia are lower than those in Western Europe, some internal contradictions are exposed; for instance, the average Internet speed in Moscow and St. Petersburg is higher than in other cities of the country, while the tariffs are lower. Therefore, it is critically important to take into account the objective economic situation, since it forms the mechanisms of interaction in the market. Today, it is customary to single out three main models of interaction in the field of education financing: the free market, social market and anti-market models [41] (Table 1).

Table 1. E-learning financing models, adapted from [41].
Free market. Being in the market improves the quality of education owing to the mechanisms the market offers to control the education system, among them personnel training and mechanisms that accelerate the growth of profit of educational institutions; the government retains its control function, but its role shrinks as education spending is reduced. Goal: equal opportunities for all.
Social market. Partnership between the government and private companies stimulates intense and productive activity owing to increasing private investment and the reduction of governmental funding; this model promotes e-learning development through privatization. Goals: outreach towards middle-class learners; social tension relief; solving social and education-related problems.
Anti-market. E-learning is financed by the government (i.e., from tax revenue of non-educational organizations); this model implies an increase in the number of employees. Goal: education quality improvement via feedback analysis.
The above models are designed for strategically important tasks of internal and external optimization and provide for cost reduction, efficient use of fixed assets, increased enrollment, and the creation of a universal system for education financing and of specialized educational institutions. In Russian realities, these models of interaction are de facto not present in pure form; rather, they appear in mixed form and, moreover, are heterogeneous across regions. This is primarily due to the federal structure of the state, in which each region determines its own financial policy.
The latter once again emphasizes the distinctness and uniqueness of the research task. The purpose of the study, namely the development of an optimal e-learning funding model for sustainable development, can be achieved through the gradual accomplishment of the following procedures: (1) formation of the mechanism of resource efficiency of the organizational unit (educational institution), which constitutes the conceptual basis of the model; (2) identification of financing principles for achieving sustainable development, which defines the practical application of the model; and (3) synthesis of quantitative and qualitative data reflecting the conjuncture of the Russian e-learning market, which adapts the model for implementation in the online learning market of the Russian Federation. High-quality implementation of each procedure will allow the creation of a model for financing e-education for sustainable development in the Russian Federation. Such a financing system embraces both private and public sources of finance and seeks to employ new forms of financial support. Apart from curriculum development and financing, institutions tend to explore pedagogical approaches to education and to establish effective programs to ensure e-learning for sustainable development [42][43][44].

Data Analysis

In order to ensure that the developed model matches Russian realities as closely as possible, the following elements of the Russian learning environment should be clarified and analyzed: (1) To what extent is e-learning widespread in Russia? How many academic programs does it cover? To what level of education do such programs belong? This information specifies the product for which the financing model will be developed. (2) What is the e-learning market size in the Russian Federation? To what sectors and how are expenses allocated? These data form an understanding of Russian financial institutions and mechanisms. (3) Who are typical e-learners? What organizational units are implementing online education? This information will provide a better understanding of the range of study objects, whose features and needs will be taken into account in the developed model.

(1) By electronic support in education, the authors of this study mean educational Internet resources, mobile applications, computer games, and educational video content. All of these formats are also suitable for home education but require adaptation to general educational programs. To achieve this, b2b solutions in the field of distance and online learning methods can be adapted; training videos, online content for developing competencies in sustainable development, as well as blended forms of training, can serve this purpose. At the beginning of the 2017-2018 academic year, the Russian Federation implemented with electronic support 3097 bachelor's degree programs, 265 associate-level degree programs, and 1370 master's degree programs [45] (Figure 2). These figures were calculated from the current and historical numbers of students in the official statistical information of the Ministry of Education of the Russian Federation. Private and state segments were taken into account (both at the federal level and by regions of the Russian Federation). Then, on the basis of the data obtained, students were divided by degree.
From the chart above, distance education programs make up 33% of all higher education programs, with a 12.6% share for bachelor's degree programs, an 8.2% share for associate-level degree programs, and a 9.5% share for master's degree programs [45] (Figure 3); these shares were calculated taking into account the proportions of distance and full-time programs in this segment of the education market. Such an analysis was made possible by information obtained from the largest sites and aggregators of educational programs in higher and secondary vocational education. (2) E-learning expenses in Russia are projected to reach 52.8 billion rubles in 2021, while individual budgets vary between segments of education [46] (Figure 4). By multiplying the size of the average check by the number of students, the authors obtained estimates of the volume of the online learning market.
(3a) According to the materials of the HSE study "Monitoring the economics of education" (2016), the largest Russian companies take advantage of e-learning. These companies' e-learning covers employee orientation training, critical incident management training, team building, information systems and software products training, international standards training, as well as bookkeeping, accounting, and auditing training. Individual learners make use of all resources available on the Internet, especially free lectures, training videos, and webinars. The use of e-learning is free of the restrictions specific to conventional education, whereas the effectiveness of the method being used depends on the industry to which the e-learning course is linked. In the corporate sector, the distribution of e-learning applications across industries is not even, as evidenced by Figure 6, created using adapted data from [45]. Companies in the e-learning market, apart from education service providers, are companies that produce e-learning software and participate in the development of educational materials and courses. This differentiation is conditional, since most modern companies offer e-learning in a comprehensive manner. (3b) The educational sector includes state-owned and private educational organizations, including companies that provide educational services. It should be noted that e-learning in this sector penetrates all fields to one extent or another. Today, Russia has several state institutions that offer online education in any field [44,47]. The practical value of the developed model and its further implementation are primarily oriented toward state-owned and private educational organizations (the research objects mentioned above).
Results

Based on the research findings, several models have been developed that characterize the financing of e-learning for sustainable development and are applicable to modern Russian market conditions. The first financing model is the 4E framework, encompassing the dimensions of effectiveness, efficiency, economy, and equity. The effectiveness dimension assesses the quality of an educational institution's work (the provision of e-learning services) by evaluating the fulfillment of educational goals. Efficiency assesses profitability and how efficiently the available resources are employed. The use of the 4E financing framework for e-learning for sustainable development ensures fairness of performance appraisal and reflects the market orientation of the business (Figure 7).

For e-learning to solve the problem of sustainability, the focus needs to be on the following aspects: • financing the learning process to achieve relevant core competencies; • financing education for sustainable development with the purpose of socialization; • financing individual training for personal growth. The focus of education on life situations, real and practical problems, and real experience is also a determining factor in achieving sustainability (Figure 8).
A dual-system approach, together with the assessment of prerequisites for e-learning financing, represents an integrated financing model. Unlike typical models, this framework operates through the mechanism of interaction between supply and demand and hence enables a balance between the quantity of e-learning products and services offered and the quantity of e-learning products and services that learners desire. The balance between supply and demand corresponds to a module of e-learning cost generation. E-learning is undoubtedly in demand, owing to sustainability efforts across the global economy. The economic effectiveness of e-learning refers to the total spending on its provision. As each single component of demand rises, e-learning consumption increases. For instance, if the Internet service tariff drops, then the demand for and consumption of Internet services will grow. E-learning service consumers encourage supply in the market. The major sources of finance in e-learning for sustainable development in Russian realities are the local budget, extra-budgetary funds from paid educational services (e.g., tuition fees), and sponsorship. In general, financing policies influence consumer decision making. The indicator in the center of the scheme (Figure 9) is the balance of supply and demand, or the module of e-learning cost generation. Normally, market mechanisms redistribute funds that come from e-learners as part of payment for services. In case of a shortage of e-learning service consumers or underfunding, e-learning providers seek to attract investment. In this case, consumers will make the maximum use of the offerings and continue to learn. Over time, the number of e-learners will increase. Once this happens, the e-learning providers will reduce their activity.
However, it should be emphasized that demand is much more unstable compared to supply; it can rapidly and unexpectedly change depending on the global market situation (including force majeure such as a pandemic), and it takes time to stabilize. A significant impact on the supply-demand system is exerted by macro factors, including geopolitical processes. Thus, both sides are exposed to the risk of global market instability. Discussion A distinctive feature of the study is that it uses a dual-system approach to assess and model the financing of e-learning for sustainable development. This approach enabled the acquisition of specific and accurate results. The works on global e-learning trends, on the place of e-learning in the education system, and on the e-learners' role have been reviewed. The review has revealed that many analysts recognize the importance of integrating sustainability topics with multiple teaching and learning methods. E-learning brings new dimensions to traditional education and increases students' motivation to study. Furthermore, e-learning can increase students' readiness to study if they are allowed to switch their social roles within the program, creating new ways to learn and solve environmental, economic, and social problems online [48][49][50][51][52]. Many analysts emphasize that successful online students should have a greater inclination to transfer knowledge to a new domain, a greater sense of community and communication, as well as greater knowledge and independence, leading to successful learning [53][54][55][56][57]. Unfortunately, the factor of financing (its sources and methods) was insufficiently studied, and thus incorrect forecasts were generated and research vectors were shifted. Those analysts who explored the dimension of education finance emphasized the importance of choosing the appropriate sources and model of financing that could be efficiently integrated into the overall framework [58][59][60][61][62]. This opens new possibilities for further research in this field. The application of the dual-system approach is a rather novel practice that may be effective as part of international operational models of financing e-learning for sustainable development.
The experience of establishing a framework for education financing may be found in reviews of industry-specific markets and in papers devoted to management modeling. However, no specific methods for e-learning financing have been found. Education in Russia is financed primarily through the mechanism of inter-budget redistribution. In accordance with the Budget Code of the Russian Federation, educational institutions can be financed directly only from the budget of the level to which the direct founder of a particular educational institution belongs. This creates an imbalance at the level of the entire state and does not correspond to the relations of supply and demand. As a rule, the key problems associated with financing education in the Russian Federation are attributed to a budget deficit [63]. However, the problem often boils down to the irrational use of the budget [64]. Today's budget financing mechanism is demonstrating its inefficiency and requires conceptual changes. However, transformations should concern not only the inclusion of the "private sector" in the financing model, but also the improvement of state instruments, which requires political will. Among these politically dependent initiatives are the following: (1) taking into account, when forming the budget for financing educational institutions, the features of the functioning of individual educational institutions, their material base, and their territorial location [65]; (2) introducing a unified system for analyzing the use of budgetary and extra-budgetary funds by educational institutions [66]; (3) legalizing the provision of modern services that contribute to the development of human capital and often lie outside the legal field due to the inertia of the bureaucratic legislative process [67]. Conclusions The findings revealed a dependence between the development of e-learning for sustainable development and the upward trend in the economic efficiency of education. Active private capital enables a partial reduction of government spending, the optimization and improvement of education management, as well as higher salaries. For learners, active private capital means lower costs of purchased educational materials. This creates a balance between supply and demand, which is critical in the context of the Russian market. Implementing the e-learning concept for sustainable development in Russia will make it possible to enhance the key competencies of each learner, promote personal growth, and organize a learning process that takes into account learners' personality types, their knowledge, the time available for learning and, most importantly, their financial capabilities. Putting education into an electronic format does not mean downgrading its quality, since the incorporation of modern information technologies and software contributes to improvement and permits a rapid update of learning materials to meet modern requirements. It should be noted that effective digitalization of the education system requires higher Internet coverage and higher digital literacy of the population, while the Russian regions are heterogeneous in the latter respect. The electronic segment of education also meets the general principle of lifelong education, which reinforces the sustainability aspect. The e-learning financing model proposed for Russian realities will allow all subjects of this process to develop in accordance with the requirements of the time and to improve information technology.
At the same time, a comprehensive methodology for financing e-learning serves as a tool to improve the education system as a whole and incorporates the most important components of the sustainable development of society. Conflicts of Interest: The authors declare no conflict of interest.
7,032.2
2020-05-28T00:00:00.000
[ "Economics" ]
Parametric Weibull Model Based on Imputation Techniques for Partly Interval Censored Data The term survival analysis has been used in a broad sense to describe a collection of statistical procedures for analyzing data in which the outcome variable of interest is the time until an event occurs; the time to failure of an experimental unit may be censored, and the censoring can be right, left, interval, or partly interval censoring (PIC). In this paper, the analysis is conducted with a parametric Weibull model for PIC data. Moreover, two imputation techniques are used: left point and right point. The effectiveness of the proposed model is tested through numerical analysis on simulated and secondary data sets. Introduction Statistical methods are among the strategies used by researchers, as they provide various kinds of tools for analyzing data. One of the methods used in data analysis is survival analysis. Survival analysis, or failure time analysis, has been described as one of the most significant and advanced methods in statistics during the last quarter of the 20th century (Sam and Krings (2008)). It is a significant statistical method because it deals with the failures of components (Singh and Totawattage (2012)). Kleinbaum and Klein (2005) described survival analysis as a procedure for analyzing data statistically in which the outcome is the time until an event occurs. There are many applications of survival analysis, for example in medicine, engineering, education, economics, and other areas. Survival analysis has been most widely used in biomedical as well as engineering applications. As mentioned by Liu (2012), one example of an engineering application that uses survival analysis is the testing of the lifetime or durability of a mechanical or electrical component. Scientists apply this technique to track the life span of products and materials in order to predict product reliability and durability. Lawless (2003) noted that the duration could be compared with the lifetime of a marriage; a marriage may end due to annulment, divorce, or death. Another example, from the education domain, is Eagle and Barnes (2014), who used a survival analysis approach to measure the time until an event occurs and to account for teacher attrition. This research uses a Weibull model for secondary data from the medical field and for simulated data based on education data sets. When dealing with survival data, censoring must be considered. Censoring occurs when the information on the failure time of some subjects is incomplete. There are different reasons for censoring, which lead to different types of censored data: right, left, and interval censoring. One of the most important types of interval-censored data is partly interval-censored data, which means that for some of the subjects the event of interest is exactly observed, while for others it lies within an interval (Kim (2003)). In this paper, the analysis is conducted on partly interval-censored data using simulated and secondary medical data. Weibull distribution model For lifetime data, one of the most useful distributions for analysis and modelling in various fields, such as medicine, biology, and engineering, is the Weibull distribution. It is applicable to various failure situations and was proposed by Weibull (1939). Lee and Wang (2003) noted that the Weibull distribution is used in many studies of human disease mortality and in reliability studies.
It is described by two parameters: a shape parameter that determines the form of the distribution curve and a scale parameter that determines the scaling (the two parameters appear in Equation 1). Lee and Wang (2003) give the probability density function of the two-parameter Weibull distribution as $f(t) = \frac{\beta}{\alpha}\left(\frac{t}{\alpha}\right)^{\beta-1}\exp\!\left[-\left(\frac{t}{\alpha}\right)^{\beta}\right]$, where β and α denote the shape and scale parameters, respectively. The cumulative Weibull distribution function is given as $F(t) = 1 - \exp\!\left[-\left(\frac{t}{\alpha}\right)^{\beta}\right]$. This study presents the model in terms of survivorship, and the survival probability curve is estimated using censored data based on the Weibull distribution model. The likelihood function for the Weibull distribution with data involving exact failures, right-censored, and interval-censored observations is given by (Guure, Ibrahim, and Adam (2012)) $L(\alpha,\beta) = \prod_{i \in F} f(t_i) \prod_{j \in R} S(t_j) \prod_{k \in I} \left[S(L_k) - S(R_k)\right]$, where $S(t) = \exp[-(t/\alpha)^{\beta}]$ is the survival function, F, R, and I index the exact, right-censored, and interval-censored observations, and $[L_k, R_k]$ is the censoring interval. The log-likelihood is then taken from Equation (4) and differentiated with respect to α and β, and a numerical method, such as the Newton-Raphson method, is applied to obtain the values of β and α. Results and data analysis This paper illustrates the implementation of the methods discussed in the earlier sections using two data sets. The first one is breast cancer data, and the second one is auto-generated data. All calculations were performed using R software. In the simulation study of Section 5 we used simple imputation methods to impute the missing or exact data. The two imputation methods are the right-point method, in which the event time is imputed by the right limit of the interval, and the left-point method, in which the event time is imputed by the left limit of the interval (a minimal sketch of the imputation and fitting procedure is given below). Breast cancer data Several researchers have used this data set in their studies; for example, Zyoud, Elfaki, and Hrairi (2016) modified the partly interval-censored data and compared the results with the Turnbull method. There are two treatment groups, Radiation (R) and Radiation + Chemotherapy (R+C), with 66 patients in the first group (R) and 68 patients in the second group (R+C). The objective is to compare the cosmetic effects of the first treatment against the second treatment on women with early breast cancer, and the event of interest is the time to the first occurrence of breast retraction. The actual dates were recorded when patients visited the clinic, every 4 to 6 months. To set up the data as PIC, the same procedure is followed as in Alharphy and Ibrahim (2013) and Zyoud et al. (2016). Figure 1 shows the survival curves for Radiotherapy and Radiation + Chemotherapy obtained with the Weibull distribution and the Turnbull method. It is clear from the figure that the estimated survival curves obtained by the Weibull model lie close to those obtained by Turnbull. These results indicate that the proposed model fits well compared with the Turnbull method. Parameter estimates (shape and scale) of the Weibull distribution and standard errors (se) for the two treatments are presented in Table 1. Moreover, the likelihood ratio test for this model gives a value of 12.86, with a P-value of almost zero. Simulation data To evaluate and study the behavior of statistical procedures, simulation is commonly used, especially when a problem cannot be solved analytically (Elfaki, Bin Daud, Ibrahim, Abdullah, and Usman (2007)). The technique requires the setup of many samples. The samples are then individually evaluated in terms of the statistics of interest, and the overall statistics of interest are used to study distributional properties.
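The following is a minimal sketch, not the authors' R implementation, of the procedure described above: interval observations are imputed by their left or right end point and a two-parameter Weibull model is then fitted by maximum likelihood. A generic optimizer is used here in place of the Newton-Raphson step mentioned in the text, and all data values and function names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def weibull_loglik(params, t, event):
        # log-likelihood with exact failures (event=1) and right-censored times (event=0)
        beta, alpha = params  # shape, scale
        if beta <= 0 or alpha <= 0:
            return -np.inf
        z = (t / alpha) ** beta
        log_f = np.log(beta / alpha) + (beta - 1) * np.log(t / alpha) - z
        log_S = -z
        return np.sum(event * log_f + (1 - event) * log_S)

    def fit_weibull(t, event):
        res = minimize(lambda p: -weibull_loglik(p, t, event),
                       x0=[1.0, float(np.mean(t))], method="Nelder-Mead")
        return res.x  # estimated (shape, scale)

    # hypothetical partly interval-censored sample: two exact failure times
    # plus three interval observations [L, R] to be imputed by an end point
    exact_times = np.array([7.5, 9.1])
    intervals = np.array([[4.0, 6.0], [6.0, 10.0], [8.0, 12.0]])

    t_left = np.concatenate([exact_times, intervals[:, 0]])   # left-point imputation
    t_right = np.concatenate([exact_times, intervals[:, 1]])  # right-point imputation
    event = np.ones_like(t_right)  # imputed times are treated as observed failures

    print("right-point fit (shape, scale):", fit_weibull(t_right, event))
    print("left-point  fit (shape, scale):", fit_weibull(t_left, event))

Right-censored observations could be included by setting the corresponding entries of event to zero, which is how the likelihood quoted in the text handles them.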
The objective of this simulation study is to compare the survival functions for local and international students based on partly interval-censored data. The simulated data were generated based on an education data set with two groups, local and international students (this education data set is not provided in this paper; readers are referred to Saeed (2018)). The Weibull distribution is used to generate the data (the Weibull distribution was found to fit the original data very well, as in Saeed (2018)). To generate the data we used means and standard deviations of 19.7538 and 1.3354 for local students and 11.749 and 0.0548 for international students, respectively. A sample of 500 observations was generated for each of the local and international student groups. Results from partly interval censored data There are two scenarios: in the first scenario the study takes 50% of the observations as exact and 50% as interval-censored; in the second scenario, the study takes 70% as exact and 30% as interval-censored. The results for these partly interval-censored data are shown in Figures 2, 3, 4 and 5. These figures show the survival curves obtained by the Weibull model for local and international students, compared with the Turnbull method. The local students showed longer survival compared to the international students in both scenarios, which indicates that the local students are more stable compared with the international students. Tables 2 and 3 show the shape and scale parameters obtained by our model based on right-point imputation, and the results look similar for both student groups (local and international) in the two scenarios. Likewise, the estimated parameters obtained by left-point imputation for the two scenarios are almost identical with respect to the parameters and standard errors (Tables 4 and 5). Moreover, the likelihood ratio tests obtained by right-point imputation for the two scenarios are 38.11 and 35.6, respectively, and those obtained by left-point imputation are 38.48 and 40.38, respectively, with P-values of approximately zero, which indicates the significance of the model. Concluding In this study, a Weibull model is used with simple imputation techniques, the right point and the left point, to simplify the procedure for partly interval-censored data. The estimated survival function was obtained based on maximum likelihood estimation, and comparisons were made with the existing literature. From the breast cancer data, it is confirmed that the proposed model fits well and is easy to implement compared with the Turnbull method. The simulated data were based on the education data, with 500 observations generated for the international and local student groups. It can be concluded that the Weibull model, based on the simulation results, is suitable for partly interval-censored data compared with interval-censored data. Finally, the results show that when the data contain more exactly observed times, the model fits better, which is in line with results obtained by other researchers such as Kim (2003), Zyoud et al. (2016), and Alharphy and Ibrahim (2013).
2,284.8
2020-02-20T00:00:00.000
[ "Mathematics" ]
Gluon polarization measurements from longitudinally polarized proton-proton collisions at STAR Jets produced in the pseudo-rapidity range, $-1.0<\eta<1.0$, from $pp$ collisions at RHIC kinematics are dominated by quark-gluon and gluon-gluon scattering processes. Therefore, the longitudinal double spin asymmetry $A_{LL}$ for jets is an effective channel to explore the longitudinal gluon polarization in the proton. At STAR, jets are reconstructed in full azimuth, from the charged-particle tracks seen by the Time Projection Chamber and electro-magnetic energy deposited in the Barrel and Endcap electro-magnetic calorimeters at both $\sqrt{s} = $ 200 and 510 GeV. Early STAR inclusive jet $A_{LL}$ results at $\sqrt{s} = $ 200 GeV provided the first evidence of the non-zero gluon polarization at $x>$ 0.05. At $\sqrt{s} = $ 510 GeV, the inclusive jet $A_{LL}$ is sensitive to the gluon polarization as low as $x \sim $ 0.015. In this talk, we will discuss recent STAR inclusive jet and dijet $A_{LL}$ results at $\sqrt{s} = $ 510 GeV and highlight the new techniques designed for this analysis, for example the underlying event correction to the jet transverse energy and its effect on the jet $A_{LL}$. Dijet $A_{LL}$ results are shown for four topologies in regions of pseudo-rapidity, effectively scanning the $x$-dependence of the gluon polarization. Introduction Early deep inelastic scattering (DIS) experiments in the 1980s showed that quarks inside the proton make only a small contribution to its total spin [1]. Where the rest of the proton spin comes from remains an outstanding problem to be explored. Theorists introduced parton distribution functions (PDFs), f(x, Q^2), to describe the probability of finding a parton with momentum fraction x when probed at the energy scale Q^2. Jaffe and Manohar proposed that not only quarks contribute to the proton spin, but also gluons and the orbital angular momenta of quarks and gluons [2]. However, the kinematic space in x-Q^2 covered by the polarized fixed-target experiments through DIS processes provides only limited constraints on the gluon polarization inside the proton [3]. Unlike DIS experiments, a polarized hadron-hadron collider at high center-of-mass energy, √s, such as the Relativistic Heavy Ion Collider (RHIC) [4,5,6], can provide direct access to the gluon polarization inside the proton. At RHIC, either transversely or longitudinally polarized proton beams collide at both √s = 200 and 510 GeV. To explore the gluon polarization, we measure the longitudinal double-spin asymmetry, A_LL, for jets, defined as the fractional difference of the jet cross sections when the beams have the same and opposite helicities. A_LL can be expressed as the sum of convolutions of the polarized PDFs and the partonic longitudinal double-spin asymmetry â_LL over all possible partonic processes. Next-to-leading order (NLO) perturbative quantum chromodynamics (pQCD) calculations show that the qg and gg processes dominate jet production at RHIC kinematics [7]. Both qg and gg processes have sizable â_LL [8], therefore the jet A_LL is sensitive to the gluon polarization. The same applies to A_LL measurements for hadrons, for example π0. Given the beam polarizations, P_1 and P_2, and the relative luminosity R (the ratio of the luminosities recorded in the same- and opposite-helicity beam configurations), A_LL is determined experimentally from the spin-sorted jet yields (a minimal sketch of this estimator is given below). 2. Inclusive jet and dijet A_LL measurements at STAR Solenoidal Tracker at RHIC (STAR) [9] has published a series of inclusive jet and dijet A_LL results at √s = 200 GeV [10,11,12].
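Since the explicit formula is not reproduced in the text above, the following is a minimal sketch assuming the standard yield-based estimator used in RHIC spin analyses, A_LL = (N_same − R·N_opp) / (P_1 P_2 (N_same + R·N_opp)), built from the quantities just defined; the function name and the numbers in the example are hypothetical.

    def a_ll(n_same, n_opp, rel_lumi, pol1, pol2):
        # n_same, n_opp : yields for same- and opposite-helicity bunch crossings
        # rel_lumi      : relative luminosity of same- to opposite-helicity configurations
        # pol1, pol2    : beam polarizations (e.g. 0.54 and 0.55 for the 2012 run)
        raw = (n_same - rel_lumi * n_opp) / (n_same + rel_lumi * n_opp)
        return raw / (pol1 * pol2)

    # hypothetical jet yields in a single pT bin
    print(a_ll(n_same=1.002e6, n_opp=1.000e6, rel_lumi=1.0, pol1=0.54, pol2=0.55))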
The inclusive jets with transverse momentum p_T and pseudo-rapidity η sample the scattering parton at x ≈ x_T e^{±η}, where x_T = 2p_T/√s. The dijets are able to unfold the initial kinematics, x_1, x_2, and the scattering angle in the parton scattering rest frame, cos θ*, as in Equations 2, 3 and 4 (a minimal numerical sketch of these leading-order relations is given below). Since the kinematics of the two scattering partons are simultaneously determined by the dijet kinematics, they constrain the shape of the polarized gluon distribution function, Δg(x), as a function of x. At √s = 200 GeV, jets are sensitive to Δg(x) at x as low as 0.05 when |η| < 1.0. The new prediction from the DSSV group, who included all the recently published STAR inclusive jet and dijet A_LL results at √s = 200 GeV, gives $\int_{0.01}^{1} \Delta g(x)\,dx = 0.296 \pm 0.108$ at Q^2 = 10 GeV^2 [13]. However, large uncertainties of Δg(x) still exist at x < 0.01. To explore the low-x gluon polarization that is not well constrained by the currently available experimental data, we need to increase √s or extend η forward. In the year 2012, STAR recorded data from 82 pb^-1 of longitudinally polarized pp collisions at √s = 510 GeV, with average beam polarizations for the two beams of 54% and 55%, respectively [14], and R varying from 0.9 to 1.1. The electro-magnetic calorimeter based jet patch triggers, JP0, JP1 and JP2, are optimized to sample three different ranges of jet p_T with thresholds set at 5.4, 7.3 and 14.4 GeV/c. Jets are reconstructed from charged tracks and electro-magnetic towers using the anti-k_T algorithm with the parameter R = 0.5 [15]. An off-axis cone method adapted from the ALICE experiment at the LHC [16] is applied to correct the jet transverse energy for underlying event contributions. It collects particles inside two cones centered at ±π/2 away from the jet in φ and at the same jet η. The correction dp_T is taken as dp_T = ρ̄ × A, where ρ̄ is the average energy density of the two off-axis cones and A is the jet area. This method samples the η dependence of the underlying event activity. To study its contribution to the jet A_LL, we measure the longitudinal double-spin dp_T asymmetry, A_LL^{dp_T}, as in Equation 5. A constant fit through A_LL^{dp_T} as a function of jet p_T shows that the underlying event correction is consistent with zero, as in Figure 1 [17]. Taking ⟨dp_T⟩ × A_LL^{dp_T} as a shift of the jet p_T due to underlying events, where ⟨dp_T⟩ is the average dp_T regardless of beam helicities, we estimate the potential contribution to be at the level of 10^-4, which is assigned as a systematic uncertainty. The systematic uncertainties are studied with an embedding sample in which simulated hard QCD scattering events are embedded into zero-bias events that are randomly taken during the collisions. The exponent parameter that controls the √s-dependent cut-off p_{T,0} in the default Perugia 2012 tune [18] was modified to match the simulated π± spectra to the previously published STAR measurements [19,20] from pp collisions at √s = 200 GeV [21,22]. Figure 2 shows the excellent agreement between data and simulation of the jet p_T spectra for jets satisfying the JP0, JP1 and JP2 trigger requirements. Figure 2. Jet p_T spectra comparison between data (markers) and embedding (lines) for jets satisfying the JP0, JP1 and JP2 triggers separately [17]. Table 1. Dijet topologies, described by the regions of η_3 and η_4. Jets reconstructed from the detector responses in the embedding sample are required to meet the jet patch trigger requirements.
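As a concrete illustration of the dijet unfolding referenced above (Equations 2-4 are not reproduced in the text), here is a minimal sketch of the standard leading-order 2-to-2 relations that recover x_1, x_2 and cos θ* from the transverse momenta and pseudo-rapidities of the two jets; the function name and the example values are illustrative only.

    import numpy as np

    def dijet_kinematics(pt3, eta3, pt4, eta4, sqrt_s=510.0):
        # leading-order parton kinematics from the two jets of a dijet event (GeV units)
        x1 = (pt3 * np.exp(eta3) + pt4 * np.exp(eta4)) / sqrt_s
        x2 = (pt3 * np.exp(-eta3) + pt4 * np.exp(-eta4)) / sqrt_s
        cos_theta_star = np.tanh(0.5 * (eta3 - eta4))  # scattering angle in the parton rest frame
        mass = np.sqrt(x1 * x2) * sqrt_s               # dijet invariant mass
        return x1, x2, cos_theta_star, mass

    # hypothetical dijet with one central and one slightly backward jet
    print(dijet_kinematics(pt3=10.0, eta3=0.8, pt4=8.0, eta4=-0.5))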
Comparing their predicted A_LL as a function of jet p_T with the unbiased parton-level A_LL allows an estimate of the trigger bias and reconstruction correction and of its uncertainty. The 100 equally probable replicas from NNPDFpol1.1 [23], which cover the current uncertainty band of Δg(x), result in a much more precise estimation of the correction and its uncertainty than in the previous measurements at √s = 200 GeV. The STAR 2012 inclusive jet A_LL as a function of parton jet x_T at √s = 510 GeV is presented in the right panel of Figure 3 [17], together with the STAR 2009 results at √s = 200 GeV [10]. Both results agree well in the overlapping x_T region. The new results are also consistent with recent NLO PDF predictions that imply positive gluon polarization [23,24]. The 510 GeV results extend the measurements to lower x_T, which is sensitive to low-x polarized gluons. The sensitivity to Δg(x) reaches x as low as ~0.015, as shown in the left panel of Figure 3 [17]. The dijet events require an opening angle Δφ > 2π/3 and an asymmetric p_T cut, p_{T,3} > 6 GeV/c and p_{T,4} > 8 GeV/c, for the two jets. The combinations of the unfolded x_1 and x_2 constrain the shape of Δg(x). The partonic â_LL depends on cos θ*. Therefore we proposed four η topology binnings, as in Table 1. As expected, we see differences in the measured dijet A_LL for the four η topologies, as in the right panel of Figure 4. The sampled x_1 and x_2 distributions are much narrower than the x_g sampled by inclusive jets, as in the left panel of Figure 4 [17]. The preliminary results for the inclusive jet and dijet A_LL measurements from the STAR 2013 510 GeV pp collisions were released in 2018 [25]. The integrated luminosity is about four times larger than that of the 2012 data set; moreover, dijet jet patch triggers were introduced to favorably capture dijet events. The same procedure has been applied to the 2013 inclusive jet A_LL measurements, and both results agree with each other. We are finalizing the systematic uncertainties before the future publication. Other measurements and STAR forward upgrade Neutral pions, π0, are reconstructed from their decay photons in the forward meson spectrometer. The measured π0 A_LL results at √s = 510 GeV from the STAR 2012 and 2013 data, divided into two η ranges, 2.65 < η < 3.15 and 3.15 < η < 3.90, are very small, less than 5 × 10^-3. The forward η allows access to polarized gluons at x of the order of 10^-3 [26]. The STAR forward upgrade has been fully approved and funded in time for the RHIC 2022 run. It features a forward calorimeter system and a forward tracking system at 2.5 < η < 4.0. The calorimeter includes a hadron calorimeter and an electro-magnetic calorimeter. Silicon disks and small thin gap chambers will be installed for the forward tracking system. The dijet A_LL will be one of the highlighted physics programs for this upgrade, with one or both jets inside the forward region. With both jets inside the forward region at √s = 510 GeV, it becomes possible to sample Δg(x) at x as low as 10^-3, where the current model predictions show large uncertainties. The STAR forward upgrade will also lay the groundwork for the future Electron Ion Collider [27]. Conclusion In summary, the inclusive jet measurements at STAR probe the magnitude of Δg(x) over a wide range of x. The dijet measurements provide additional constraints on the shape of Δg(x).
The first measurements of inclusive jet and dijet A LL at √ s = 510 GeV are sensitive to gluons at x ∼ 0.015. The results are consistent with current model predictions that imply positive gluon polarizations over x > 0.02. The STAR forward upgrade will play an important role in exploring the gluon polarizations at x near 10 −3 , which is loosely constrained by the current world data.
2,675.4
2019-09-26T00:00:00.000
[ "Physics" ]
Business Intelligence Using N-BEATS and RNN Methods and Influence on Decision Making in Flexible Packaging Manufacturing Today's complex decision-making solutions for intelligent manufacturing depend on the ability to model a manufacturing system realistically, to integrate valid and consistent data easily and in a timely manner, and to solve problems efficiently with reasonable computational effort in order to continuously optimize production and product quality. When an organization uses a data-driven approach, it means that it makes strategic decisions based on data collection, analysis, and interpretation or insights. The purpose of this research is to analyze a business intelligence approach to optimizing printing machines by speed, material, and time. This research uses N-BEATS, a deep neural architecture based on backward and forward residual links and a very deep stack of fully connected layers, together with Recurrent Neural Networks (RNNs). The novelty of this research lies in increasing machine speed using new insights obtained by combining two deep learning methods. Raw data from the printing machine process are observed and retrieved via sensors, ensuring the justification for the addition of the new methods. The result is expected to provide new insights that can increase machine speed; data-based decision making provides businesses with the capability to generate real-time insights and predictions to optimize their performance and gives confidence in decision making that is fast, precise, and better informed. I. INTRODUCTION To achieve genuine digital transformation, it is necessary to move beyond the asset-focused business intelligence approach and adopt a more holistic system that integrates Engineering, Operations, and Maintenance [1]. The increasing availability of data in terms of volume, variety, and velocity, along with the increasing capabilities of computing and communications as well as of modeling and solving complex analyses, is driving data into business through a combination of diverse and complementary types of process analysis [2]. The model possesses several desirable properties, including interpretability, adaptability to a wide range of target domains, and fast training. Two N-BEATS configurations, along with a Recurrent Neural Network (RNN), were employed using the Python programming language. The problem discussed in this research is how to optimize the printing machine production process using the N-BEATS and RNN methods, analyzing the errors from the model training process in order to obtain the right parameters in an effort to optimize machine speed [3]. The study focuses on three parameters generated by the machine sensor dataset, namely material type, speed, and processing time, along with the process code for each machine unit [4]. The manufacturing industry is currently undergoing a paradigm shift due to the advancements in Big Data and Machine Learning (ML), which have now progressed to Deep Learning (DL) [5]. This transformation is moving the industry from the traditional manufacturing era to the intelligent manufacturing era 4.0, creating new opportunities. N-BEATS stands for Neural Basis Expansion Analysis for Interpretable Time Series Forecasting [6]. The methodology for the architectural design of this system is based on a set of fundamental principles. Firstly, the architecture should be straightforward and versatile while still being comprehensive.
Secondly, the architecture must not depend on feature engineering or input scaling that is specific to time series [7]. RNNs have been employed with success for various tasks that involve processing sequences of data, such as sentiment analysis, machine translation, time-series forecasting, image captioning, and more [8]. II. RESEARCH METHODOLOGY In this chapter, we explain why quantitative methods were selected and how the empirical research was conducted. We then outline the dataset retrieval design, which includes a brief introduction to the organizations involved in the thesis and the process of collecting the dataset from the machine. Finally, we discuss the method of analysis, the reliability of the research, and any criticisms of the chosen method. A. Research Object The purpose of this research is to improve the accuracy of forecasting predictions using the N-BEATS and RNN methods in computational models. The thesis focuses on these two models and aims to demonstrate their ability to generate new insights through prediction. The dataset comprises machine sensor data in txt format, which is stored in a SQL database. The data are then retrieved in CSV format, and the machine process data for the year 2021 are selected for use in the thesis. This section describes the proposed method for generating forecasts that can provide new insights by leveraging multiple parameters of the sensor data. The model processes the various parameters generated by the sensors to obtain effective forecasting predictions and to extract insights from the data. B. Predicting Forecasting by Machine In Table 1, the "Model" column lists the names of the two models being compared. The other columns show the average values of different evaluation metrics for each model: the "RMSE" column shows the root mean squared error, the "MAE" column the mean absolute error, the "MSE" column the mean squared error, and the "SMAPE" column the symmetric mean absolute percentage error. Table 1 thus shows the average results of the N-BEATS and RNN models for time series forecasting. C. Prediction Accuracy Measures The Root Mean Squared Error (RMSE) and the Mean Absolute Percentage Error (MAPE) are used to assess the adequacy of a model's predictions; these measures have also been used in many other studies (a minimal sketch of these metric computations is given below). III. RESULTS AND DISCUSSION The models used in this study achieved relatively good performance in terms of prediction accuracy. This means that the models developed are able to produce predictions that are close to the true values with a low error rate. In the study, the evaluation results showed that the models had good accuracy and were able to produce accurate predictions. With relatively good performance, the models can provide valuable insights and information for decision-making in the flexible packaging manufacturing industry. However, the N-BEATS model is superior in terms of prediction accuracy, with lower mean values across all evaluation metrics. The results from both models provide valuable insights that can be used in optimizing press speeds in rotogravure printing processes. By identifying the optimal parameters based on the model predictions and increasing the output, companies can improve their productivity and process efficiency. In other words, the use of N-BEATS and RNN models in Business Intelligence analysis enables flexible packaging manufacturing companies to make informed decisions to improve the performance and effectiveness of their presses.
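As a concrete reference for the evaluation metrics named above, the following is a minimal sketch, not the authors' code, of how RMSE, MAE, MSE, and SMAPE can be computed for a pair of forecasts; the sample values are hypothetical.

    import numpy as np

    def forecast_metrics(y_true, y_pred):
        # standard point-forecast error metrics; SMAPE is reported in percent
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        err = y_pred - y_true
        mse = np.mean(err ** 2)
        return {
            "RMSE": float(np.sqrt(mse)),
            "MAE": float(np.mean(np.abs(err))),
            "MSE": float(mse),
            "SMAPE": float(100 * np.mean(2 * np.abs(err) / (np.abs(y_true) + np.abs(y_pred)))),
        }

    # hypothetical machine-speed forecasts from the two models against the observed values
    actual = [120, 135, 128, 140]
    print("N-BEATS:", forecast_metrics(actual, [118, 137, 130, 139]))
    print("RNN:    ", forecast_metrics(actual, [115, 140, 124, 143]))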
The use of Business Intelligence (BI) in the flexible packaging manufacturing industry has significant benefits [9]. With BI, companies can make better decisions based on accurate data analysis and easy-to-understand visualizations. This helps in understanding market trends, predicting product demand, and responding quickly. In addition, BI also enables cost reduction by identifying areas of waste and improving efficiency in the production process [10]. With product quality analysis, companies can improve quality and reduce defects through quick corrective actions. Finally, BI helps improve a company's competitiveness by providing deep insights into markets, customers, and competitive advantages [11]. As noted by Sbrana et al. (2020), this enables companies to take the right steps to meet customer needs and achieve an edge in the flexible packaging market. Business Intelligence (BI) plays an important role in optimizing presses for speed by providing the insights and information needed to improve efficiency and productivity. Here are some of the ways BI can optimize presses based on speed: 1. Data Collection and Integration: BI involves collecting data from various sources such as press sensors, production logging systems, and other systems related to press speeds. These data are then integrated into a single BI system or platform for more holistic analysis. 2. Data Analytics: Through BI data analysis, patterns and trends related to press speeds can be identified. This analysis can involve statistical techniques, predictive modeling, and machine learning algorithms to unearth deep insights into the factors affecting press speeds. 3. Performance Monitoring: BI enables real-time monitoring of press performance. Actual speed data are collected and compared with set targets. If there is any deviation or drop in performance, BI provides notifications or reports that enable the operations team to take quick corrective actions. 4. Process Optimization: Using insights from BI, companies are able to identify the causes of suboptimal printing press speed. This allows them to optimize the production process by making the necessary operational improvements or adjustments. For example, BI can help in identifying bottlenecks, organizing an efficient task sequence, or adjusting machine settings. 5. Planning and Decision Making: BI also helps in long-term planning by providing the information needed for strategic decision-making related to investments in new equipment, preventive maintenance, or changes to more efficient production processes. With a better understanding of press speeds and the factors that affect them, companies can make smarter decisions to improve performance and productivity. Through the application of BI in press management, companies can gain better visibility, identify opportunities for speed improvements, and take the necessary actions to optimize production processes. Thus, BI can help companies achieve higher efficiency, reduce production costs, increase output speed, and bring significant added value [13]. Business Intelligence (BI) also plays an important role in optimizing molding machines based on material by providing the information and insights needed to improve production efficiency and quality [14]. Zeng et al. (2021) outline some of the ways BI can optimize presses by material: 1. Material Availability Analysis: Using BI data, companies can analyze the availability of materials required for the printing process.
These data include material inventory, delivery time, and supplier capabilities. By looking at the data, companies can predict future material demand and optimize inventory so that the production process is not interrupted by material shortages [16]. 2. Material Inventory Management: Using BI analysis, companies can optimize material inventory arrangements. For example, by looking at historical demand data and market trends, companies can determine the optimal inventory levels for each type of material [17]. This helps avoid wastage and excessive storage costs, while ensuring materials are available on time for production. 3. Material Quality Monitoring: BI can help in monitoring the quality of materials used in the molding process. Quality data and indicators can be collected and analyzed in real time to detect any discrepancies or defects in the materials. Thus, companies can take immediate corrective actions, reduce the risk of defects in molded products, and increase customer satisfaction. 4. Material Efficiency Analysis: BI can help companies identify factors that affect material utilization efficiency. Data collected from molding machines and other production systems can be analyzed to identify possible waste, material loss, or imperfections in the production process. By understanding the patterns and causes of material wastage, companies can take action to optimize material usage and improve production efficiency [18]. 5. Planning and Decision Making: Using BI, companies can make better plans related to material procurement, supplier selection, and inventory management. BI data and analysis help in making strategic decisions related to materials, such as evaluating new suppliers, looking for more efficient material alternatives, or adjusting material procurement strategies based on market trends. Business Intelligence (BI) can help optimize presses based on time by providing deep insights into press performance, real-time monitoring, and historical data analysis. Bulatov (2020) describes some of the ways BI can be used to optimize presses based on time: 1. Molding Machine Performance Monitoring: BI enables companies to monitor press performance in real time. Operational data such as print speed, downtime, and reset time can be collected and analyzed in real time. This helps in identifying possible performance issues, such as unplanned downtime or low print speed. With proper monitoring, companies can take quick corrective actions and optimize the uptime of the press. 2. Downtime Analysis: Downtime is the time during which a press is not operating. Using BI analysis, companies can identify the causes and duration of downtime, both planned and unplanned [4]. These data can help in optimizing routine maintenance schedules and minimizing unnecessary downtime. In addition, downtime analysis also helps companies identify contributing factors and take preventive actions to reduce downtime in the future. 3. Demand Prediction: Through the analysis of historical data and market trends, BI can help companies predict future demand for printed products. This information allows companies to conduct better production planning [20], optimize production schedules, and avoid situations where presses experience idle time or overload. By predicting demand with greater accuracy, companies can optimize the use of printing presses and avoid wasting resources. 4. Production Efficiency Analysis: BI allows companies to analyze production efficiency based on time.
Operational data and production parameters such as cycle time, preparation time, and reset time can be analyzed to identify areas where efficiency can be improved. For example, companies can evaluate the optimal cycle time for each printed product or identify activities that take too long in the production process. With this analysis, companies can optimize the use of time and increase the productivity of the printing press. 5. Production Planning and Scheduling: BI helps in efficient production planning and scheduling. By collecting and analyzing time-related data, BI enables companies to create better production schedules based on press capacity, product demand, and resource availability [21]. This helps avoid production time imbalances that can result in machine overutilization or underutilization. IV. CONCLUSION Both models achieved relatively good performance in terms of prediction accuracy. The N-BEATS model outperformed the RNN model in terms of prediction accuracy, with lower average values across all evaluation metrics. The insights gained from these models can be used to optimize machine speed in the rotogravure printing process. By identifying the optimal parameters and increasing the output, the productivity and efficiency of the process can be improved. For future work, hyperparameter tuning is recommended, since hyperparameters such as the learning rate, batch size, number of layers, and number of neurons can significantly affect the performance of the models. Future work can explore different hyperparameters to find the best combination for the given dataset.
3,211.6
2023-06-27T00:00:00.000
[ "Business", "Computer Science" ]
Burgers-like equation for spontaneous breakdown of the chiral symmetry in QCD We link the spontaneous breakdown of chiral symmetry in Euclidean QCD to the collision of spectral shock waves in the vicinity of the zero eigenvalue of the Dirac operator. The mechanism, originating from a complex Burgers-like equation for a viscid, pressureless, one-dimensional flow of eigenvalues, is similar to the recently observed weak-to-strong coupling phase transition in large $N_c$ Yang-Mills theory. The spectral viscosity is proportional to the inverse of the size of the random matrix that replaces the Dirac operator in the universal (ergodic) regime. We obtain the exact scaling function and critical exponents of the chiral phase transition for the averaged characteristic polynomial for $N_c \ge 3$ QCD. We reinterpret our results in terms of known properties of chiral random matrix models and lattice data. Introduction It was recently emphasized that the Burgers equation could be used to understand universal features of the weak to strong-coupling transition in two-dimensional Yang-Mills theory with a large number of colors N_c [1,2,3]. This transition, first studied by Durhuus and Olesen [4], can be pictured, in the language of the Burgers equation, as resulting from the collision of two spectral shock waves at the closure of the gap. The emergence of the Burgers equation in this kind of problem, and in related questions in random matrix theory, seems generic, and can be simply understood by exploiting Dyson's original idea of matrix random walks. Then, after a proper rescaling of time that separates the fast motion of the eigenvalues caused by their mutual repulsion from the diffusion that results from the random walks of the matrix elements, one can easily show, by using standard tools of statistical mechanics, that the average resolvent indeed obeys a Burgers equation (or its simple generalizations) in the limit of large matrix size. Here we shall obtain the Burgers equation from an exact equation, akin to a diffusion equation, satisfied by the average characteristic polynomial. The average resolvent and the average characteristic polynomial are simply related in the large N limit. However, the equation for the characteristic polynomial is exact for any matrix size, while no such equation seems to exist for the average resolvent. Returning to two-dimensional Yang-Mills theory, we note that the average characteristic determinant for a Wilson loop of a given area was indeed shown to obey an exact diffusion equation for any number of colors N_c (which indeed yields a Burgers equation in the limit of an infinite number of colors). For a finite number of colors, the flow of the eigenvalues is viscid, with a negative spectral viscosity $v_s = \frac{1}{2N_c}$. It is the negative sign of the viscosity, or of the diffusion constant, that causes the rapid spectral oscillations at the closure of the gap. This picture was confirmed by extensive numerical studies of full Yang-Mills theory in three dimensions and, recently, in four dimensions [3,5]. In this letter, we demonstrate that a similar mechanism, governed by a similar viscous spectral Burgers-like equation, is responsible for the universal spectral oscillations of the spectrum of the Dirac operator in QCD that accompany the spontaneous breakdown of chiral symmetry [6,7].
In the case of the Durhuus-Olesen transition, where the role of the time is played by the area of the Wilson loop, an explicit construction of a random matrix model was provided by Janik and Wieczorek [8], matrices attached to large loops being formed by multiplying random matrices representing small loops. Here, as already mentioned, following Dyson [9], we add a fictitious time (somewhat analogous to Schwinger's proper time) in order to describe the diffusion of the random matrices in 4-dimensional Euclidean space. Random matrix theory and QCD A cornerstone for the microscopic understanding of the spontaneous breakdown of chiral symmetry is the Banks-Casher [10] relation, $|\langle \bar q q \rangle| = \pi \rho(0)/V_4$, where the quark condensate ⟨q̄q⟩ is an order parameter for chiral symmetry, ρ(0) is the averaged (over the gauge field configurations) level density of the Euclidean Dirac operator near the vanishing eigenvalue, and V_4 ≡ L^4 is the Euclidean volume. This relation shows that chiral symmetry breaking requires a strong accumulation of eigenvalues near zero, i.e. a level spacing ∆ ~ 1/L^4, much smaller than the level spacing ∆ ~ 1/L of a free system [11]. This accumulation of eigenvalues leads to universal properties that are well captured by random matrix theory: for eigenvalues smaller than a characteristic energy scale, referred to as the Thouless scale E_{Th}, the fluctuations of the eigenvalues are described by chiral random matrix models respecting the global symmetries of the Dirac Hamiltonian. In QCD, the condition $E^{\rm QCD}_{Th}/\Delta = F_\pi^2 L^2 \gg 1$, where F_π is the pion decay constant, determines the regime of applicability of random matrix theory [12,13]. In Euclidean QCD, all four Dirac matrices can be chosen to be anti-hermitian, hence the spectrum of the massless Dirac operator D ≡ iγ_µ(∂_µ − igA_µ) is purely imaginary. The partition function, for a fixed topological sector, reads $Z_\nu(\{m_f\}) = \left\langle \prod_f m_f^{|\nu|} \prod_k (\lambda_k^2 + m_f^2) \right\rangle_\nu$, where the averaging is done with respect to gluonic configurations of a given topological charge ν, ±iλ_k are the eigenvalues of D, and m_f is the mass of a quark with flavor f. Due to the chiral symmetry, non-zero eigenvalues of D come in pairs, and the number of fermionic zero modes is related to the topological charge. In the chiral Gaussian random matrix model (hereafter χGUE), corresponding to QCD with N_c ≥ 3, the role of the massless Dirac operator is played by a random matrix W (= −iD) of the form $W = \begin{pmatrix} 0 & K^{\dagger} \\ K & 0 \end{pmatrix}$. Here K is a rectangular M × N (M > N) matrix with complex entries, K_{ij} ≡ x_{ij} + iy_{ij}, where x_{ij} and y_{ij} are drawn from a Gaussian distribution. Note that W is hermitian, so that its eigenvalues κ_i are real. The block structure of (3) reflects the chiral symmetry of the Dirac operator: W anticommutes with the analogue of the Dirac matrix γ_5, defined here as γ_5 = diag(1_N, −1_M). This implies in particular that the eigenvalues come in pairs of opposite values, (κ, −κ). By construction, W has in addition ν ≡ M − N zero eigenvalues. These mimic the zero modes of quarks propagating in gauge fields of non-trivial topology. General spectral properties of the random matrices can be obtained from correlation functions containing both products and ratios of the characteristic polynomial [14], where ⟨...⟩ denotes averaging with respect to the χGUE measure, and the characteristic polynomial is Z(w) = det(w − W). Directly related to (4) is the resolvent, R(z) = ∂_w C(z; w)|_{w=z}, whose imaginary part yields the spectral density.
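To make the χGUE construction concrete, here is a minimal numerical sketch under the block layout assumed above (the explicit form of Eq. (3) is not reproduced in the text, so the placement of K and its conjugate block is an assumption of this sketch). It checks that the spectrum is symmetric about zero and contains exactly ν = M − N exact zero modes; it is an illustration, not a computation performed in the paper.

    import numpy as np

    def chiral_gue(n, nu, rng=np.random.default_rng(0)):
        # n  : number of non-zero eigenvalue pairs (N in the text)
        # nu : topological charge, i.e. number of exact zero modes (M - N)
        m = n + nu
        k = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # complex Gaussian block K
        w = np.zeros((n + m, n + m), dtype=complex)
        w[:n, n:] = k.conj().T  # off-diagonal blocks so that W anticommutes with gamma_5
        w[n:, :n] = k
        return w

    eigs = np.linalg.eigvalsh(chiral_gue(n=50, nu=2))
    print(np.sum(np.abs(eigs) < 1e-10))            # exactly nu = 2 zero modes
    print(np.allclose(np.sort(eigs), np.sort(-eigs)))  # spectrum symmetric about zero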
In the vicinity of zero, the microscopic (unfolded) resolvent for QCD takes a universal form [15] in which b = N_f + |ν|, with N_f the number of quark flavors, and x = zV_4Σ. The appearance of the Bessel functions I_b, K_b in the correlation functions, such as in Eq. (5), is generic. They encode the universal behaviors that show up already in the simplest objects, the average characteristic polynomial and/or the average of the inverse of the characteristic polynomial. As the first result of the paper, we shall obtain an exact differential equation for the averaged characteristic polynomial. This equation is akin to a diffusion equation, from which a Burgers equation can be derived by a simple transformation. Burgers equation and the characteristic polynomial for the Dirac operator We assume now that the entries of the matrix K follow independent random walks. Let us denote by P(X, Y, t) the joint probability that the entries of K take the values X and Y at time t. The random walks of the matrix elements translate into a diffusion equation for P(X, Y, t). We shall be interested in this paper in the time evolution of the averaged characteristic polynomial Q_n^ν(w, t), where n = N + M, w is an arbitrary complex number, dX = ∏_{ij} dx_{ij}, and similarly for dY. In order to get the equation obeyed by Q_n^ν(w, t), we consider first the equation satisfied by the average characteristic polynomial M_N^ν(z, t) of the associated N × N Wishart matrix. To derive the equation for M, we take a time derivative of the expression above. This acts on P(X, Y, t), which, using Eq. (6), we transform into derivatives with respect to x_{ij} and y_{ij}. Then, we integrate by parts, and use standard Grassmann calculus to obtain (after a somewhat tedious but straightforward calculation) the corresponding differential equation. This equation is valid for any N and M, and arbitrary initial conditions. Note that for the trivial initial condition K_{ij}(t = 0) = 0, its solution is given by a time-dependent associated Laguerre polynomial [16]. From the equation for M_N^ν(z, t), Eq. (9) above, one easily obtains the equation for Q_n^ν(w, t). It will also be useful to consider the equation for the Cole-Hopf transform of Q_n^ν(w, t), $f_n^\nu(w, t) = \frac{1}{n}\,\partial_w \ln Q_n^\nu(w, t)$. This object identifies with the average resolvent in the large n limit. After a rescaling of the time, τ = Mt, one gets from Eq. (10) an equation in which we have separated, on the left-hand side, the terms that survive the large n limit, and on the right-hand side the terms that are explicitly suppressed by powers of 1/n. Note the crucial role played by the rescaling of time in arriving at this equation. The motivation behind this rescaling is that the diffusion associated with the random walks takes place over a time scale that is larger, typically by a factor n, than the time scale corresponding to the local rearrangements of the eigenvalues due to their mutual repulsion [9,16]. After rescaling, the diffusion terms are dwarfed by a factor 1/n, and the large n dynamics is dominated by repulsion. The last term, of order 1/n^2, finds its origin in the kinematical zero modes present when ν ≠ 0. Large n limit We consider now the limit n → ∞, with ν constant. We set g(w, τ) = lim_{n→∞} f_n^ν(w, τ). Eq. (11) then reduces to the inviscid Burgers equation [1,17,18], independent of ν, $\partial_\tau g + g\,\partial_w g = 0$. It can be solved using complex characteristics. We choose the system to be initially in a chiral symmetric state, with eigenvalues localized at ±a. The characteristic lines are given by w = ξ + τg_0(ξ), with $g_0(\xi) = \frac{1}{2}\left(\frac{1}{\xi-a}+\frac{1}{\xi+a}\right) = \frac{\xi}{\xi^2-a^2}$. The solution g(w, τ) is constant along the characteristics, meaning g(w, τ) = g_0(ξ(w, τ)).
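The collision of the two spectral shock waves can be visualized numerically. The following is a minimal sketch, assuming the initial resolvent g_0(ξ) = ξ/(ξ² − a²) written above: eliminating ξ from the characteristic equations turns them into a cubic in g, whose physical root gives the spectral density ρ(x) = −Im g(x + i0)/π. Before the critical time τ = a² the density at the origin vanishes; after it, a finite density (the condensate) appears. The branch selection and numerical tolerances are illustrative choices.

    import numpy as np

    def resolvent(w, tau, a=1.0):
        # g = g0(w - tau*g) with g0(xi) = xi/(xi^2 - a^2) is equivalent to the cubic
        # tau^2 g^3 - 2 w tau g^2 + (w^2 - a^2 + tau) g - w = 0
        roots = np.roots([tau**2, -2.0 * w * tau, w**2 - a**2 + tau, -w])
        # physical branch: negative imaginary part just above the real axis
        return min(roots, key=lambda g: g.imag)

    def density(x, tau, a=1.0, eps=1e-6):
        g = resolvent(x + 1j * eps, tau, a)
        return max(-g.imag / np.pi, 0.0)

    # before the collision (tau < a^2) the two humps at +/- a are separated,
    # at tau = a^2 they merge, and for tau > a^2 the density at zero is finite
    for tau in (0.5, 1.0, 1.5):
        print(tau, density(0.0, tau))

For τ = 1.5 and a = 1 this gives ρ(0) = √(τ − a²)/(πτ) ≈ 0.15, consistent with the value of g(w = 0, τ) quoted further below.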
By eliminating ξ, one obtains an implicit equation for g: This equation can be solved by elementary means, and well known results recovered. In fact, the change of variables an equation for a time-independent resolvent G st that has been obtained in this context using different techniques [19,20,21]. In previous studies, the parameter d was introduced as the "deterministic" part of the chiral matrix (with K in Eq. (3) replaced by K + d, d being fixed and K random), in order to control the approach to the chiral transition. The aforementioned change of variables renders transparent the dynamics captured by the Burgers equation: For small time, i.e. τ < a 2 , the spectral density remains localized in humps centered around the values ±a. As time reaches the critical value τ c = a 2 (corresponding to d = 1 in the static approach), the two domains of the spectrum merge at the origin, which we picture as the collision of two spectral shock waves. A finite "condensate" then develops, and chiral symmetry is spontaneously broken. We now recover these features by studying the singularities of the characteristics, and the behavior of the solution in the vicinity of these singularities. This brief discussion will pave the way for the scaling analysis to be performed in the next section. Singularities appear when characteristics start to cross (appearance of a spectral shock wave). This occurs for values of ξ that obey the equation, that is The values of ξ c correspond to the edges of the spectrum when τ < τ c = a 2 . At the critical time τ c = a 2 , where the two humps of the spectrum start to merge, the equation for ξ c admits a double solution at ξ c = 0, which splits into two purely imaginary and opposite solutions when τ > a 2 . In order to study the behavior of g at the edge of the spectrum, we expand g 0 in the vicinity of a singular point When τ < a 2 , g ′ 0 (ξ c ) = −1/τ and, using g 0 = (w − ξ)/τ, we get One can then easily invert the relation between w and ξ, and get (for the rightmost edge) so that which exhibits the familiar square root behavior of the spectrum near its (right) edge. For τ = a 2 , a similar analysis taking into account that g 0 (ξ c ) = 0 = g ′′ 0 (ξ c ), so that the cubic term must be kept, yields (for w c = 0) For time τ > a 2 , we have g(w = 0, τ) = g 0 (ξ c (w = 0, τ)) = − √ a 2 −τ τ , which is imaginary and hence directly proportional to the spectral density ρ(0). The behavior of g(w) at the edge of the spectrum determines the average eigenvalue spacing in the limit of large matrices. A singularity ∼ |w − w c | α yields a level spacing ∼ n −δ with δ = 1/(1 + α). We have therefore δ = 2/3 for τ < a 2 , δ = 3/4 for τ = a 2 and δ = 1 for τ > a 2 . We shall exploit these properties in the next section. Critical properties of the characteristic polynomial In this section we carry out a scaling analysis of the average characteristic polynomial Q(w, τ), or its Cole-Hopf transform f (w, τ), in the vicinity of the singular points. To that aim, we set with γ = 1 − δ, and s and χ(s, τ) remain finite as n → ∞. Bessel universality For τ > a 2 , there is no singular behavior (δ = 1). So, we set w = n −1 s and f ν n = χ. In the large n limit (at ν constant) we obtain, following the same manipulations as above, the following partial differential equation: which integrates to χ 2 + ∂ s χ + χ s − ν 2 s 2 + u(τ) = 0. 
Then, setting χ(s, τ) = ∂ s ln φ(s, τ) we obtain: whose solution is The determination of the arbitrary function u(τ) proceeds as in the previous case, by matching the asymptotic χ(s, τ) ∼ −i √ u(τ) with the large N solution. This gives √ u(τ) = √ τ − a 2 /τ. We recover the scaling of the ratio of spectral densities discussed for instance in [22,23,24]. Bessoid (axially symmetric Pearcey) universality Finally, we move to the case of τ = τ c = a 2 , which will lead to the second new result of this paper. As we have shown above, at τ = a 2 , the two pre-shocks collide. Before the collision, these pre-shocks are accompanied by oscillations of the Airy type. Our purpose now is to describe the modification of the pattern of oscillations for τ close to τ c . To perform this analysis, it is most convenient to start from the diffusion equation obeyed by the characteristic polynomial, i.e., Eq. (10) which, after changing t into τ = Mt, we rewrite as This equation is to be solved with the initial condition Q ν n (w, τ = 0) = w ν w 2 − a 2 N (where the function of w, (w 2 − a 2 ) N is defined with a cut between −a and a). It can be verified by a direct calculation (that does not require the explicit calculation of the integral below), that where C = (−1) ν+1/2 2M, is a solution with the proper initial condition. (For ν = 0, this solution agrees with a known solution [25].) The y-integral runs over a half-line that starts at the origin and goes to infinity, making a constant angle φ y = arg(y) with the real axis, with −π ≤ arg(y) < π. For the integral to be convergent, we require π 4 < |φ y | < 3π 4 . The modified Bessel function I ν (x) had the following asymptotic expansion lim |x|→∞ I ν (x) ≃ 1 √ 2πx e x , valid for | arg(x)| < π 2 (see [26]; here x = 2Myw τ ). This is useful in particular to verify the initial behavior. Indeed, as τ → 0, one may estimate the integral using the saddle point method. The saddle point equation yields y + w = 0, which fixes in particular arg(y) = arg(−w). This new condition for φ y , together with the convergence condition noted above, are easily seen to be compatible with the condition of validity of the asymptotic expansion of the Bessel function. In turn, these conditions limit the allowed arguments of w to π 4 < | arg(w)| < 3π 4 . The integral representation (30) of the characteristic polynomial allows us to study the vicinity of the critical point. We note that the saddle point equation reads (in the large n limit) Identifying y = −ξ, we recognize the equation for the characteristic lines. This indicates how the large n dynamics is coded in this integral. We shall focus more specifically at the critical point, w = 0, y = 0, τ = a 2 . In this regime, we may expand ln(a 2 − y 2 ) ≈ ln(a 2 ) − y 2 a 2 − y 4 2a 4 , and obtain with C ′ = C(−1) N . To capture the critical behavior also as a function of time, we set τ = a 2 + θ. Then Eq. (32) becomes This expression suggests the following change of variables that will ensure a smooth large n limit: We then define where the limit is taken with ν constant. Finally This exact scaling function for the characteristic polynomial is the second important result of this paper. It has a form very similar to the Pearcey function, which gives the asymptotic behavior of the characteristic polynomial in the case when gap closes for GUE [27] or in the case of unitary diffusion on the circle [5] P(q, t) = ∞ −∞ dy exp(−y 4 − ty 2 + qy). 
Here q is the rescaled angle representing the position of the eigenvalue on the unit circle and t parameterizes the fluctuations around the critical area. The critical indices that determine the scaling with n in Eq. (34) are identical to those in the Pearcey integral, but the form of the integral is different. The reason is the chiral symmetry, which imposes an additional polar symmetry on the spectrum, trading the exponential function of q in the Pearcey integral (37) for a Bessel function in Eq. (36). Conclusions In this letter we have obtained an exact differential equation for the average characteristic polynomial of a chiral random matrix, and for its Cole-Hopf transform. For the latter, the equation takes the form of a generalized viscous Burgers equation, where the viscosity is proportional to the inverse of the size of the matrix, but with a negative sign. This allowed us to provide a complete description of the full critical behavior of averaged characteristic polynomials in chiral QCD, based on a single equation. In particular, we considered the case of the chiral Gaussian Unitary Ensemble and identified the exact universal scaling function (the Bessoid B_ν(q, θ)) for the average characteristic polynomial in the vicinity of the chiral critical point. We did not analyze in this letter the properties of the average of the inverse characteristic polynomial, but we have checked that it fulfills similar equations, albeit with initial conditions singular at w = 0, as in the cases of unitary and GUE diffusions. Since F_π scales like √N_c, more and more eigenvalues of the Dirac operator fall into the universal window when the number of colors tends to infinity, the volume of the lattice being kept finite. This suggests that the present study is also relevant for analyzing the spontaneous breakdown of chiral symmetry in finite-volume, large-N_c QCD, which was observed and explained by Neuberger and Narayanan [28]. For small lattice sizes, chiral symmetry is unbroken, while at some critical scale L_c a condensate is formed. The same authors [29] have also observed that the N_c-dependence of the level spacing closest to zero goes from 1/N_c in the broken-chiral-symmetry phase to 1/N_c^(2/3) in the symmetric (gapped) phase. At L = L_c, the critical scaling changes to 1/N_c^(3/4) behavior, and the condensate vanishes at the critical size L_c as √(L − L_c). These results are in agreement with our analysis. The critical universal scaling function (Bessoid) for the Dirac operator at large N_c closely resembles the critical universal scaling function (Pearcey's cuspoid) for the weak-to-strong coupling transition in Yang-Mills theory at large N_c. It would be interesting to study numerically both transitions simultaneously (at least in some simple model like [28]) to see the interplay between the cuspoid and the Bessoid, or, in other words, the relation between the critical size L_c for chiral symmetry breakdown and the critical area ∼ L_c² for the weak-to-strong coupling transition in Yang-Mills theory. Of particular interest would be to measure on the lattice the microscopic spectral density exactly at the point of the transition. As far as we know, the microscopic spectral density at criticality has been constructed explicitly only for the b = N_f + |ν| = 0 case [30], and has never been checked by lattice simulation.
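As a purely illustrative aid to the cuspoid/Bessoid comparison suggested above, the Pearcey-type integral of Eq. (37) can be evaluated by brute-force quadrature for real arguments, since the exp(−y⁴) factor makes the integral rapidly convergent; the truncation range and the sample arguments below are arbitrary.

```python
import numpy as np

def pearcey(q, t, y_max=8.0, n_pts=20001):
    """Evaluate P(q, t) = integral over the real line of exp(-y^4 - t*y^2 + q*y).

    The integrand decays like exp(-y^4), so truncating at |y| = y_max is safe
    for moderate real q and t (illustrative values only).
    """
    y = np.linspace(-y_max, y_max, n_pts)
    integrand = np.exp(-y**4 - t * y**2 + q * y)
    return np.trapz(integrand, y)

# Behaviour around the critical point (q = 0, t = 0) of the cuspoid
for t in (-2.0, 0.0, 2.0):
    row = [pearcey(q, t) for q in (0.0, 0.5, 1.0)]
    print(f"t = {t:+.1f}:", ["%.4f" % v for v in row])
```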
Taking into account the still ongoing discussion on the nature of chiral phase transition, its relation to confinement and Anderson localization [31,32], lattice verification of analytic predictions for microscopic densities at the critical point may be a powerful tool to shed more light on this aspect of strong interactions.
A High-Efficient Low-Cost Converter for Capacitive Wireless Power Transfer Systems Growth of the Internet of Things (IoT) spurs need for new ways of delivering power. Wireless power transfer (WPT) has come into the spotlight from both academia and industry as a promising way to power the IoT devices. As one of the well-known WPT techniques, the capacitive power transfer (CPT) has the merit of low electromagnetic radiation and amenability of combined power and data transfer over a capacitive interface. However, applying the CPT to the IoT devices is still challenging in reality. One of the major issues is due to the small capacitance of the capacitive interface, which results in low efficiency of the power transfer. To tackle this problem, we present a new step-up single-switch quasi-resonant (SSQR) converter for the CPT system. To enhance the CPT efficiency, the proposed converter is designed to operate at low frequency and drive small current into the capacitive interfaces. In addition, by eliminating resistor-capacitor-diode (RCD) snubber in the converter, we reduce the implementation cost of the CPT system. Based on intensive experimental work with a CPT system prototype that supports maximum 50 W (100 V/0.5 A) power transfer, we demonstrate the functional correctness of the converter that achieves up to 93% efficiency. Introduction The rise of the Internet of Things (IoT) has been undergoing a paradigm shift on powering electronic devices.Traditional power transfers based on wired connection or battery replacement have hit a roadblock to support IoT devices that basically operate with tiny capacity batteries and are distributed everywhere.For example, sensor nodes in an IoT wireless sensor network cannot be deployed in hard-to-reach places where wiring is infeasible, batteries cannot be replaced or the need to replace the battery becomes very inconvenient.To overcome the drawback of the traditional power transfer systems, wireless power transfer (WPT) systems have been proposed.Expected to liberate the IoT devices from being wired or replacing batteries, the WPT systems have been receiving considerable interest from both academia and industry as the best-suited technique to power the IoT devices [1,2]. According to the coupling method used in the WTP system, the WTP system can be classified into two types: inductive power transfer (IPT) and capacitive power transfer (CPT).The IPT is based on magnetic coupling between the winding coils, namely this technique uses the magnetic field to transfer power via air between resonators composed of the isolated windings [1,3,4].On the other hand, the CPT is based on electric coupling between two metal plates, so that it uses electric field to transfer power through physical isolation barrier that utilizes the capacitance of capacitive interface [5][6][7][8][9][10][11][12]. Since the possible power transmission distance from the IPT is commonly longer than that of the CPT, the IPT seems to be predominantly used in WTP systems.However, the CPT systems have many important benefits that they are cheaper to implement, less vulnerable to metal barriers and electromagnetic interference (EMI), and more flexible with alignment than IPT systems [12].Thanks to these advantages, the CPT systems have been adopted to various applications, such as biomedical microsystem [5], robot [13], and portable respiratory devices [14], rotating device [15,16] and so forth. 
However, applying the CPT system to IoT devices is still challenging, mainly due to its low efficiency of the power transfer (i.e., scaling down the CPT systems to support small IoT devices operating with tiny capacity batteries is another challenging issue.This is, however, out of scope in this paper.)Because the primary concern of IoT devices has become energy efficiency [17], the low efficient CPT systems cannot be compromised with IoT devices.Therefore, it is strongly required to find solutions to enhance the efficiency of the CPT systems, which is the main objective of this paper.To do that, we perform an analysis in advance why the CPT systems lose efficiency.As a significant amount of power is dissipated from DC-DC converters in many other systems (e.g., multicore platforms [18,19], smartphones [20,21], and display systems [22]), the CPT converter in the CPT system is the main power consumer.Since the capacitance of the capacitive interface is limited by area availability of devices or systems, it is normally difficult to obtain the high capacitance of the capacitive interface [23].The small capacitance of the capacitive interface requires an equipped CPT converter to operate at high resonant frequency, which results in significant increase of switching power loss.In addition, the small capacitance of the capacitive interface causes high equivalent series resistance (ESR) of the interface, which considerably increases conduction power loss.As a result, total power loss of the CPT converter results in critical efficiency degradation.More detailed elucidation regarding to this mechanism will be provided in Section 2. To minimize the power loss of the CPT converter and improve the efficiency of the CPT system, we propose a new step-up single-switch quasi-resonant (SSQR) converter.The proposed SSQR converter is designed to operate at low frequency so as to reduce the switching power loss, and to decrease the driving current into the capacitive interface in order to lessen the conduction power loss.In addition, we get rid of resistor-capacitor-diode (RCD) snubber in the converter, which results in retrenching implementation cost of the converter.Finally, a 50 W (100 V/0.5 A) prototype of the CPT system with the proposed SSQR converter is built to demonstrate the functional correctness and efficiency enhancement of the CPT system.To show the characteristics of the capacitive interface compared to film capacitors, the capacitive interface and a film capacitor are used as the resonant capacitor in the prototype system.The detailed analysis is performed with the prototype, which also takes consideration into the parasitic components of the system.From the intensive experimental work, the result shows that the proposed CPT system achieves the maximum efficiency of 93.93% at 20% load condition. The remainder of this paper is organized as follows.Section 2 provides in-depth analysis of the CPT systems including its power loss mechanism.Section 3 presents the details of the proposed SSQR converter design.Sections 4 and 5 are dedicated for the experimental works that include operational analysis and experimental results based on the prototype development.Section 6 concludes the paper. 
Analysis of CPT Systems Since CPT converters in CPT systems exploit the capacitive interface to transfer power without wires and connectors, the capacitive interface is a core component of a CPT system.In general, the capacitance of the capacitive interface can be expressed as follows: where C I1 and C I2 indicate the capacitances of two pairs of primary and secondary coupling plates (capacitive interfaces) [8,24]; A, 0 , r , and d denote the area of the capacitive interface, dielectric constant, relative dielectric constant, and distance between metal plates, respectively.C I1 and C I2 strongly affect the efficiency of a CPT converter, in that the higher C I1 and C I2 are preferred to design a high efficiency converter in a CPT system.From Equation (1), it is obvious that the capacitance of the capacitive interface is directly affected by A. Unfortunately, A is limited by the available form factor of CPT systems.This is why a number of CPT systems report that C I1 and C I2 are in the order of a few hundred picofarad or a few nanofarad [10,14,16,24].For example, the capacitance between metal plates with a 1/4 mm air gap in [23] is only 3.5 pF/cm 2 .Figure 1 shows the series LC tank in a CPT system that is composed of L r , C I1 and C I2 , where L r is a resonant inductor that is inserted in series into the capacitive interface to compensate its reactance.Meanwhile, the capacitive interface suffers from a voltage stress (V CI1 for the primary interface and V CI2 for the secondary interface) that may be expressed as: where I CI and f r are the capacitive interface current and the resonant frequency, respectively.In general, the capacitive interface has a strict breakdown limit of the voltage stress.From Equation (2), designing f r and C I1 (C I2 ) to be high, but I CI to be small is required in order not to exceed the breakdown limit.However, setting higher f r results in the increase of switching power loss and the need to use higher L r .Instead of rising f r , it should be better to increase C I1 (C I2 ) and/or decrease I CI , which derives V CI1 (V CI12 ) reduction.From Equation (1), increasing A may be the easiest way to increase C I1 (C I2 ).However, the bigger the size of CPT system is, the less versatile it will be.Instead, enhancing dielectric materials such as barium titanate (BaTiO 3 ), lead zirconium titanate (PZT), and polyethylene terephthalate (PET) can be used between the metal plates to increase C I1 (C I2 ).Unfortunately, such dielectric materials have high dielectric loss tangent tanδ, a.k.a.dissipation factor, which increases the equivalent series resistance (ESR) of the capacitive interface R CI1 and R CI2 .R CI1 and R CI2 is described in Figure 2a and can be expressed as [25]: where ω = 2π f r .The increased R CI1 and R CI2 generates more power loss (thereby more heat) in the capacitive interface, which is undesirable.The difficulty to increase capacitance of the capacitive interface due to the technical and spacial limits induces inevitable power loss in the CPT system.The low capacitance of the interface increases f r , thereby switching power loss increases.In addition, the low capacitance of the interface induces the high ESR, that, in turn, increases conduction power loss.More precisely, the switching power loss is linearly proportional to f r that can be formed: where C eq (= C I1 ||C I2 ) is the equivalent capacitance of the interface.The conduction loss is linearly proportional to the equivalent resistance R eq (=R CI1 + R CI2 ) and square proportional to I CI . 
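A back-of-the-envelope sketch (not taken from the paper) helps fix orders of magnitude for the quantities just discussed: the parallel-plate capacitance C = ε0·εr·A/d, the ESR R = tanδ/(ωC), and the voltage stress V = I/(2π·fr·C). The plate area matches the 30 mm x 70 mm plates used later in the prototype, while the operating frequency, loss tangent, PZT permittivity, and drive current are assumed values chosen only for illustration.

```python
import math

EPS0 = 8.854e-12          # F/m, vacuum permittivity

def interface_capacitance(area_cm2, gap_mm, eps_r):
    """Parallel-plate estimate C = eps0 * eps_r * A / d (fringing fields ignored)."""
    return EPS0 * eps_r * (area_cm2 * 1e-4) / (gap_mm * 1e-3)

def esr(c_farad, f_hz, tan_delta):
    """Equivalent series resistance from the dielectric loss tangent, R = tan(delta) / (w*C)."""
    return tan_delta / (2 * math.pi * f_hz * c_farad)

def voltage_stress(i_amp, f_hz, c_farad):
    """Voltage amplitude across one interface capacitor for a sinusoidal current of amplitude i."""
    return i_amp / (2 * math.pi * f_hz * c_farad)

# Illustrative cases: the 3.5 pF/cm^2 air-gap example vs. an assumed PZT-filled interface
for label, eps_r, gap_mm in (("air, 0.25 mm gap", 1.0, 0.25), ("PZT (eps_r ~ 1000), 1 mm plate", 1000.0, 1.0)):
    C = interface_capacitance(area_cm2=21.0, gap_mm=gap_mm, eps_r=eps_r)   # 30 mm x 70 mm plate
    print(f"{label}: C = {C*1e12:.1f} pF,"
          f" ESR @ 200 kHz (tan d = 0.02) = {esr(C, 200e3, 0.02):.1f} ohm,"
          f" V stress @ 0.5 A = {voltage_stress(0.5, 200e3, C):.0f} V")
```

The small air-gap capacitance immediately produces hundreds of ohms of ESR and kilovolt-level stress at these assumed conditions, which is exactly the trade-off the text describes.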
From Equation ( 3), R eq increases with lowering C I1 and C I2 .For reference, C eq and R eq are presented in Figure 2b.In addition, we notice that lowering I CI are helpful for not only reducing the voltage stress from Equation ( 2), but also save power from the conduction loss, i.e., by intuition, I CI can be decreased by increasing the input voltage of the capacitive interface V CI_in .Therefore, we focus on both f r and I CI as an optimization objective in the CPT system.The proposed CPT converter design in the following sections shows our achievement. Step-Up Single-Switch Quasi-Resonant Converter for CPT Systems The driving circuit topologies of the CPT system are generally classified into three types: (i) full-bridge (or half-bridge) inverter [8,9]; (ii) push-pull converter [13,24]; and (iii) single-switch topology with an AC coupling capacitor (Sepic, Zeta, Cuk, and buck-boost converters) [10].Except for the single-switch topology with an coupling capacitor, full-bridge rectifiers are normally required as a rectifier circuit in the driving circuits.For low cost and high efficient CTP systems, the best-suit driving circuit type may be the single-switch topology with AC coupling capacitor, owing to its superiority in the less number of electric components and less power loss compared to the others.However, as aforementioned, this topology requires relatively large equivalent capacitance C eq for the low-frequency operation, which may result in still low efficiency of the power transfer. To improve the efficiency of the single-switch topology with an AC coupling capacitor, we focus on a type of single-switch quasi-resonant (SSQR) converter.In general, the SSQR converter is composed of a switch Q 1 , a transformer T 1 , a resonant capacitor C r , a resistive-capacitor-diode (RCD) snubber, two diodes D 1 and D 2 , and an output filter C O . Figure 3a shows the representative schematic of a step-down (buck) SSQR converter [26].Thanks to the quasi-resonant operation, the SSQR converter reduces the turn-on and turn-off loss from Q 1 and alleviates the reverse-recovery losses from D 1 and D 2 .The RCD snubber is used on the primary side to suppress the high voltage spike of Q 1 caused by the leakage inductance of T 1 and the high input voltage V I N at the switch turn-off transition.Especially in [26], two expensive multi-layer ceramic capacitor (MLCC) resonant capacitors with high capacitance of 22 µF are used for C r in the secondary side, in order for low f r .Since the voltage stress of D 1 and D 2 is clamped to the output voltage V O , the SSQR converter is especially good at step-up (boost) applications.We thus design a step-up SSQR converter for the CPT system, wherein the capacitive interface is used for C r .The schematic of the proposed converter is shown in Figure 3b.This converter operates with small C eq at low f r , and drives small I CI .Since the voltage overshoot at the switch turn-off transition is within a maximum drain-source voltage rating of Q 1 , we eliminate the RCD snubber, thereby saving power.As a result, the step-up SSQR converter transfers power through the capacitive interface with high efficiency. 
The proposed step-up SSQR converter is analyzed by taking account for the primary and secondary leakage inductances of T 1 , L lk1 and L lk2 , respectively, the output capacitor C oss and the internal anti-parallel diodes of the switch Q 1 , and the junction capacitor C j of D 1 and D 2 in detail.Figure 4 shows the schematics of the proposed SSQR converter to be analyzed.The validity of the step-up SSQR converter has been verified on a prototype that supports up to 50 W (100 V/0.5 A) power transfer.To show the characteristics of the capacitive interface, the capacitive interface for CPT systems and a film capacitor for general converters are investigated as a resonant capacitor in the proposed step-up SSQR converter.The results of the analysis and comparison of both capacitive interface and a film capacitor based step-up SSQR converters will be provided in Section 5. Low Resonant Frequency By turning on the switch Q 1 in Figure 4, power is transferred from a voltage source to a load.Then, the resonant current flows through T 1 , Q 1 , C eq , D 2 (not D 1 ) and the load.The components in the secondary side of T 1 can be transformed into their equivalences in the primary side by using the turn ratio n of T 1 .More precisely, the equivalence of the secondary leakage inductance L lk2 on the primary side is almost equal to the primary leakage inductance L lk1 , thus the leakage inductance L lk equals L lk1 = L lk2 .The equivalent capacitance C eq of the capacitive interface is multiplied by n 2 .The transformed schematics is shown in Figure 5a.As a result, the step-up turn ratio n of the SSQR converter makes the small equivalent capacitance C eq of the capacitive interface look very large in the primary side, which results in the low resonant frequency f r .Meanwhile, if the magnetizing inductance L m is much larger than L lk , the equivalent circuit in Figure 5a can be further simplified to the equivalent circuits as seen in Figure 5b,c.The input voltage and the resonant inductance of the resonant tank can be expressed as: From Equation ( 4), f r of the resonant tank of the proposed converter now can be reformulated to: Thanks to the low f r , the step-up SSQR converter operates at a low switching frequency, thereby switching power loss should be reduced. Low Capacitive Interface Current Owing to the step-up turn ratio n of the transformer T 1 , the step-up SSQR converter powers the capacitive interface with low current and high voltage.The conduction power loss (a.k.a.I 2 R loss) of the capacitive interface is proportional to the square of the current but proportional to the ESR of the capacitive interface.Therefore, the effect of the the ESR increase due to the low f r (cf.from Equation ( 3)) to the conduction power loss can be canceled out from decreasing the capacitive interface current.Alternatively, the conduction loss can be further reduced by decreasing the capacitive interface current more aggressively, so that the step-up SSQR converter can transfer power through the capacitive interface with high efficiency. 
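The benefit of the step-up turn ratio can be illustrated with a simple (assumed) series-tank model: reflecting the interface capacitance to the primary multiplies it by n², so f_r = 1/(2π√(L_r·n²·C_eq)) falls as n grows, while the current pushed into the interface scales roughly as the primary current divided by n. The inductance, capacitance, and current values below are placeholders, not the prototype's design values.

```python
import math

def resonant_frequency(l_r, c_eq, n):
    """Series-tank estimate with the interface capacitance reflected to the primary.

    Assumes the tank is L_r in series with n^2 * C_eq (a simplification in the spirit of Figure 5).
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * (n**2) * c_eq))

L_R = 10e-6      # H, assumed leakage/resonant inductance seen on the primary
C_EQ = 5e-9      # F, assumed equivalent interface capacitance (C_I1 in series with C_I2)
I_PRIMARY = 4.0  # A, assumed primary-side resonant current amplitude

for n in (1, 2, 4, 8):
    f_r = resonant_frequency(L_R, C_EQ, n)
    i_interface = I_PRIMARY / n          # secondary (interface) current scales ~ 1/n for step-up
    print(f"n = {n}: f_r = {f_r/1e3:8.1f} kHz, interface current ~ {i_interface:.2f} A")
```

Doubling n halves both the resonant frequency and the interface current in this simplified model, which is the mechanism behind the lower switching and conduction losses claimed above.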
Elimination of the RCD Snubber The voltage overshoot occurs at the switch turn-off transition of the SSQR converter.When Q 1 is turned off, the resonant current flows through the transformer, the output capacitor C oss of the switch Q 1 , C eq , and diode D 1 (not D 2 ).Since this transition is very short, a function of the voltage across the equivalent capacitor for a certain time t, V C eq (t) can be treated as the voltage at t = 0, V C eq (0).Like what we performed the simplification in Figure 5 by assuming L m L lk , here we also can simplify the schematic of the SSQR converter when the switch is turned off to Figure 6.Due to the quasi-resonant operation, the primary leakage inductor current i Llk1 (t) and the secondary leakage inductor current i Llk2 (t) are low at the switch turn-off transition (cf. Figure 4).Therefore, the initial values of the leakage inductor currents are low.Plus, the step-up SSQR converter has the low V I N that is lower than V O .Moreover, for the primary side of the transformer, V C eq (0) and L lk may look small, due to the step-up turn ratio n.As a consequence, the voltage overshoot at the switch turn-off transition is within the maximum drain-source voltage rating of Q 1 .We thus get rid of the RCD snubber, that, in turn, eliminates the corresponding power loss of the SSQR converter and reduces the number of components in the converter. Low Component Counts The CPT converters are conventionally composed of a full-bridge inverter, a resonant inductor, capacitive interface, a full-bridge rectifier, and an output filter.Namely, at least four switches and four diodes are necessary to implement such converters.On the other hand, the proposed SSQR converter requires only a single switch and two diodes as semiconductor components.Therefore, the proposed converter has an edge in cost competitiveness compared to the conventional CPT converters. 
Operational Analysis The proposed SSQR converter has several operational modes in one switching cycle, which can be divided into eight modes according to time t.These eight operational modes are described in Figure 7.The characteristic waveforms corresponding to the eight operational modes are depicted in Figure 8.To facilitate the operational analysis with minimum loss of generality, we assume that (i) a switch Q 1 is ideal except for the inherent output capacitor C oss and the internal anti-parallel diodes; (ii) two rectifier diodes are ideal except for the junction capacitors C j ; and (iii) input voltage V I N and output voltage V O are constant.Note that, in Section 5, the measured result of the operational modes with a prototype will be provided in detail.Before Mode 1 (t ≤ t 0 ) [Figure 7h]: Before t 0 , the switch Q 1 and the diode D 1 in Figure 4 are turned on.The current flows through the transformer, the output capacitor C oss of Q 1 , the equivalent capacitance C eq , and D 1 .The inductances of the transformer resonates with the C oss .At the same time, the V C eq (t) is applied to the transformer, and then the transformer is reset and V C eq (t) starts to increase.As a result, the voltage across the switch V DS (t) increases with V C eq (t) and resonates with inductances of the transformer.The primary leakage inductor current i Llk1 (t) is almost zero.Mode 1 (t 0 < t ≤ t 1 ) [Figure 7a]: This mode begins when Q 1 turns on.The current flows through the transformer, Q 1 , C eq , and D 1 .The inductances of the transformer resonates with the C eq .The i Llk1 (t) and i Llk2 (t) increase according to the quasi-resonant operation.The step-up SSQR converter achieves the quasi-resonant zero-voltage-switching (QR-ZVS).The relation with the currents of the transformer can be expressed as: When i Llk1 (t) meets the magnetizing inductor current i Lm (t), the i Llk2 (t) becomes zero.At this time, Mode 1 ends.Mode 2 (t 1 < t ≤ t 2 ) [Figure 7b]: At t 1 , i Llk2 (t) changes its direction from negative to positive.D 1 turns off.C j of D 1 is charged up and C j of D 2 is discharged.As a result, the voltage across D1, V D1 (t), increases from zero to V O and the voltage across D2, V D2 (t), increases from −V O to zero.D 2 turns on at t 2 , and Mode 2 ends. Mode 3 (t 2 < t ≤ t 3 ) [Figure 7c]: When D 2 turns on, the resonant current flows through the transformer, Q 1 , C eq , D 2 , and the load.The difference between i Llk1 (t) and i Lm (t) is transferred to the load.When i Llk1 (t) meets i Lm (t) again, i Llk2 (t) changes its direction from positive to negative at t 3 .7d]: When i Llk2 (t) becomes negative, D 2 turns off.C j of D 2 starts to be charged up and C j of D 1 is to be discharged.As a result, V D2 (t) decreases from zero to −V O , and the V D1 (t) decreases from V O to zero.D 1 turns on at t 4 , and Mode 4 ends. Mode 6 (t 5 < t ≤ t 6 ) [Figure 7f]: When Q 1 is turned off, C oss resonates with the inductances of the transformer.The resonance makes V DS (t) increase and decrease.At this time, i Llk1 (t) changes its direction from positive to negative.When V DS (t) decrease to zero, Mode 6 ends. 
Mode 7 (t 6 < t ≤ t 7 ) [Figure 7g]: When V DS (t) decrease to zero, the internal anti-parallel diodes of Q 1 is turned on.The current flows through the transformer, the internal anti-parallel diodes of Q 1 , C eq , and D 1 .V C eq (t) is applied to the transformer, and then the transformer is reset and V C eq (t) is to be increased.The inductances of the transformer resonates with C eq .When i Llk1 (t) becomes zero, Mode 7 ends. Mode 8 (t 7 < t ≤ t 8 ) [Figure 7h]: When i Llk1 (t) becomes zero at t 7 , the current flows through the transformer, C oss , C eq , and D 1 .Since V C eq (t) is applied to the transformer, the transformer is still reset.V C eq (t) then increases.At the same time, the inductances of the transformer resonates with C oss .Consequently, V DS (t) increases with V C eq (t) and resonates with inductances of the transformer. Experimental Results To verify the feasibility of the step-up SSQR converter with the capacitive interface, a prototype SSQR converter has been built with the following specifications: the maximum transferrable power = 50 W (100 V/0.5 A), V IN = 24 V, and V O = 100 V.A picture of the prototype converter is shown in Figure 9, and the components used in the prototype are listed up in Table 1.Note that the tolerance on the capacitances and inductances in the table are less than 10%.To make the capacitance of the small size capacitive interface high, we have used the lead zirconium titanate (PZT) for the dielectric material between the two electrodes, as seen in Figure 9.As aforementioned in Section 2, C eq is half the capacitance of the two capacitors C I1 and C I2 .The capacitive interface is composed of four electrodes and two PZT plates (width: 30 mm, length: 70 mm, and thickness: 1 mm).Note that these PZT plates represent the two capacitors in Figure 3b.Meanwhile, the amount of power transfer that the current prototype supports may be much larger than the power demand from small IoT devices operating with tiny capacity batteries (e.g., IoT sensor nodes).Due to the implementation difficulty of CPT systems being suitable for such small IoT devices (i.e., chip-level implementation and measurement techniques may be necessary), we focus on demonstrating the feasibility of the proposed converter design implemented with the board-level (relatively large) CPT prototype.As a pioneer design, the proposed CPT converter can be exploited and scaled down to develop high efficient CPT systems that are appropriate for the small IoT devices.Figure 10 shows the resulting waveforms of the gate signal V GS1 (t), the primary leakage inductor current i Llk1 (t), the voltage across a switch V DS (t), and the voltage across interface 1 V CI2 (t).The laboratory prototype SSQR converter utilizes the resonance between the transformer T 1 and the capacitive interface C I1 and C I2 during the switch conducting interval DT S .The resonant operation is started and finished in DT S .When the switch is turned on, the primary leakage inductor current i Llk1 (t) is almost zero.For this reason, the turned-on loss of the switch is very low.Since the resonant operation shapes the current sinusoidal, and the primary leakage inductor current i Llk1 (t) is low when the switch is turned off, the turn-off loss of the switch is reduced.To regulate the output voltage V O , a PWM control has been used for different load conditions. 
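The near-zero switch current at both transitions can be visualized with an idealized model of the on-interval (a sketch under simplifying assumptions, not the authors' analysis): treating the conducting interval as a plain series L-C resonance starting from zero current, i(t) follows a half-sine, so a conduction time close to the resonant half-period returns the current to nearly zero at turn-off. The component values are placeholders; only the input voltage matches the prototype specification.

```python
import math

# Idealized on-interval of the quasi-resonant tank: i(t) = (V/Z0) * sin(w_r * t), with i(0) = 0
L_R = 10e-6          # H, assumed tank inductance (leakage inductance seen on the primary)
C_R = 320e-9         # F, assumed reflected resonant capacitance (n^2 * C_eq)
V_DRIVE = 24.0       # V, input voltage from the prototype specification

w_r = 1.0 / math.sqrt(L_R * C_R)
z0 = math.sqrt(L_R / C_R)
t_half = math.pi / w_r                    # resonant half-period

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):  # fractions of the conduction interval D*T_S ~ t_half
    t = frac * t_half
    i = (V_DRIVE / z0) * math.sin(w_r * t)
    print(f"t = {t*1e6:5.2f} us  ->  i = {i:6.2f} A")
print("current is ~0 at turn-on and near turn-off, so switching loss in Q1 stays low")
```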
To show the characteristics of the capacitive interface, the film capacitor C r , instead of the capacitive interface, has been also used during the experimental work, as the resonant capacitor in the proposed step-up SSQR converter.Then, we have measured the resulting efficiency of the converter for each case and have performed comparison.When the film capacitor is used, the operation of the prototype converter results in the similar switching frequency f S and duty ratio D to those from the prototype converter with the capacitive interface.Therefore, the waveforms in Figure 11 look similar to the waveforms in Figure 10.As shown in Figure 12, the efficiency of the step-up SSQR converter with the film capacitor for 20%, 50%, and 100% load conditions are 95.52%,95.28%, and 91.44%, respectively.On the other hand, when the capacitive interface is applied instead of the film capacitor, the efficiency of the step-up SSQR converter is reduced as seen in Figure 12.This efficiency degradation may be resulted from the high ESRs of the capacitive interface.Although the step-up turn ratio n of the transformer enables the capacitive interface to drive relatively low current, increasing load current may induce the higher conduction loss due to the high ESRs of the capacitive interface.The rising conduction loss according to increasing load current can result in more heat generation.In other words, the temperature of the capacitive interface may increase as the the load current increases, which is shown in Figure 13.However, the advantages of the proposed SSQR converter far outweigh the disadvantages, in that it still achieves the high efficiency at normal operating conditions with less number of components than the traditional CPT converters.For example, Table 2 shows the comparison results between our proposed method and other methods to achieve the maximum efficiency of a CPT system. Conclusions Conventional CPT converters that transfer power through the capacitive interface have drawbacks that their efficiencies are quite low, and their implementation costs are expensive due to the high component count.More precisely, the most CPT converters suffer from the small capacitance of the capacitive interface, which makes the CPT converters operate at high resonant frequency (thereby switching loss is increased) and have high ESR (thereby conduction loss is increased), finally operating with low efficiency.In addition, a single CPT converter requires a full-bridge inverter, a resonant inductor, capacitive interface, a full-bridge rectifier, and an output filter, thus the implementation cost is nothing to sneeze at. 
In this paper, a high-efficiency and cost-effective step-up SSQR converter is proposed for CPT. The capacitive interface is used as the resonant capacitor of the SSQR converter for low component counts in CPT applications. The proposed SSQR converter operates at a low operating frequency (thereby decreasing switching loss) and minimizes the driving current that flows into the capacitive interface (thereby decreasing conduction loss). In addition, the proposed SSQR converter eliminates the RCD snubber, which is unnecessary for step-up operation, resulting in power saving as well as implementation cost reduction. The validity of the step-up SSQR converter has been verified in the prototype system. For fair comparison, the capacitive interface as well as a film capacitor is used as the resonant capacitor in the proposed converter. According to the intensive and comparative experimental work based on the prototype, the proposed step-up SSQR converter demonstrates its superiority for CPT systems, in that it achieves high efficiency with a lower component count thanks to the low resonant frequency, the low capacitive interface current, and the elimination of the RCD snubber. The experimental results show that a maximum efficiency of 93.93% is achieved at the 20% load condition.
Figure 1. Capacitances of the capacitive interface and the series LC resonant tank in the CPT system.
Figure 2. (a) Capacitive interface and (b) its equivalent circuit in the CPT system.
Figure 3. Schematics of the (a) step-down and (b) step-up single-switch quasi-resonant converter with a resonant capacitor.
Figure 4. Current and voltage analysis of the proposed SSQR converter.
Figure 5. Steps to develop the equivalent circuit of the proposed SSQR converter in Figure 4, when the switch is turned on: (a) first, apply L_lk = L_lk1 = L_lk2, n²C_eq, and R_L/n²; (b) next, merge L_m and L_lk into a single inductor; and finally (c) take account of L_m >> L_lk.
Figure 6. Equivalent circuit of the proposed SSQR converter when the switch is turned off.
Figure 8. Characteristic waveforms of the proposed SSQR converter for step-up gain.
Figure 9. The prototype of the proposed SSQR converter, including the capacitive interface made of lead zirconium titanate (PZT).
Figure 11. Experimental waveforms of V_GS1(t), i_Llk1(t), V_DS(t), and V_CI1(t) when 50% load is applied; a film capacitor is used in this case.
Figure 12. Measured efficiency under various load conditions, from the prototype converter equipped with either the film capacitor or the capacitive interface.
Figure 13. Measured temperature of the capacitive interface under various load conditions.
Table 2. Comparison between our proposed method and other methods.
Solvent effect on the absorption and emission spectra of carbon dots: evaluation of ground and excited state dipole moment Background Carbon dots (C-dots) are photoluminescent nanoparticles with less than 10 nm in size. Today, many studies are performed to exploit the photoluminescence (PL) property of carbon dots, and our focus in this study is to estimate the dipole moment of carbon dots. For reaching our aims, C-dots were synthesized and dissolved in the different solvents. Results Carbon dots with intense photoluminescence properties have been synthesized by a one-step hydrothermal method from a carbon bio-source. In this research, we report on the effect of aprotic solvents on absorption and fluorescence spectra and dipole moments of C-dots dispersed in a range of many aprotic solvents with various polarity and dielectric constant at room temperature. The change in the value of dipole moment was estimated by using the Stokes shifts. The difference between the dipole moment of the excited state and the ground state was shown using an extended form of Lippert equations by Kawski and co-workers. Conclusions The values found for μg = 1.077 D, and μe = 3.157 D, as well as the change in the dipole moments. The results showed that the dipole moment of the excited state is more than the ground state, indicating a high density and redistribution of electrons in the excited state. Finally, the quantum yield of C-dots in the eclectic aprotic solvents was communicated and discussed. Introduction Carbon-based nanomaterials such as carbon nanotubes, fullerene and graphene have poor solubility in water and lack strong fluorescence in the visible area, limiting their applications [1]. These shortcomings can be addressed by carbon dots (C-dots) which are spherical carbon-based nanoparticles with a size of less than 10 nm [2]. C-dots are heavily fluorescent, non-blinking, water soluble, chemically stable and can be easily synthesized at low cost [3,4]. Also, introduction of full-color fluorescent C-dots [5] is another advantage that can expand their application spectrum. C-dots was first discovered by electrophoretic purification of single-walled carbon nanotubes in 2004 [6]. In recent years, different materials and synthesis methods have been used to obtain C-dots. The synthesis approaches of C-dots can be classified into two categories: top-down and bottom-up methods including hydrothermal [7,8], electrochemical oxidation [9], acidic oxidation [10], microwave [11,12], and laser ablation [13]. Owing to their physicochemical properties, C-dots can take part in the chemiluminescence reaction as oxidants, emitting species, energy acceptors of chemical reaction energy or even as catalyst involving in different chemiluminescence systems [14,15]. For the first time, Shen and et al. fabricated chemiluminescence C-dots and used it to develop new class of CL nanosensors for the imaging Reactive oxygen species [16]. Also C-dots have gained widespread attention in recent years, especially in chemical censoring [17], biosensing [18], bioimaging [19], drug delivery [20], solar cells [21], light-emitting diode (LED) [22], and electrocatalysis [23]. C-dots are easily dispersed in protic and aprotic solvents due to carboxyl, hydroxyl, and carbonyl groups. The interaction between C-dots and solvent plays an essential role in the wavelength of photoluminescence emissions. In summary, no single theory can be used for a quantitative explanation of the effects of the environment on fluorescence. 
Explanation of these effects depends not only on polarity considerations but also on the structure of the C-dots and the types of chemical interactions they can experience with nearby molecules. Kumar et al. [24] reported a solvent-dependent spectroscopic study of fluorescent carbon nanoparticles in organic solvents. They found that the absorption spectra of the nanoparticles were independent of the solvent nature, while their photoluminescence spectra were considerably dependent on it. The trends observed with solvent polarity follow the theory of general solvent effects, which may give the impression that solvent polarity is the only factor to consider. Solvent-solute interactions and the influence of the solvent environment are investigated by considering the effect of various solvent parameters, such as hydrogen-bond donating ability, hydrogen-bond accepting ability, and polarizability, on the dipole moment [25,26]. Determining the dipole moments is crucial because they explain how the electron distribution changes upon excitation. Suppan has shown that the most suitable method to approximate the excited-state dipole moment of a solute involves the simultaneous analysis of the absorption and fluorescence spectra of the solute in a range of solvents [27]. C-dot fluorescence properties are complicated by their dependence on excitation wavelength [28] and solvent nature [24,29]. Because the functional groups on the C-dot surface are accessible to solvent molecules, strong interactions between the C-dots and solvent molecules can have a significant effect on the fluorescence. We use the solvatochromic method, in which the ground- and excited-state dipole moments are calculated and their difference is then examined in a variety of aprotic solvents. Pursuing our previous work on protic solvents [35], we aim to understand the intermolecular interactions between the synthesized C-dots and aprotic solvents. The focus is to study the spectral changes of C-dots in aprotic solvents by using the Kamlet-Abboud-Taft linear solvation energy concept. The Kamlet-Abboud-Taft equation is one of the most reliable methods for quantifying solvent effects on dissolved C-dots; it relates the solvent polarity parameters to the solute's spectral features [36]. Materials C-dots were synthesized by hydrothermal treatment of persimmon peel. All the solvents used in this research were of the highest purity available from Merck. The physical properties and polarity functions of the solvents are given in Table 1; spectroscopic polarity parameters in the various aprotic solvents are provided in Table 2. Synthesis and characterization of C-dots The green C-dot synthesis method, and the characterization and structure of the product, were thoroughly described in our previous work [38]. Briefly, the C-dots were synthesized from persimmon peels by hydrothermal treatment. In the first step, persimmon was cut into pieces and ground into a pulp. Then, 50 mL of ultrapure water was added, the mixture was kept for 15 min under magnetic stirring, and the obtained juice was autoclaved in a Teflon-lined stainless-steel autoclave reactor at 120 °C for 150 min. The autoclave was allowed to cool to room temperature, and the resultant dark brown solution was centrifuged at 10,000 rpm for 20 min to separate the larger particles.
In the next step, the pH of the aqueous solution was adjusted to neutral with 1 M NaOH, and the C-dot solution was filtered through a 0.22 μm membrane. In the final step, the C-dot solution was further purified by dialysis (1000 MWCO) against deionized water for 24 h. The C-dot powder was obtained by lyophilization for 48 h and stored at 4 °C until further use. C-dots with an average size of 2 nm were obtained. Elemental analysis with a CHN analyser, FE-SEM imaging, and FTIR spectroscopy confirmed the nitrogen- and carbonyl-containing functional groups on the surface of the C-dots. Accordingly, for further spectroscopic analyses, a 0.1% (w/v) C-dot solution was prepared in each aprotic solvent by mixing for 4 h to obtain homogeneous solutions. Absorption and emission spectroscopy UV-Vis absorption spectra of the C-dot solutions in the different aprotic solvents were recorded on a CECIL CE7250 spectrophotometer with a 1 cm quartz cuvette at room temperature over the wavelength range 200-600 nm. Photoluminescence (PL) measurements were undertaken with a Cytation 5 (BioTek, USA) fluorescence spectrophotometer, with the excitation and emission slits set to a 1 nm bandpass, in 96-well plates. Results and discussion The theory of universal solvent effects provides useful guidance for interpreting solvent-dependent spectral shifts. In this framework, the C-dot is treated as a dipole in a continuous medium of uniform dielectric constant. The interactions between the solvent and the C-dots affect the energy difference between the ground and excited states and depend on the orientation polarizability of the solvents. The dipole moment represents the electron distribution in a molecule with a specific structure. In combination with the reaction field around it, the dipole moment plays an essential role in the transitions of a molecule, because a molecule can absorb light when its dipole moment changes. The absorption and fluorescence emission spectra of the C-dots were recorded at room temperature in a range of aprotic solvents of different dielectric constant and refractive index.
Fig. 1. UV-Vis absorption spectra of C-dots in aprotic solvents.
Depending on the solvent polarity, the aprotic solvent used influenced the positions, intensity, and shape of the UV-Vis absorption and fluorescence emission bands of the solvent-C-dot complex. The UV-Vis absorption spectra of the C-dots show a maximum at 237-256 nm with a tail extending into the visible range (Fig. 1). This is attributed to the n-π* transition of the C=O band and the π-π* transition of the C=C band. The mechanism behind the PL behaviour is not yet fully understood, and our recent studies in calculating the ground- and excited-state dipole moments are a step forward in understanding the mechanism of this effect in C-dots. One possible reason for the PL behaviour is the presence of different particle sizes of the C-dots; in addition, the distribution of surface energy traps, the nature of the surface, and the numerous functional groups on the C-dot surface may result in a series of emissive traps between the π and π* levels of C-C [37]. The results obtained from the absorption and fluorescence spectra in Fig.
1 and 2 shows that the displacement observed in the absorption and emission spectra of C-dots indicates the dependence of C-dots on solvent polarity, which means that the change of solvent polarity displacement in the emission spectra is relative to the absorption spectra. Estimation of the ground state and excited state dipole moments In order to approximate the ground state and excited state dipole moments of the C-dots, spectral shifts (ῡ A − ῡ F ) and (ῡ A + ῡ F ) of fluorescence C-dots were calculated along with solvent polarity (ν A and ν F are the wavenumbers (cm -1 ) of the absorption and emission). The result demonstrates that the excited dipole moment is larger during the C-dots electronic transition than the ground dipole moment, i.e. μ e > μ g. Therefore, the dipolar solvent polarisation, the Franck-Condon excited state, is more solvated foremost to the experiential redshift in the spectrum. To a first estimate, this energy difference (in cm -1 ) is a property of the refractive index (n) and dielectric constant (ε) of the solvent, and is described by the Lippert-Mataga [30,31] equation as below: In this equation h = 6.6256 × 10 -27 ergs is Planck's constant, c = 2.9979 × 10 10 cm/s is the speed of light, and a is the radius of the cavity in which the fluorophore resides. In this equation, the opposite effects of ν A and ν F on the Stokes shift are significant. As the refractive index (n) increases, this energy difference decreases, whereas an increase in ε results in a larger difference between ν A and ν F . The refractive index is a rapid frequency response that depends on the motion of electrons in solvent molecules that occur when light is absorbed. In contrast with the refractive index, the dielectric constant is a static and steady feature that depends on the electrons and molecular motions of the solvents' organization around the excited state. Increasing the refractive index (n) of the ground and excited states is quickly stabilized by the motion of electrons in solvent molecules. This redistribution of electrons reduces the energy difference between ground and excited states. Lippert-Mataga framework, there is no consideration of specific interaction with solvent. Thus, several investigators attempted to extend and modify the Lippert equation. Kawski and co-workers [38][39][40] obtained a simple quantum mechanical second-order perturbation theory for absorption (ν A ) and fluorescence (ν F ) band shifts. By variation of ε and n in solvents, as explained below, functions f (ε, n) and g(n) refer to Bakhshiev [41] and Kawski-Chamma-Viallet [33,34] relations, respectively. Consequentially the solvent dependent changes for the difference and sum of ν A with ν F have been defined by the following equations: where: The parameters m 1 and m 2 can be determined from absorption and fluorescence band shifts (ῡ A − ῡ F ) and (ῡ A + ῡ F ) using the following equations: 14 kK, indicating a chargetransfer transition. The surface external of C-dots can also donate a proton to the aprotic solvent. This interaction influences the emission, notwithstanding the strong hydrogen accepting capability of these solvents from DMSO to dioxane, which is compared by large β Kamlet-Taft parameters (β parameters listed in Table 1) β = 0.76 and 0.37, respectively. The significant difference in the Stokes shift shows that the structural geometry of the excited state is different from the ground state. 
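Because the working equations did not survive extraction above, the following sketch spells out commonly used forms of the Bakhshiev and Kawski-Chamma-Viallet polarity functions and the Kawski-type relations μg = ((m2 − m1)/2)·√(hca³/2m1) and μe = ((m2 + m1)/2)·√(hca³/2m1). It is a hedged reconstruction based on the standard solvatochromic literature, not a verbatim copy of the paper's Eqs. (1)-(9); the solvent spectral data and the Onsager cavity radius are placeholders, and the absolute dipole moments scale strongly with the assumed radius.

```python
import numpy as np

def f_bakhshiev(eps, n):
    """Bakhshiev solvent polarity function f(eps, n) (commonly used form)."""
    return (2*n**2 + 1) / (n**2 + 2) * ((eps - 1)/(eps + 2) - (n**2 - 1)/(n**2 + 2))

def g_kcv(n):
    """g(n) entering the Kawski-Chamma-Viallet combination f(eps, n) + 2*g(n)."""
    return 1.5 * (n**4 - 1) / (n**2 + 2)**2

# Placeholder data: (dielectric constant, refractive index, abs max / nm, em max / nm)
solvents = {
    "dioxane":       (2.25, 1.422, 250.0, 430.0),
    "ethyl acetate": (6.02, 1.372, 252.0, 440.0),
    "acetone":       (20.7, 1.359, 254.0, 450.0),
    "DMSO":          (46.7, 1.479, 256.0, 462.0),
}

nu_a = np.array([1e7 / v[2] for v in solvents.values()])     # wavenumbers in cm^-1
nu_f = np.array([1e7 / v[3] for v in solvents.values()])
f1 = np.array([f_bakhshiev(v[0], v[1]) for v in solvents.values()])
f2 = np.array([f_bakhshiev(v[0], v[1]) + 2*g_kcv(v[1]) for v in solvents.values()])

m1 = np.polyfit(f1, nu_a - nu_f, 1)[0]          # slope of the Stokes shift vs f(eps, n)
m2 = -np.polyfit(f2, nu_a + nu_f, 1)[0]         # minus the slope of (nu_a + nu_f) vs f(eps, n) + 2g(n)

h, c = 6.626e-27, 2.998e10                       # erg*s and cm/s (cgs units, as in the paper)
a_onsager = 2.0e-8                               # cm, assumed effective Onsager cavity radius
scale = np.sqrt(h * c * a_onsager**3 / (2 * abs(m1)))
mu_g = 0.5 * (m2 - m1) * scale / 1e-18           # convert esu*cm to Debye
mu_e = 0.5 * (m2 + m1) * scale / 1e-18
print(f"m1 = {m1:.0f} cm^-1, m2 = {m2:.0f} cm^-1, mu_g = {mu_g:.2f} D, mu_e = {mu_e:.2f} D")
```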
Table 2 shows the increase in Stokes shift with increasing aprotic-solvent polarity, indicating an increase in the dipole moment in the excited state. The spectral shifts (ν̄_A − ν̄_F) and (ν̄_A + ν̄_F) of the C-dots, plotted against the polarity functions f(ε, n) and f(ε, n) + 2g(n), are illustrated in Figs. 3 and 4. The slopes, intercepts, and correlation coefficients of these best-fit lines are listed in Table 2. The slopes m1 and m2 of the fitted lines are presented in Table 3; from Figs. 3 and 4 they were estimated to be m1 = 696.4 cm−1 and m2 = 1417.8 cm−1. The ground- and excited-state dipole moments were calculated using Eqs. (8) and (9), respectively, and the values are listed in Table 3. The values found are μg = 1.077 D and μe = 3.157 D, so the change in dipole moment (Δμ = μe − μg) is 2.08 D. The obtained values show that the excited-state dipole moment (μe) is greater than the ground-state dipole moment (μg), indicating that these C-dots, treated as dyes, are more polar in the excited state. Effect of aprotic solvent on the absorbance and fluorescence spectra The typical fluorescence and absorption spectra of the C-dots in different aprotic solvents are shown in Figs. 2 and 3, respectively. The emission spectra of the C-dots are broad, with shifts depending on the solvent. A larger spectral shift is apparent in the fluorescence spectra than in the absorption spectra. The smaller spectral shift in absorption than in emission, together with the longer residence time of the excited state, indicates two phenomena. Firstly, the dipole moment of the excited state is greater than that of the ground state in all the aprotic solvents studied. Secondly, the energy level of the first excited state, S1, is stabilized relative to the ground state, S0, by solvation as the solvent polarity increases. These phenomena result in a redshift, or bathochromic shift, of the fluorescence. The solvent effect was treated within the framework of the linear solvation energy relationships (LSER) established by the Kamlet-Taft [42] multivariate regression, in which specific and non-specific interactions each make a linear contribution to the total solvation energy: E_T = A0 + sπ* + aα + bβ (11). Here the coefficients π*, α, and β are the Kamlet-Taft (KAT) solvatochromic parameters, developed for scaling the dipolarity/polarizability, the hydrogen-bond donor acidity, and the hydrogen-bond acceptor basicity of the solvent, respectively [36,43,44]. A0, a, b, and s are regression coefficients quantifying the sensitivity of the E_T values to the acidity, basicity, and dipolarity/polarizability, respectively; Table 3 lists the regression fits of the solvatochromic polarity scales to the Stokes shift of the C-dots. The fit parameters are presented in Table 4. The emission-spectrum energies were obtained using Eq. (11), and the molar electronic transition energies, E_T, of the C-dots in the solvents were calculated using Eq. (12). The results related to the KAT parameters in fluorescence show that the primary contribution for the C-dots comes from polarization interactions. Also, for the as-prepared C-dots, the coefficients of π* and β are negative, indicating hydrogen-bond accepting ability and polarization. To make the data in Table 4 comparable, we have transformed the values into the percentage contributions given in Table 5. In this study the focus is on the solvatochromic behavior of C-dots in solvents of low dielectric constant; hence, the correlations of E_T with π*, α, and β in the aprotic solvents are shown in Fig. 5.
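The Kamlet-Abboud-Taft fit of Eq. (11) is an ordinary multilinear least-squares problem; the sketch below shows one way to extract A0, s, a, and b and the percentage contributions with numpy. The π* and β values echo representative literature numbers (including β = 0.76 for DMSO and 0.37 for dioxane quoted earlier), while the E_T values are placeholders rather than the measured data of Tables 1-5.

```python
import numpy as np

# Placeholder data: rows of (pi*, alpha, beta, E_T) for a few aprotic solvents
data = np.array([
    # pi*   alpha  beta   E_T (illustrative values only)
    [0.55,  0.00,  0.37,  52.1],   # dioxane
    [0.55,  0.00,  0.45,  51.6],   # ethyl acetate
    [0.71,  0.08,  0.43,  50.8],   # acetone
    [1.00,  0.00,  0.76,  49.9],   # DMSO
    [0.27,  0.00,  0.47,  52.8],   # diethyl ether
])

X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1], data[:, 2]])  # [1, pi*, alpha, beta]
y = data[:, 3]

coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
A0, s, a, b = coef
print(f"E_T = {A0:.2f} + ({s:.2f})*pi* + ({a:.2f})*alpha + ({b:.2f})*beta")

# Percentage contributions, in the spirit of Table 5
weights = np.abs([s, a, b])
print("contributions (%):", np.round(100 * weights / weights.sum(), 1))
```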
As indicated in Fig. 5, the observed linear dependences may arise from both specific and non-specific interactions. The plot of the emission-maximum E_T against π* for the range of aprotic solvents is shown in Fig. 5a; it indicates reasonable linearity, with r = 0.7, between E_T and π* across all aprotic solvents from DMSO to dioxane. Plotting E_T against α and β instead gave poor linear dependences, with r = 0.02 between E_T and α and r = 0.3 between E_T and β over the same range of aprotic solvents. The marked deviation for the α parameter can be explained by the use of aprotic solvents, which have low acidity; in most cases α was zero (Fig. 5a and b). In contrast to our previous work, when only the aprotic solvents are considered the results clearly show that the main contribution to the solvatochromism is not related to the α scale of solvent HBD acidity or the β scale of solvent HBA basicity. Instead, the relationship is associated with the parameter π*, which describes the polarity/polarizability of the solvents and is a measure of the stabilization of a charge or dipole by the dielectric effect [45]. The results therefore show that non-specific (dipole-dipole) interactions play the major role in the solvatochromism of the n-π* transition from the edge band/edge states of the C-dots [44].

The quantum yield of the C-dots dissolved in the aprotic solvents was determined at an excitation wavelength of 350 nm using Eq. (13):

Q_CDs = Q_R × (I_CDs/I_R) × (A_R/A_CDs) × (η_CDs²/η_R²)    (13)

where Q is the quantum yield, I is the integrated intensity of the fluorescence spectrum, A is the absorbance at the excitation wavelength, and η is the refractive index of the solvent; the subscript 'CDs' refers to the carbon dots and 'R' to the reference. Quinine sulfate (quantum yield 0.5) in 0.1 M H₂SO₄ (η = 1.33) was used as the reference solution [46,47]. The absorbances were kept at a minimum (< 0.05), and the quantum yields were determined by comparing the integrated fluorescence intensities at the 350-nm excitation wavelength using Eq. (13). The quantum yields calculated for the C-dots are given in Table 1. According to these results, the highest quantum yield among the studied aprotic solvents was obtained in diethyl ether (0.53). This high value can be attributed to diethyl ether having the lowest acceptor number (AN) among the aprotic solvents studied. To explain this further, it is assumed that a combination of other factors, such as the carbon-core domain, also affects the emission. However, solvatochromism deals with the effects of the surface groups, and in organic solvents the effect of specific interactions such as hydrogen-bond donation is minimal. Hence, if we limit our attention to the electron-donating and electron-accepting characters of the constituents, the functional groups on the surface of the C-dots containing -NH₂ and -OH groups act as electron donors in their excited states. The surface energy traps are thus stabilized in less polar solvents with lower AN values, which ultimately promotes the emission efficiency [29,49]. At a practical level, further studies of the optical properties of C-dots, including the quantum yield, are warranted in a wide range of solvents, some of which are potentially useful in biological applications. The results in Table 1 illustrate this trend.
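A minimal sketch of this comparative (single-reference) quantum-yield calculation is given below. The function simply encodes Eq. (13); the intensity and absorbance numbers in the example call are hypothetical placeholders, not measured values from Table 1.

```python
def relative_quantum_yield(I_cd, A_cd, n_cd, I_ref, A_ref, n_ref=1.33, Q_ref=0.5):
    """Comparative quantum yield, Eq. (13):
    Q_CD = Q_R * (I_CD/I_R) * (A_R/A_CD) * (n_CD^2 / n_R^2).
    I: integrated fluorescence intensity, A: absorbance at the excitation
    wavelength (kept < 0.05), n: refractive index; the reference is quinine
    sulfate in 0.1 M H2SO4 (Q_R = 0.5, n_R = 1.33)."""
    return Q_ref * (I_cd / I_ref) * (A_ref / A_cd) * (n_cd**2 / n_ref**2)

# Hypothetical measurement for C-dots in diethyl ether (n = 1.353);
# the intensities and absorbances below are placeholders, not measured data.
Q = relative_quantum_yield(I_cd=7.3e5, A_cd=0.042, n_cd=1.353,
                           I_ref=7.5e5, A_ref=0.045, n_ref=1.33, Q_ref=0.5)
print(f"Estimated quantum yield: {Q:.2f}")
```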
Consistent with our previous study [35], solvents with a greater AN (whose higher electron-accepting character is reflected in their α values) were more effective quenchers.

Conclusions

In summary, in this work we dissolved C-dots in various aprotic solvents to explore specific solvation effects. The C-dot-solvent interaction is governed by several factors, chiefly the polarizability and basicity parameters of the solvent. Calculations of the dipole moments showed that the excited-state dipole moment is larger than that of the ground state. The KAT analysis of the fluorescence indicates the occurrence of a bathochromic shift of the emission. One of the grand challenges in studying C-dots for biological applications and bio-imaging is to increase the duration and intensity of their fluorescence. As a step toward the more efficient use of C-dots in this field, we evaluated their quantum yields; the resulting values indicate significant improvements of the C-dots' quantum yields in some aprotic solvents.
Formulae for generating standard and individual human cone spectral sensitivities Abstract Normal color perception is complicated. But at its initial stage it is relatively simple, since at photopic levels it depends on the activations of just three photoreceptor types: the long‐ (L‐), middle‐ (M‐) and short‐ (S‐) wavelength‐sensitive cones. Knowledge of how each type responds to different wavelengths—the three cone spectral sensitivities—can be used to model human color vision and in practical applications to specify color and predict color matches. The CIE has sanctioned the cone spectral sensitivity estimates of Stockman and Sharpe (Stockman and Sharpe, 2000, Vision Res) and their associated measures of luminous efficiency as “physiologically‐relevant” standards for color vision (CIE, 2006; 2015). These LMS cone spectral sensitivities are specified at 5‐ and 1‐nm steps for mean “standard” observers with normal cone photopigments and average ocular transparencies, both of which can vary in the population. Here, we provide formulae for the three cone spectral sensitivities as well as for macular and lens pigment density spectra, all as continuous functions of wavelength from 360 to 850 nm. These functions reproduce the tabulated discrete CIE LMS cone spectral sensitivities for 2‐deg and 10‐deg with little error in both linear and logarithmic units. Furthermore, these formulae allow the easy computation of non‐standard cone spectral sensitivities (and other color matching functions) with individual differences in macular, lens and photopigment optical densities, and with spectrally shifted hybrid or polymorphic L‐ and M‐cone photopigments appropriate for either normal or red‐green color vision deficient observers. | INTRODUCTION Color perception depends on complex spatial and temporal interactions occurring within serial and parallel neural processing networks, but the initial stage of processing is relatively simple since it depends only on the relative activations of three types of cone photoceptor: the long-(L-), middle-(M-) and short-(S-) wavelength-sensitive cones.The three cone types all contain the same chromophore, 11-cis retinal, but bound to different photopigment opsins. Upon photon absorption, 11-cis retinal changes isoform to all-trans retinal and thereby initiates the phototransduction cascade.The probability of photon absorption as a function of wavelength and thus the cone spectral sensitivity is modified by the photopigment opsin. Measured with respect to the sensitivity to lights of different wavelengths entering the eye at the cornea, the "corneal cone spectral sensitivities" depend on the cone spectral sensitivities at the retina and the filtering by the lens and macular pigments through which the light must pass before reaching the photopigment; for review, see Reference [1].All these factors are subject to individual variability, so that different observers viewing the same scene may make different color matches and perceive different colors.Here, we provide a simple means of computing corneal cone spectral sensitivities that can incorporate individual differences and thus predict color matches for non-standard observers. The Commission Internationale de l' Eclairage (CIE) sanctioned the cone spectral-sensitivity estimates of Stockman and Sharpe 2 and their associated measures of luminous efficiency 3,4 as "physiologically-relevant" standards for color vision. 
5,6The standards are tabulated for 2-deg and 10-deg visual fields at 5-nm steps in CIE publications 5,6 and at 0.1, 1, and 5-nm steps on www.cvrl.org(at up to eight decimal places to reduce rounding errors in calculations).They are arguably the most secure estimates of mean human cone spectral sensitivities available for modeling human color vision since they are based on observers of known genotype. 7They are called the standard cone fundamentals for reasons explained later.They follow a long history of cone spectral sensitivity estimates, the first plausible estimates of which were obtained in the 19th century by König and Dieterici. 8otable estimates since then include those by Bouma, 9 Judd, 10,11 Wyszecki and Stiles, 12 Vos and Walraven, 13 Smith and Pokorny, 14 Vos, 15 Estévez, 16 Vos, Estévez and Walraven, 17 and Stockman, MacLeod and Johnson. 18hese studies generally aimed to exclude anomalous observers; that is, observers known now to have L/M hybrid opsin genes the spectral sensitivities of which are shifted and lie between the standard L-and M-cone fundamentals.Stockman and Sharpe 2 were able to exclude anomalous observers on molecular genetic grounds. Here, we develop mathematical descriptions of the standard cone fundamentals that are continuous functions of wavelength and can replace the discrete, tabulated functions with very low residual error (see Figure 3, below).These continuous functions relate to the underlying photopigment absorbance spectra and thus can incorporate prereceptoral filtering, so enabling fundamentals with normal or shifted λ max values (the wavelength at which cone sensitivity is maximal) and different macular, lens, and photopigment optical densities to be easily generated. We start by generating continuous representations that define the cone spectral sensitivities for the mean standard normal observer for 2-and 10-deg vision.Our strategy is to fit functions to the L-, M-and S-cone photopigment absorbance spectra and the standard lens and macular optical density spectra and from them to calculate the corneal cone fundamentals (the corneal cone spectral sensitivities). We will consider the representation of the cone fundamentals in other color spaces in a separate section below. 1.1 | Glossary 2-deg: Small-field colour matches made with centrally viewed, circular fields subtending 2-deg diameter of visual angle. 10-deg: Large-field colour matches made with centrally viewed, circular fields subtending 10-deg diameter of visual angle. Chromaticity coordinates: l, m, which in terms of the tristimulus values are L/(L + M + S) and M/(L + M + S), respectively (r, g for RGB space, or x, y for XYZ space). CIE: Commission Internationale de l' Eclairage or International Commission on Illumination.An organization that sets international standards for colour and lighting. Colour match: A subjective match between two lights, typically side-by-side semi-circular fields, of different spectral power distributions (which are consequently metamers). Colour matching functions: Tristimulus values of the equal-energy, monochromatic spectrum locus. Colorimetry: The measurement and specification of colour. Cone fundamentals: Cone spectral sensitivities: l λ ð Þ, m λ ð Þ and s λ ð Þin colorimetric notation.These are the colour matching functions for three imaginary primaries, L, M and S, that uniquely stimulate each of the cones. 
Imaginary light: Theoretical light that has negative energy in some region of the visible spectrum. Examples include the imaginary cone primaries L, M and S with spectral power distributions L(λ), M(λ) and S(λ), respectively. Metamers: Two lights that match, but which are physically different. An example is a match between a yellow light and a mixture of red and green lights. Photopigment absorbance spectra: Relative probability of photon absorption as a function of wavelength for the different cone photopigment opsins. These are hypothetical functions that correspond to the absorbance probability of an infinitely dilute solution of photopigment. Also known as extinction spectra. Photopigment optical density: Attenuation of light per unit length of photopigment as a function of wavelength. It depends on the photopigment absorbance spectra, photopigment density, and the axial length of the cone outer segment. Photometry: The measurement and specification of the luminous efficiency of lights intended to be independent of colour. Photopic luminosity function: Photometric measure of luminous efficiency as a function of wavelength under photopic (i.e., rod-free) conditions: V(λ) or ȳ(λ). Primary lights: R, G, B. The three independent primaries (real or imaginary) to which the test light is matched (actually or hypothetically). They must be independent in the sense that no combination of two can match the third. Spectral power distribution: P(λ). The power of a light, P, as a function of wavelength. Standard observer: The standard, mean observer is the hypothetical person whose colour matching behavior is represented by a particular set of mean CMFs. Trichromacy: The ability of normal observers to match test lights with a mixture of three independent primary lights. Tristimulus values: R, G, B, the amounts of the three primaries required to match a given stimulus. Univariance: The output of a photoreceptor varies unidimensionally according only to the rate of photon absorption.

| Normal cone photopigment absorbance spectra

The templates used for the photopigment absorbance spectra are effectively 8th order Fourier polynomials (sometimes known as truncated Fourier series) of the form:

log₁₀ A(λ) = s + Σ_{k=0}^{8} [a_k cos(kθ_P) + b_k sin(kθ_P)]    (1)

It is important to note that this function is purely descriptive and, as shown below, provides a very good fit to the data, but there is no theoretical significance to the function or any of the parameters (the value s is a renormalization factor added after the polynomial fit so that the linear absorbance spectra peak exactly at 1 at λ_max [to the nearest 0.1 nm]). The variable θ_P varies from 0 to π over the fitted wavelength region and corresponds to one half period of the fundamental of the Fourier polynomial. We used a log wavelength scale from 2.556 to 2.929 log₁₀ nm (360-850 nm) and only half the period of the fundamental because the absorbance spectra are not equal at their long- and short-wavelength ends, and using the full 2π period of θ_P would force them to be equal. For θ_P to vary from 0 to π over the log₁₀(360) to log₁₀(850) range:

θ_P = π [log₁₀(λ) − log₁₀(360)] / [log₁₀(850) − log₁₀(360)]    (2)

Photopigment absorbance spectra are approximately shape invariant when plotted as a function of log wavelength.[19][20][21] Thus, this scale can straightforwardly be used to shift the cone absorbance spectra to account for small individual differences in λ_max without changing the shape of the spectrum for different λ_max (however, as we show below, the spectra for the different cone types do vary slightly in shape even on a logarithmic wavelength scale, so the formulae for the L-, M- and S-cone spectra have been separately determined).
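To make the template machinery concrete, here is a small Python sketch that evaluates an 8th-order truncated Fourier series on the θ_P scale of Eq. (2) and applies a λ_max shift along log wavelength. The coefficients in the example are random placeholders, not the Table 1 values (which are not reproduced in this excerpt), and the exact parameterization of Eq. (1) is the reconstruction given above.

```python
import numpy as np

LOG_LO, LOG_HI = np.log10(360.0), np.log10(850.0)

def theta_P(wavelength_nm, shift_log10=0.0):
    """Phase variable of Eq. (2): 0..pi over 360-850 nm on a log10 wavelength
    scale; a negative shift_log10 moves the template to shorter wavelengths."""
    return np.pi * (np.log10(wavelength_nm) - shift_log10 - LOG_LO) / (LOG_HI - LOG_LO)

def log_absorbance(wavelength_nm, a, b, s=0.0, shift_log10=0.0):
    """8th-order truncated Fourier series for the log10 absorbance spectrum
    (the reconstructed Eq. 1); a holds 9 cosine coefficients (k = 0..8),
    b holds 8 sine coefficients (k = 1..8), s renormalizes the peak."""
    th = np.atleast_1d(theta_P(wavelength_nm, shift_log10))
    k = np.arange(1, 9)
    return (s + a[0]
            + np.sum(a[1:, None] * np.cos(k[:, None] * th), axis=0)
            + np.sum(b[:, None] * np.sin(k[:, None] * th), axis=0))

# Placeholder coefficients; the real values live in Table 1 of the paper.
rng = np.random.default_rng(0)
a_coef, b_coef = rng.normal(scale=0.3, size=9), rng.normal(scale=0.3, size=8)

wl = np.arange(360.0, 851.0)                          # 1-nm steps
spectrum = 10.0 ** log_absorbance(wl, a_coef, b_coef)
shifted = 10.0 ** log_absorbance(wl, a_coef, b_coef,
                                 shift_log10=np.log10(528.0 / 530.0))  # ~2 nm blue shift
```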
We start with the absorbance spectra adopted by the CIE from Stockman and Sharpe. 2 These spectra range from 390 to 830 nm for the L-and M-cone spectra and from 390 to 615 nm for the S-cone spectrum.The range corresponds to the range of wavelengths used by Stiles and Burch in their 10-deg color matching experiments.The upper limit of the S-cone spectrum corresponds to the longest wavelength measured in S-cone monochromats. 22Because we also want to be able to spectrally shift the cone fundamentals to account for individual differences in λ max , we have extended the absorbance spectra by extrapolating them all to 360 nm at short wavelengths and to 850 nm at long wavelengths.The short wavelength region from 360 to 400 nm is the upper end of the ultra-violet spectrum (UVA), which is usually taken to extend from 315 to 400 nm 23 and the long wavelength region from 780 to 850 nm falls in the near infrared region (NIR or IR-A). The long-wavelength extrapolations were achieved by aligning Lamb's photopigment template 21 with the existing data and extending the data from 830 to 850 nm for the Land M-cone spectra and from 615 to 850 nm for the S-cone spectrum.Although the extensions are very plausible, 24 it is not crucial that they are exactly correct, since they lie at the spectral extremes for each photopigment where they have little effect on typical color matching predictions. The short-wavelength extrapolations are more problematic for several reasons.First, because there is a lack of normal color matching data in that region.Second, because the lens and macular densities below 390 nm are uncertain.Third, because any color matching measurements in that region are affected by the fluorescence of the lens and cornea (see for review 25 ), as a result of which the usual laws that underlie color matching 26 may not apply.Another concern is that the CIE L-and M-cone photopigment absorbance spectra proposed by Stockman and Sharpe 27 and adopted by the CIE may be incorrect below 400 nm, since between 400 and 390 nm the absorbance spectra fall, whereas other estimates of photopigment absorbance spectra suggest that they should rise slightly between 400 and 390 nm-consistent with the shallow, secondary "β-band" peak in photopigment spectra at shorter wavelengths. 24,28Given the uncertainties about both lens densities and color matches below 400 nm, we have taken the somewhat drastic step of adjusting the CIE spectra from 390 to 400 nm to be consistent with spectra derived from photopigment measurements.To achieve this, we calculated a shallow β-band spectrum for the L-cone photopigment below 400 nm using the photopigment template formula proposed by Govardovskii et al., 24 and aligned it with the original CIE L-cone photopigment absorbance spectrum at 400 nm.The M-cone spectrum was then approximated below 400 nm by appropriately spectrally shifting the modified L-cone spectrum.The S-cone spectrum was extended below 390 nm by shifting the new L-and M-cone spectra to align with the S-cone spectrum at 390 nm and then averaging the aligned templates.Shifts were carried out along a logarithmic wavelength axis.In terms of the CIE standards, we have therefore slightly changed the color matching data between 390 and 400 nm.These changes have relatively little effect on most color matching predictions.Color in the UVA part of the spectrum is considered further in the Discussion. 
Figure 1 shows the discrete points of the tabulated CIE absorbance spectra connected by the red, green, and blue colored lines for the L-, M-and S-cone spectra, respectively.The logarithmic cone absorbance spectra are plotted in the upper panel and the linear absorbance spectra in the third panel.The spectra are normalized to have the same peak values.The absorbances are plotted as a function of wavelength shown on a logarithmic scale.The short-and long-wavelength extensions are shown as the solid black lines.The dashed yellow lines show the fits obtained with Equations ( 1) and (2) (as outlined above, and evident in the linear representation, the short-wavelength extensions for the L-and M-cone spectra deviate from the CIE estimates below 400 nm). We fitted Equations ( 1) and (2) to the logarithmic cone absorbance spectra from 360 to 850 nm, simultaneously minimizing both the logarithmic and linear errors.As the absorbances vary with wavelength over greater than 6 decades, minimizing the errors in just the linear absorbances led to substantial errors in the prediction of the smaller logarithmic absorbances and minimizing just the logarithmic errors led to substantial errors in the prediction of larger linear absorbances.To mitigate this problem, we simultaneously minimized both the logarithmic and the linear errors.All fits were made at 1-nm steps using the standard non-linear fitting Marquardt-Levenberg algorithm implemented in SigmaPlot (Systat Software, San Jose, CA) to minimize the sum of the squared differences between the data and model predictions.Separate fits were made to the L-, M-and S-cone fundamentals, the results of which are shown by the yellow dashed lines in top and third panels of Figure 1.We found that increasing the order of the Fourier polynomials beyond 8 did not significantly improve the fits in Figures 1 and 4, nor improve the reconstruction of the cone fundamentals and their transformation into other spaces such as the RGB and XYZ spaces shown in Figure 6, below.The polynomial coefficients for the three log cone absorbance spectra are given in Table 1. The continuous logarithmic cone absorbances as a function of wavelength, log 10 Note that this procedure and the resulting formulae can be easily applied to other estimates of the cone spectral sensitivities, such as those by Smith and Pokorny, 14 by first converting them to cone absorbance spectra; see appendix A of Reference [29]. The Fourier polynomials fit the cone absorbances extremely well on both logarithmic and linear scales.The logarithmic errors are shown in the second panel and the linear errors in the bottom panel.Most of the errors on both scales are less than ±0.01.There is a slight perturbation in the CIE S-cone log absorbance spectrum between about 480 and 510 nm, which is smoothed by the fitted template.The adjusted R 2 values for the simultaneous fits are >99.99% for each of the L, M and S fits, and the standard errors of the L, M, and S fits are 0.003, 0.003 and 0.004, respectively.Thus, the errors resulting from the use of the proposed templates rather than the CIE tabulated functions will be small even for monochromatic lights. 
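The fitting strategy described above (simultaneously minimizing logarithmic and linear residuals with a Levenberg-Marquardt routine) can be imitated in a few lines with SciPy instead of SigmaPlot. The sketch below stacks both residual sets into one least-squares problem; the input spectrum is synthetic and stands in for a tabulated CIE log absorbance spectrum.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_template(wavelength_nm, log_absorbance_data, order=8):
    """Fit an 8th-order Fourier polynomial to a log10 absorbance spectrum,
    minimizing the logarithmic and linear errors simultaneously (both residual
    sets are stacked into a single least-squares problem)."""
    lo, hi = np.log10(360.0), np.log10(850.0)
    th = np.pi * (np.log10(wavelength_nm) - lo) / (hi - lo)
    k = np.arange(1, order + 1)

    def model(p):
        a0, a, b = p[0], p[1:order + 1], p[order + 1:]
        return (a0 + np.sum(a[:, None] * np.cos(k[:, None] * th), axis=0)
                   + np.sum(b[:, None] * np.sin(k[:, None] * th), axis=0))

    def residuals(p):
        pred = model(p)
        return np.concatenate([pred - log_absorbance_data,                   # log errors
                               10 ** pred - 10 ** log_absorbance_data])      # linear errors

    p0 = np.zeros(2 * order + 1)
    fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
    return fit.x, model(fit.x)

# Example with synthetic data standing in for a tabulated CIE spectrum.
wl = np.arange(360.0, 851.0)
fake_log_A = -((np.log10(wl) - np.log10(530.0)) / 0.05) ** 2   # placeholder spectrum
coefs, fitted = fit_template(wl, fake_log_A)
```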
The λ_max values of the photopigment spectra derived from Equations (1) and (2) using the values given in Table 1 are 551.9, 529.8 and 416.9 nm for the L-, M- and S-cones, respectively. Note that the Fourier polynomial coefficients have little theoretical significance and can change substantially with small adjustments to the underlying spectrum being fitted. Their utility is in allowing continuous spectra to be computationally generated with negligible differences from the discrete, tabulated CIE functions (see Figure 3, below). They also provide continuous functions that allow arbitrary wavelength shifts associated with different values of λ_max. Later, we consider the derivation of a common template to describe all three spectra.

| Macular and lens optical density spectra

The templates used for the macular and lens pigment optical density spectra were also Fourier polynomials:

D(θ) = d Σ_{k=0}^{n} [a_k cos(kθ) + b_k sin(kθ)]    (3)

where D is the optical density, θ is θ_mac or θ_lens (defined below), n = 11 for the macular spectrum and n = 9 for the lens spectrum. We used a linear wavelength scale to define both the macular and lens density spectra. The value d is a renormalization factor that scales the polynomials so that, consistent with the CIE standard, the macular spectrum is 0.350 at 460 nm and the lens spectrum is 1.7649 at 400 nm. Note that the densities are logarithmic: a density of 0.0 at any wavelength means the pigment is perfectly transparent at that wavelength, and a density of 0.350 at 460 nm means the macular pigment allows only 10^−0.350 (44.7%) of light at this wavelength to pass through. A density of 1.7649 at 400 nm means the lens blocks all but 10^−1.7649 (1.7%) of light at that wavelength.

The macular pigment absorbs light mainly of short wavelengths (see Figure 2, top panel, for 2-deg visual fields; in both panels of Figure 2, wavelength is plotted on a linear scale). The pigmented macular area varies in shape from individual to individual and is irregular in density: greatest in the fovea and falling irregularly with retinal eccentricity to become largely absent by about 10 deg.30 The CIE standard macular density spectrum was originally proposed by Stockman and Sharpe 2 based on measurements by Bone, Landrum and Cairns.31 The standard (mean) macular densities at 460 nm are assumed to be 0.350 and 0.095 for 2-deg and 10-deg vision, respectively.2 The wavelength range for the macular template fit was from 375 to 550 nm; outside this range the density is assumed to be zero. For θ_mac to vary from 0 to π as λ varies from 375 to 550 nm:

θ_mac = π (λ − 375) / (550 − 375)    (4)

The standard CIE macular density template, like the L- and M-cone absorbance spectra, is limited to between 390 and 830 nm. To extend this template to shorter wavelengths, we assumed that the density falls smoothly to zero between 390 and 375 nm and then remains at zero at shorter wavelengths. Wald 32 made sparsely sampled spectral macular density measurements showing that at 365 nm the density was zero (see his figure 4). At wavelengths longer than 830 nm, we assumed that the template remains at zero. The short-wavelength extrapolation is speculative, since most of the available data are limited to wavelengths of 400 nm and longer. In the top panel of Figure 2, the CIE macular template for 2-deg vision is shown by the pink line and the extensions by the solid black lines.
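As a quick arithmetic check of the density-to-transmittance conversion quoted above, the following few lines compute the θ_mac mapping of Eq. (4) and the transmitted fractions for the two worked examples (0.350 at 460 nm and 1.7649 at 400 nm); this is only a sanity-check sketch, not part of the CIE machinery.

```python
import numpy as np

def theta_mac(wavelength_nm):
    """Phase variable for the macular template, Eq. (4): 0..pi over 375-550 nm."""
    return np.pi * (wavelength_nm - 375.0) / (550.0 - 375.0)

def transmittance(optical_density):
    """Fraction of light transmitted through a pigment of given optical density."""
    return 10.0 ** (-optical_density)

# Check the worked numbers quoted in the text.
print(f"macular, D=0.350 at 460 nm: {transmittance(0.350):.1%} transmitted")    # ~44.7%
print(f"lens,    D=1.7649 at 400 nm: {transmittance(1.7649):.1%} transmitted")  # ~1.7%
```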
We found that an 11th order Fourier polynomial captured macular optical density spectra well enough to avoid significant discrepancies from the original template.The fit is shown by the dashed white line in the upper panel.The errors, which are shown as the pink curve in the bottom panel, are very small.The adjusted R 2 value is >99.99% the standard error of the fit is 0.0007.The errors resulting from the use of this template rather than the original discrete tabulated functions are negligible.Higher order polynomials did not perform significantly better.The polynomial coefficients are given in Table 2.The macular pigment density as a function of wavelength, mac λ ð Þ, is given by substituting Equation (4) into Equation (3) with the appropriate values from Table 2. The lens contains a pigment that absorbs light mainly of short wavelengths (see Figure 2, middle panel).The standard CIE lens density spectrum, 5 which is assumed to be 1.7649 at 400 nm, is shown by the orange solid line in the middle panel of Figure 2. It is a slightly adjusted version of the mean lens density spectrum of van Norren and Vos 33 proposed originally by Stockman, Sharpe and Fach. 22Like the macular spectrum, it is defined from 390 to 830 nm.For consistency with the other templates, we have extended it down to 360 nm and up to 850 nm.Above 830 nm, we can reasonably assume that the density remains zero.[36][37] Cooper and Robson, 35 for example, found that lens density in the UVA varied with age but was surprisingly high (4.5 log units at 360 nm in a 33-year-old lens).By contrast, Lerman and Borkman 36 found that the density in the UVA did not vary with age between 25 and 82 years yet was implausibly low (on average about 0.7 log unit).Subsequently, Weale, 37 who tried to resolve these discrepancies, found densities that were intermediate between these two extremes and confirmed that they did vary with age.][36][37] To extend the lens density template below 390 nm, we have used a slightly smoothed version of the lens density template proposed in another CIE report to facilitate computations of the absorption and transmission characteristics of the eye published in 2012. 38This template, which was based on measurements in rhesus monkey, 39,40 is broadly consistent with the human measurements of Weale. 37We have scaled the CIE 2012 lens density spectrum to align with the CIE 2006 lens density spectrum at 400 nm and used the scaled densities to define the lens density spectrum from 400 to 360 nm.The assumed extension is shown by the solid black line in the middle panel of Figure 2. 
The wavelength range for the lens template fit was 360 to 660 nm. Above 660 nm, the density is assumed to be zero. For θ_lens to vary from 0 to π from 360 to 660 nm:

θ_lens = π (λ − 360) / (660 − 360)    (5)

A 9th order Fourier polynomial captured the extended lens optical density spectrum extremely well. The fit is shown by the dashed white line in the middle panel of Figure 2, and the errors are shown as the orange curve in the lower panel. The adjusted R² value is >99.99% and the standard error of the fit is 0.004. The errors resulting from the use of this template rather than the original discrete tabulated functions are small, but slightly greater than for the macular fit. This is partly because of the relatively abrupt slope change and flattening of the CIE 2012 template at and below 375 nm, and because of other minor discontinuities in the combined CIE 2006 and 2012 lens templates. The polynomial coefficients are given in Column 2 of Table 2. The lens density as a function of wavelength, lens(λ), is given by substituting Equation (5) into Equation (3) with the appropriate values from Table 2.

As noted above, the main purpose of the short-wavelength extensions was to facilitate spectral shifts of the L- and M-cone spectra along a log wavelength axis. If any analyses are restricted to 390 nm and longer after shifting, the effects of errors in the short-wavelength extrapolations should be small. Used with caution, the short-wavelength extensions might be useful for estimating visual performance between 360 and 390 nm, with the proviso that the lens template is likely to be too transparent for older observers.

| From the continuous spectra to corneal cone fundamentals

By convention, absorbance spectra correspond to the spectral absorption of infinitely dilute photopigments (so that their shapes are independent of photopigment optical density).41 To calculate cone spectral sensitivities in the living eye, we need to know the axial optical density of the photoreceptors, which is proportional to the product of the concentration of the photopigments in situ and the length of the outer segment in which they reside and through which light passes.41 The length of the photoreceptor outer segment declines with eccentricity 42,43 and is shorter for S-cones than for L- and M-cones in the same retinal region.44 Thus, the photopigment optical density is lower for more peripheral lights and lower for S-cones. The peak photopigment optical densities assumed in the CIE standard for a 2-deg field of view are 0.50, 0.50, and 0.40 for the L-, M- and S-cones, respectively; and, for a 10-deg field of view, 0.38, 0.38 and 0.30 for the L-, M- and S-cones, respectively.2 Increasing the peak photopigment optical density broadens the spectral sensitivity of the cones by a process known as self-screening.45 Thus, longer foveal cones will have the same λ_max as peripheral cones but will be relatively more sensitive to wavelengths above and below the peak. Using the CIE standard cone optical densities and the formulae for the cone absorbance spectra, we can calculate the spectral sensitivities of the cones without prereceptoral filtering, also known as the photopigment absorptance spectra.41
The spectral sensitivity of the unfiltered L-cone photoreceptor, l_R(λ), is related to the L-cone absorbance spectrum, l_A(λ), by:

l_R(λ) = 1 − 10^(−l_OD · l_A(λ))    (6)

where l_OD is the peak L-cone optical density, which, for the standard observer, is 0.5 for a 2-deg field and 0.38 for a 10-deg field. Similarly, for the M- and S-cones:

m_R(λ) = 1 − 10^(−m_OD · m_A(λ))  and  s_R(λ) = 1 − 10^(−s_OD · s_A(λ))

where m_OD is the peak M-cone optical density, which for the standard observer is 0.5 for a 2-deg field and 0.38 for a 10-deg field; and s_OD is the peak S-cone optical density, which for the standard observer is 0.4 for a 2-deg field and 0.30 for a 10-deg field. To go from the photoreceptor spectral sensitivities to the corneal spectral sensitivities, l_Q(λ), the filtering by the lens and macular pigments is included:

l_Q(λ) = l_R(λ) · 10^(−[k_mac·mac(λ) + k_lens·lens(λ)])    (7)

and similarly for the M- and S-cones. For the mean standard observer, k_mac is 1 for a 2-deg field and 0.271 for a 10-deg field, and k_lens is 1.0.

It is important to note that corneal cone spectral sensitivities calculated in this way are quantal spectral sensitivities, since they are related to quantal absorptions by the photopigment. It is more common in colorimetry to use energy units. As the energy of a photon is inversely proportional to its wavelength, to specify corneal spectral sensitivities in energy units, l̄(λ), we simply multiply by λ and renormalize to unity peak:

l̄(λ) = α λ l_Q(λ)

where α is a scaling constant that forces the function to peak at 1 (note that the change from quantal to energy units shifts λ_max slightly to longer wavelengths). Figure 3 shows the corneal L-, M- and S-cone spectral sensitivities generated from the formulae as the dashed yellow lines in four panels. The original CIE functions are shown by solid red, green and blue lines. The upper panels show logarithmic sensitivities and the lower panels the corresponding linear sensitivities. The left panels show the 2-deg functions and the right panels the 10-deg functions. The agreement between the tabulated and generated spectral sensitivities is excellent. Excluding λ < 400 nm, where the deviations are due to adjustments in the shapes of the absorbance spectra (see above), the mean absolute errors (MAEs) for points sampled at 1-nm intervals are 0.0040 and 0.0018 for the 2-deg logarithmic and linear fundamentals, respectively, and 0.0043 and 0.0020 for the 10-deg logarithmic and linear fundamentals.

| FORMULAE FOR NON-STANDARD OBSERVERS AND FOR CHANGES WITH ECCENTRICITY

The CIE standards defined by the above equations and Fourier polynomial coefficients are useful for modeling color vision in observers with normal cone spectral sensitivities and typical macular, lens, and photopigment optical densities. However, they are less useful for modeling color vision in observers whose spectral sensitivities deviate from the standard because of individual differences in the density of the lens pigment, the density of the macular pigment, or the axial optical density of the photopigment in the photoreceptors. The utility of the formulae presented so far is that they can be used to generate cone fundamentals for individuals with different macular, lens and photopigment optical densities. Macular pigment density can be varied by changing k_mac in Equation (7), lens density by changing k_lens in the same equation, and photopigment density by changing l_OD, m_OD or s_OD in Equation (6). Individual differences in lens pigment density are substantial and increase with age.[47][48] Yet, even in young observers of a similar age (<30 years old) the range of densities is approximately ±25% of the mean.
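The chain from absorbance spectrum to corneal fundamental in Eqs. (6) and (7), followed by the quantal-to-energy conversion, is easy to express in code. The sketch below is a minimal illustration with toy spectra standing in for the fitted templates; the standard 2-deg and 10-deg parameter values (peak optical densities and k_mac) are taken from the text, while everything else is a placeholder.

```python
import numpy as np

def corneal_fundamental(wl_nm, absorbance, pigment_od,
                        mac_density, lens_density,
                        k_mac=1.0, k_lens=1.0):
    """Corneal cone fundamental in energy units from a (peak-1) absorbance
    spectrum: Eq. (6) self-screening, Eq. (7) prereceptoral filtering, then
    multiplication by wavelength and renormalization to unity peak."""
    absorptance = 1.0 - 10.0 ** (-pigment_od * absorbance)            # Eq. (6)
    quantal = absorptance * 10.0 ** (-(k_mac * mac_density
                                       + k_lens * lens_density))      # Eq. (7)
    energy = quantal * wl_nm                                           # quanta -> energy
    return energy / energy.max()                                       # unity peak

# Placeholder spectra standing in for the fitted templates of the text.
wl = np.arange(390.0, 831.0)
l_absorbance = np.exp(-0.5 * ((wl - 552.0) / 45.0) ** 2)                      # toy L-cone absorbance
mac = np.where(wl < 550, 0.35 * np.exp(-0.5 * ((wl - 460) / 40) ** 2), 0.0)   # toy macular density
lens = 1.7649 * np.exp(-(wl - 400.0) / 60.0).clip(max=1.0)                    # toy lens density

# 2-deg standard values: l_OD = 0.5, k_mac = 1.0; a 10-deg-like variant
# would use l_OD = 0.38 and k_mac = 0.271 (see text).
l_bar_2deg = corneal_fundamental(wl, l_absorbance, 0.50, mac, lens)
l_bar_10deg = corneal_fundamental(wl, l_absorbance, 0.38, mac, lens, k_mac=0.271)
```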
33Asano, Fairchild and Blondé 49 usefully summarize the results of 13 past studies on the optical density of lens pigment in their Table 1.The CIE 2006 standard includes formulae and templates that change the shape of the lens pigment optical density spectrum with increasing age 5 based on work by Pokorny, Smith and Lutze. 48These are not implemented here. Individual differences in macular pigment optical density can also be large with a range of peak density from 0.0 to c.1.2 at 460 nm. 32,50,51The density of the pigment also changes with retinal location; tending to become more transparent with eccentricity, and becoming wholly or largely absent by a retinal eccentricity of 10 deg. 30Asano, Fairchild and Blondé 49 usefully summarize the results of nine past studies on the optical density of macular pigment in their Table 2. 3][54][55][56][57][58][59] Measurements made before and after a bleach yield mean peak optical density values that range from 0.3 to 0.6, while those that depend on oblique presentation range from 0.7 to 1.0; and other objective measures range from 0.35 to 0.60.See Stockman and Sharpe 1 for discussion.Asano, Fairchild and Blondé 49 summarize the results of four past studies in their Table 3. As well as differing among individuals, the macular pigment density and photopigment optical density vary with retinal eccentricity and must be adjusted when predicting the cone spectral sensitivities for target sizes and eccentricities different from the standard 2-and 10-deg foveal viewing conditions.These can also be easily changed by changing k mac in Equation ( 7) and changing l OD , m OD or s OD in Equation (6). Variations in the spectral positions of the L-and M-cone photopigments are common because of hybrid Land M-cone photopigment opsin genes.These hybrid genes are fusion genes produced by intragenic crossovers during meiosis between the L-and M-cone opsin genes, which lie in a head-to-tail array on the X-chromosome, 60 and thus contain coding sequences from both genes; for review, see References [61,62]. Both in vitro 63,64 and in vivo 7,65 determinations of the λ max of the absorbance spectra of hybrid pigments reveal a wide range of possible anomalous pigments with peaks lying in steps between those of the normal L-and M-cone pigments.In addition, smaller shifts occur within the normal population, because of different polymorphisms (commonly occurring allelic differences) of the M-and L-cone photopigment opsin genes.The most frequently observed polymorphic-induced shift occurs in the L-cone photopigment when alanine replaces serine at position 180 of the L photopigment opsin gene, leading to a mean shift across studies of about 3.5 nm towards shorter wavelengths. 66he L-and M-cone opsins are made up of a sequence of 364 amino acids, which are specified by "codons" (3-base sequences of DNA nucleotides) "read" from the genes.Of these 364 amino acids, only 15 differ between the L-and M-cone opsins, and of those only 7 are thought to change the spectral sensitivity of the photopigment. 61The seven L and M variants of the amino acids that alter spectral sensitivity are given in Table 3. Changing the amino acids shown in Table 3 from the M to the L versions or vice versa, shifts the spectral sensitivity of the resulting photopigment.The shifts in Table 3 are based on data from various sources, 7,[62][63][64][65]67 which are summarized in Table A1. 
Theshifts are given in the two far right columns.Notice that the shifts are sometimes slightly asymmetric.Based on these earlier data, codons at positions 233 and 309 have been assumed not to shift the spectral peak.Thus, only 5 codons seem to be important for determining spectral sensitivity. We have defined our absorbance spectra as functions of log wavelength so that, given the assumption that photopigment absorbance spectra are roughly shape invariant along such a scale, [19][20][21] we can easily shift them along the spectrum to generate cone absorbance spectra with different λ max . To calculate shifted absorbance functions, the spectral shift should be converted to θ P units as defined in Equation (2).Spectral shifts, however, are usually given as linear wavelength shifts in nm at λ max (i.e., as a shift from λ max1 to λ max2 ).The logarithmic shift is then log 10 (λ max1 /λ max2 ) and the shift in θ P units, Δθ P , is then: Note that the L-cone template given in Table 1 defines an L-cone absorbance spectrum that is the mean of the polymorphic L(ala180) and L(ser180) variants in the population.In the next section, we define an L-cone absorbance spectrum that is appropriate for individual L-cone variants.These formulae can also be used to spectrally shift the S-cone absorbance spectrum, but unlike the M-and L-cone spectra there is only weak evidence for such shifts.Estimating S-cone λ max from psychophysical measurements in five normal observers and three S-cone monochromats, Stockman, Sharpe and Fach 22 found that the mean λ max across observers was 418.8 nm and the standard deviation was 1.5 nm (see their p.2922).Given the uncertainties about pre-receptoral filtering at shorter wavelengths, this variability is very small; however, Stockman et al. did note that their data fell roughly into two clusters with means of 417.4 and 420.1 nm. The continuous spectral sensitivity functions generated by these formulae may be useful enhancements for the development of individual colorimetric observer models such as the one proposed by Asano, Fairchild and Blondé. 49 | FORMULA FOR AN L-CONE POLYMORPHIC TEMPLATE Two common L-cone polymorphic variants are found in the normal population: one with alanine at position 180 of the opsin gene, L(ala180), and one with serine at position 180, L(ser180); for review, see Reference [61].To generate the mean L-cone fundamental, Stockman and Sharpe 2 combined spectral sensitivity measurements from single-gene dichromats with either L(ser180) or L(ala180), which they found differed in spectral position on average by 2.7 nm with L(ser180) shifted towards longer wavelengths. 7To produce the mean L-cone spectral sensitivity subsequently adopted by the CIE, 5,6 Stockman and Sharpe 2 linearly combined the L(ala180) and L(ser180) spectral sensitivities in the ratio of 0.44:0.56(based upon the proportion of the polymorphisms found in the population). Consequently, the L-cone absorbance template given in Table 1 represents the weighted linear average of the absorbances of two L-cone polymorphisms with slightly different spectral sensitivities.In this section, we derive a template shape that is appropriate for either the L(ser180) or the L(ala180) variants alone. 
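Before moving on, it may help to see the shift bookkeeping in numbers. The sketch below converts a linear wavelength shift at λ_max into the equivalent shift in log₁₀ wavelength and, via the reconstructed Eq. (2) scale, into θ_P units; with the 2.7-nm L(ser180)/L(ala180) separation and the 557.5-nm peak quoted in the next section it reproduces the −0.002108 log₁₀ nm used there.

```python
import math

def nm_shift_to_log_units(lambda_max_from_nm, shift_nm):
    """Convert a linear wavelength shift at lambda_max into the equivalent shift
    in log10 wavelength, and into theta_P units over the 360-850 nm fit range."""
    lambda_max_to = lambda_max_from_nm + shift_nm
    d_log = math.log10(lambda_max_to / lambda_max_from_nm)
    d_theta = math.pi * d_log / (math.log10(850.0) - math.log10(360.0))
    return d_log, d_theta

# The 2.7-nm shift between L(ser180) (557.5 nm) and L(ala180) quoted in the text.
d_log, d_theta = nm_shift_to_log_units(557.5, -2.7)
print(f"shift = {d_log:.6f} log10 nm ({d_theta:.6f} theta_P units)")  # ~ -0.002108
```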
We repeated the fit to L carried out to obtain the L-cone template given in Table 1, but assumed that the mean L-cone absorbance spectrum was the linear addition of two identical templates combined in the ratio 0.44:0.56 with the one with the higher ratio shifted along the log wavelength scale by À0.002108 log 10 nm (À2.7 nm from the 557.5-nm λ max of the best-fitting L(ser180) template).As before, a simultaneous fit to both the linear and logarithmic absorbances was performed. The adjusted R 2 value for the L(ala180) plus L(ser180) fit is >99.99%, and the standard error is 0.003.The fit and errors (not shown) are visually indistinguishable from the L-cone fit and errors as shown in Figure 1.The λ max of the L(ser180) function 553.1 nm and that of L(ala180) function 550.4 nm, which is a spectral shift of 2.7 nm or 0.002125 log 10 nm.Column 2 of Table 4 gives the L(ser180) template.Shifting it by 0.002125 log 10 nm to shorter wavelength gives the L(ala180) template. | FORMULA FOR A COMMON SHAPE-INVARIANT PHOTOPIGMENT TEMPLATE The reason for separately constructing Fourier polynomials for the absorbance spectrum of each of the three cone types is because the underlying CIE spectra have slightly different shapes even when plotted against a logarithmic function of wavelength.Thus, to compute continuous CIE cone fundamentals and minimize the differences from the tabulated spectra, color matching T A B L E 4 Fourier coefficients for the L(ser180) template optimized for L-cone polymorphic spectra only (column 2) or for all cone spectra column 3).Here, we generate an optimal common photopigment absorbance spectrum that best fits all three CIE cone absorbance spectra.Such a template helps to highlight the differences in shape between the CIE spectra but is also useful for generating absorbance spectra for photopigments in other species. Fourier coefficients As in the previous section, we assumed that the CIE L-cone absorbance spectrum is a linear combination of the underlying L(ala180) and L(ser180) spectra in the ratio 0.44:0.56.We derived the best-fitting common template by aligning it with the L(ser180) spectrum and then allowing the template to shift along a log wavelength scale for other spectra.The shift to L(ala180) was fixed at À0.002108 log 10 nm (À2.7 nm from the 557.5-nm λ max of the best-fitting L(ser180) template consistent with the shift adopted by Stockman and Sharpe), while the shifts to M and S were best-fitting shifts.As before, a simultaneous fit to both the linear and logarithmic absorbances was performed, and the extension of the S-cone absorbance function between 360 and 390 nm was not allowed to influence the fit since it had already been averaged from the L-and M-cone templates (see above). 
Figure 4 shows again the extended CIE absorbance spectra used to derive the Fourier polynomial as the solid red, green and blue lines for L-, M-and S-cones, respectively, plotted as logarithmic absorbances in the top panel and linear absorbances in the third panel; wavelength is plotted on a logarithmic scale in all panels.The regions of the S-cone functions not included in the fit are shown in black.The best-fitting shifted common template that fits the CIE functions are shown by the three dashed yellow lines (the L-cone fit is, of course, the weighted sum of L(ser180) and L(ala180)).The Fourier polynomial accounts reasonably well for the three cone absorbances on both logarithmic and linear scales.The logarithmic errors are shown in the second panel of Figure 4 and the linear errors in the bottom panel.The errors are much larger than those found for individually fitted templates (see Figure 1 and note the change of scale in the error plots) and reflect mainly differences in the widths of the underlying CIE absorbance spectra that cannot be accounted for by a single template shape.Nevertheless, the adjusted R 2 value for the fit is >99.99% and the standard error of the fit is 0.0260.However, the fit with mixed L(ala180) and L(ser180) templates is no better than the fit with a single template (not shown), and both are considerably worse than the individual template fits. Column 2 of Table 4 gives the coefficients that define the L(ser180)-cone spectrum without any spectral shift.Relative to this spectrum the L(ala180)-spectrum should be shifted by À0.002108 log nm (consistent with the assumed 2.7-nm shift), the M-cone spectrum by À0.024187 log nm, and the S-cone spectra by À0.124549 log nm.These correspond to shifts in the λ max of the common L(ser180) photopigment absorbance spectrum from its peak at 557.5 (its unshifted value) to 554.8 nm for L(ala180) to 527.3 nm for M and to 418.5 nm for S. The peak of the combined L(ser180) and L(ala180) template is 556.0 nm. For the common template the log wavelength scale is equivalent to a "normalized frequency" scale (i.e., f/f max or λ max /λ, since f is proportional to 1/λ).Shape invariance The extended logarithmic (top panel) and linear (third panel) L-, M-and S-cone photopigment absorbance spectra (red, green, and blue solid lines, respectively) fitted by the same 8th order Fourier polynomial (yellow dashed lines) shifted along the log-wavelength scale.The L-cone spectrum was assumed to be a linear combination of two underlying common spectra: one for L(ala180) and the other for L(ser180).The black portion of the S-cone spectrum was not included in the fit.See text for further details. along such a scale can thus be linked to relative quantal sensitivity 19,69 (since the energy of a photon is proportional to its frequency).Nonetheless, the fits shown in Figure 4, particularly the linear fit in the third panel, suggest that shape invariance along a log wavelength scale is at best approximate, since the M-cone absorbance function is clearly narrower than either the S-cone or L-cone absorbances.There are two plausible reasons why the M-cone function might be narrower than the L-cone function.First the M-cone photopigment optical density might be less than the L-cone density rather than being equal to it as previously assumed. 
2 Indeed, several groups have found that the M-cone optical density is lower than that of the L-cones by about 0.1 to 0.2 log unit, but the sample sizes in those studies were small and other evidence is contradictory; for review see Reference [1]. Furthermore, a more recent study with large sample sizes of 28 protanopes and 44 deuteranopes showed no difference between the M- and L-cone optical densities.70 Nevertheless, a lower M-cone optical density could account for some of the differences in shape that we find. Similarly, if the S-cone photopigment density were considerably higher than that of the M-cones, this could account for the different widths of their absorbance functions, but this is unlikely because, as noted above, the available evidence is clear that the S-cone density is in fact lower than the M-cone density. A second reason that the M-cone function might be narrower than the L-cone one is that the L-cone function is the population mean of two polymorphic variants of the L-cone photopigment (one with alanine at position 180 and the other with serine at position 180) with λ_max values that differ by 2.7 nm (see above). However, this assumption was explicitly made when we fitted the spectra shown in Figure 4 to a common template and was insufficient to account for the shape differences (it would also not account for the M-cone's absorbance spectrum being narrower than that of the S-cone).

| CONE SPECTRAL SENSITIVITIES AND COLORIMETRY

In this section, we relate the cone spectral sensitivities to colorimetry and color matching. The formulae we have derived to generate the cone fundamentals can be straightforwardly used to construct other color matching functions and chromaticity coordinates. However, to help explain these procedures, we first introduce some of the concepts and nomenclature used in colorimetry.
In colorimetry, color is characterized in terms of how it can be matched to a mixture of three lights called "primary lights".A typical matching experiment is illustrated in Panel (A) of Figure 5.The primary lights in this example are the 444 (B), 526 (G) and 645 (R) nm lights used by Stiles and Burch 71 for their 10-deg color matching measurements (note that primaries are usually referred to by bold capital letters).In theory any three lights can be used as primaries, so long as no primary can be matched to a combination of the other two.The test lights in this experiment are monochromatic lights of wavelength, λ.Observers were asked to match a test light of wavelength, λ, in one half of a 10-deg field with a mixture of the three primaries, and the matches were made as a function of test wavelength to produce the color matching functions (CMFs), denoted r λ ð Þ, g λ ð Þ, and b λ ð Þ, shown in Panel (B) (note that CMFs are denoted by lower case italics as functions of wavelength).These functions give the amounts of the three primary lights required to match test lights of equal energy across the spectrum.† The amounts of primary lights are known as the tristimulus values, which are denoted by upper case italics (e.g., R, G, and B).Except at the primary wavelengths, one of the Stiles and Burch CMFs is always negative.Below 444 nm and above 645 nm g λ ð Þ is negative, between 444 and 526 nm r λ ð Þ is negative, and between 526 and 645 nm b λ ð Þ is negative.Mathematically, a negative CMF value for a given primary means that that primary must be subtracted from the other two primaries to complete the match to the test.In practical terms, since lights cannot be subtracted in this way (negative light is not physically realizable), the negative sign means that the primary light has been added to the test light to complete the match against the other two primaries.This is shown in Panel (A) where a bluish test light (λ ≈ 490 nm, and r λ ð Þ < 0) requires the red primary to be added to the test to match a combination of the blue and green primaries.When added to the test light, the primary "desaturates" it. 
In these kinds of experiments in which the two fields are presented in the same context, a color match is made at the level of the cones in that when a match is made, the excitations of the three cone types produced by the lights making up each of the matching semi-circular fields should be identical (note, this is different from asymmetric color matches where the matched lights are shown in different contexts (e.g., under different illuminations) and can produce different cone excitations but give rise to the same color sensation, or vice versa).A consequence of color matching being determined at the level of the cones is that the color matching functions, F I G U R E 5 (A) Maximum saturation method of setting a color match.A monochromatic test field of wavelength, λ, can be matched by a mixture of red (645 nm), green (526 nm) and blue (444 nm) primary lights, one of which must be added to the test field to complete the match.In this example, the red primary must be added to match a cyan (c.490 nm) test light.(B) The amounts of each of the three primaries required to match monochromatic lights spanning the visible spectrum are the r λ Color matches produce identical quantum catches in the three cone types.Thus, the cone fundamentals CMFs l λ ð Þ, m λ ð Þ, and s λ ð Þ (or the cone spectral sensitivities) must be a linear transformation of the r λ The l,m cone chromaticity plane with the coordinates of the spectrum locus shown by the solid line and yellow circles.The purple line joins the bottommost points on the locus and is the limit of physically realizable colors in the violet to red region of the chromaticity space (dashed purple line).The red, green, and blue diamonds are the intersections of the L, M and S primary vectors with the l,m cone chromaticity plane.The white dashed lines that join them delimit the imaginary and real colors that can be matched by adding together those primaries.The red, green, and blue squares are the intersections of the vectors representing the R, G and B primaries with this plane.The blacked dashed lines joining them show the real colors that can be matched by adding together those primaries.The figure shows 10-deg color matching data. matches 72 ; a helpful version of which in terms of matching symmetry, transitivity, proportionality, and additivity is given by Wyszecki and Stiles 73 p. 118) Note that the usual colorimetric nomenclature for CMFs (e.g., The cone spectral sensitivities are thus the "fundamental" color matching functions or cone fundamental CMFs on which all other CMFs depend. 
To help explain the relation between the primaries and their color matching functions, as well as the relation between RGB and LMS CMFs, we have plotted in Panel (D) of Figure 5 the primaries and the spectrum locus in a 2D plot, using what are known as chromaticity coordinates.The three cone spectral sensitivities or cone fundamentals allow colors and lights to be straightforwardly defined within a three-dimensional vector space corresponding to the triplet of cone excitations each produces.Any such color or light can be described by its tristimulus coordinates (L, M, S).Note, for an arbitrary real light P with power spectrum P(λ), the tristimulus values are defined as, for example., L ¼ R P λ ð Þl λ ð Þdλ, and similarly for M-and S-cones.A given tristimulus value thus corresponds to all lights that would produce the same three cone responses that is, all possible metamers.However, it is often helpful for visualization to project this 3-dimensional vector space into a 2-dimensional plane.The projective dimension is typically either luminance or intensity, so that what remains is chromaticity.One such projection plots all points along a line through the origin onto the plane with the equation: The l,m chromaticity diagram is a projective transformation of this plane, such that the l chromaticity coordinate is: and the m chromaticity coordinate is: The s chromaticity coordinate, which is s = 1 À (l + m), is not plotted.Although the l,m representation may be unfamiliar, we prefer it as it is the only one that corresponds to the lowest level of vision-the cone activationsunlike say MacLeod-Boynton space, 74 which is supposed to correspond to cone-opponent processes in the early visual pathways, or the hue and chroma coordinates of the Munsell color system, 75 which are linked to color appearance.Panel (D) of Figure 5 shows the fundamental CMFs plotted in l,m chromaticity coordinates.The locus of the spectrum is shown as the solid black line with selected wavelengths highlighted by the yellow circles.This locus and the purple line joining its bottommost points contains the area within which colors can be produced by real lights or by mixtures of real (physically realizable) lights.Points outside this area cannot be produced by real lights.The R, G and B primaries of Stiles and Burch are shown as colored squares and the black dashed triangle encloses all the colors that can be matched to positive amounts of R, G and B that is, the color gamut of these primaries.The three primary lights of 444, 526 and 645 nm chosen by Stiles and Burch enclose a large area of the realizable color gamut delimited by the spectrum locus and purple line shown in Panel (D).Although the spectrum locus lies close to the line connecting the G and R primaries in this plot, it in fact lies just outside the gamut (except at its corners) so that as previously mentioned one of the CMFs must be negative at all wavelengths.Also shown are the three LMS cone primaries as color diamonds connected by dashed white lines; these lie outside the gamut not only of RGB but also of the spectrum locus so that they are imaginary lights.The cone primaries are lights which, if they could be produced, would uniquely stimulate a single cone type.They cannot be produced because the cone spectral sensitivities overlap extensively throughout the spectrum, so that all lights excite more than one type.Note that the dashed white triangle fully encloses the spectrum locus so that the LMS CMFs in Panel (C) are everywhere positive as would be expected 
given the fundamental nature of the cone activations.

Two caveats are perhaps of interest here. First, the unique modulation of a single cone type is possible if the light is modulated around a non-zero mean, since this allows modulations both above and below the non-zero mean by a process called "silent substitution". 76 Yet, since the mean light also stimulates the other two cones, the activation is not unique; only the modulation is. Second, there have been attempts to produce percepts corresponding to such otherwise unattainable cone excitations. 78,79 It remains an open question how those colors are perceived.

In summary, the imaginary LMS primaries enclose the spectrum locus and produce CMFs that are always positive. Other imaginary primaries, such as XYZ (see below), can be chosen that also enclose the locus, so that their CMFs are always positive. Real primaries, such as RGB, must lie on or within the spectrum locus (plus the purple line) and produce CMFs at least one of which is non-positive at each wavelength.

| LINEAR TRANSFORMATIONS BETWEEN COLOR-MATCHING SPACES

The LMS space is the fundamental, physiologically relevant color-matching space since, in principle at least, it reflects the relative excitations of the cones themselves. Nevertheless, many prefer to plot color matches in other spaces. We consider two such color-matching spaces here: an RGB space with physically realizable primaries, and the XYZ space with the imaginary primaries introduced by the CIE in their 1931 proceedings 80 and consequently used extensively in applied color fields and in colorimetric devices.

To be consistent with Grassmann's laws, 26,81 any color-matching space must be a linear transformation of the LMS space, and consequently any two color-matching spaces must be linear transforms of each other. Converting between spaces simply requires specifying the relevant linear transformation, usually in the form of the components of a 3 × 3 matrix.

The linear transformation from RGB space to LMS space is given by:

( L )   ( L_R  L_G  L_B ) ( R )
( M ) = ( M_R  M_G  M_B ) ( G )     (11a)
( S )   ( S_R  S_G  S_B ) ( B )

The nine elements of this matrix are the three cone spectral sensitivities (in energy units) to each of the three physical primaries. For example, for a monochromatic light of wavelength λ_R, L_R = l(λ_R) is the sensitivity of the L-cones to R. Similar definitions apply to the other elements of the 3 × 3 matrix in relation to the other primaries and cone types (note that for broadband primary lights the sensitivities are replaced by integrals, for example L_R = ∫ P_R(λ) l(λ) dλ, where P_R(λ) is the power spectrum of the red primary; it is to avoid this complication that monochromatic lights are chosen). Equation (11a) allows us to convert from an RGB color-matching space to LMS space by breaking down each of the real RGB lights into their separate effects on the L-, M- and S-cones, and adding them together.

As a specific case of Equation (11a), we can transform RGB CMFs into LMS CMFs by applying the same matrix wavelength by wavelength (Equation (11b)). Indeed, this is how the CIE cone spectral sensitivities are defined: as a linear transformation of the 10-deg RGB CMFs of Stiles and Burch, 71 since, of the available sets of directly measured CMFs, they are the most secure and most comprehensive. The transformation from the Stiles and Burch 10-deg r10(λ), g10(λ), and b10(λ) CMFs to the three 10-deg cone fundamentals, l10(λ), m10(λ), and s10(λ), is given in Equation (12). The coefficients of that 3 × 3 matrix are the cone spectral sensitivities to the RGB primaries, which were determined from an extensive set of spectral sensitivity measurements and color matches made in genotyped color-deficient observers, some normal observers, and from the 10-deg matches themselves. 7,22,27,82
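As a minimal numerical sketch of Equations (11a) and (11b) (our illustration; the matrix entries below are placeholders, not the CIE coefficients), converting RGB tristimulus values or CMFs to LMS is a single 3 × 3 matrix multiplication whose columns are the cone sensitivities to each primary.

```python
import numpy as np

# Placeholder cone sensitivities to the R, G, B primaries (L_R, L_G, L_B; ...).
# The actual values are those of the CIE transformation discussed in the text.
M_RGB_TO_LMS = np.array([
    [0.30, 0.60, 0.10],   # L_R, L_G, L_B  (illustrative numbers only)
    [0.10, 0.70, 0.20],   # M_R, M_G, M_B
    [0.00, 0.05, 0.95],   # S_R assumed zero, as discussed for Equation (12)
])

def rgb_to_lms(rgb):
    """Equation (11a): break each primary into its effect on L, M and S and sum."""
    return M_RGB_TO_LMS @ np.asarray(rgb, dtype=float)

# Equation (11b): the same matrix applied wavelength by wavelength to RGB CMFs.
rgb_cmfs = np.random.rand(3, 441)        # stand-in for r(lambda), g(lambda), b(lambda)
lms_cmfs = M_RGB_TO_LMS @ rgb_cmfs       # rows are l(lambda), m(lambda), s(lambda)

print(rgb_to_lms([1.0, 0.5, 0.2]), lms_cmfs.shape)
```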
The standard 2-deg cone spectral sensitivities are based on the same transformation, but were adjusted for macular and photopigment optical densities appropriate for a 2-deg target field; for details see Reference [2]. Note that the units of cone activation are somewhat arbitrary, since the absolute values are unknown, so there are scaling factors for each cone in converting from RGB to LMS, and these scaling factors have been absorbed into the blue coefficients, so that they are normalized to 1 (see the third column of the 3 × 3 matrix in Equation (12)). The bottom-left coefficient, S_R in Equation (12), indicates the response of the S-cones to R, and is assumed to be zero (this will be true for any R unless it is of unusually short wavelength).

Given that we know and have generated l(λ), m(λ), and s(λ) using the formulae above, how do we linearly transform those CMFs to a new set of real primaries? And, more generally, how do we convert LMS space into some other RGB space? The procedure is straightforward. First, we populate the coefficients of the upper 3 × 3 matrix in Equation (11a) (i.e., input the cone spectral sensitivities to the new RGB primaries). Then we find the inverse matrix and calculate the second row of Equation (13), which expresses (R, G, B) in terms of (L, M, S) via the inverse of the matrix in Equation (11a). In theory, the coefficients of the lower 3 × 3 inverse matrix are the tristimulus values produced by the three imaginary cone primaries, e.g., R_L = ∫ L(λ) r(λ) dλ, where L(λ) is the imaginary spectral power distribution of the L-cone primary. As in Equations (11b) and (12), we can substitute CMFs in place of the generic tristimulus values in Equation (13) to convert the LMS CMFs into RGB CMFs.

Using these transformations, we plot in Figure 6 three chromaticity diagrams, each showing our cone fundamentals as the spectrum locus (white curves and white circles) and the CIE 2006 10-deg cone CMFs (black dashed curves and black circles). The upper panel shows the l,m coordinates described above. The middle panel shows the spectrum locus in r,g chromaticity coordinates for l(λ), m(λ), and s(λ) transformed to the Stiles and Burch RGB primaries of 444, 526 and 645 nm, and the lower panel shows the spectrum locus in x,y chromaticity coordinates for l(λ), m(λ), and s(λ) transformed to the XYZ primaries according to Equation (15). Note that r,g and x,y coordinates are derived from RGB and XYZ tristimulus values in the same way that l,m are derived from LMS, that is, r = R / (R + G + B), g = G / (R + G + B), and x = X / (X + Y + Z), y = Y / (X + Y + Z). The agreement is excellent except for a minor discrepancy centered at 500 nm in r,g coordinates in the middle panel. This discrepancy reflects mainly the exaggerated stretching of this space (compared with the l,m and x,y diagrams), and is likely to have little practical effect on color matching predictions. 83 The agreement between the spectral loci transformed from the tabulated and generated CIE cone fundamentals is excellent for both l,m and x,y, with mean absolute errors (MAEs) for points sampled at 1-nm intervals of 0.0006 for l(λ), 0.0008 for m(λ), 0.0007 for x(λ) and 0.0011 for y(λ), but because of the discrepancy around 500 nm the agreement is less good for r,g, with MAEs of 0.0025 for r(λ) and 0.0029 for g(λ).

In general, the formulae presented here can be used to generate CMFs and chromaticity coordinates for any arbitrary set of primaries with small errors compared to the tabulated CIE data.
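The chromaticity projections and the mean-absolute-error comparison described above reduce to a few lines of array arithmetic; the sketch below is our illustration, with random arrays standing in for the tabulated and generated CMFs.

```python
import numpy as np

def chromaticity(t):
    """Project tristimulus values (rows are L/M/S, R/G/B or X/Y/Z; columns are
    wavelengths) onto the unit-sum plane and return the first two coordinates,
    e.g. (l, m), (r, g) or (x, y)."""
    t = np.asarray(t, dtype=float)
    return t[:2] / t.sum(axis=0)

def mean_absolute_error(a, b):
    """MAE between two spectrum loci sampled at the same wavelengths (e.g. 1-nm steps)."""
    return np.mean(np.abs(np.asarray(a) - np.asarray(b)), axis=1)

# Placeholders for the tabulated CIE fundamentals and the fundamentals generated
# from the formulae in the text, both transformed to the same primaries.
tabulated = np.random.rand(3, 441)
generated = tabulated + 1e-3 * np.random.randn(3, 441)

mae_first, mae_second = mean_absolute_error(chromaticity(tabulated), chromaticity(generated))
print(mae_first, mae_second)
```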
| COLOR VISION IN THE UVA REGION ABOVE 360 NM

The extrapolations of the three cone absorbance spectra from 400 to 360 nm predict the relative cone excitations, and thus color matches, in that spectral region. The predictions can be seen clearly in the three panels of Figure 6, or blown up in l,m coordinates in Figure 7. As wavelength decreases below 420 nm, the spectrum locus reverses and then moves diagonally up and towards the right, reflecting slight increases in both the L- and the M-cone spectral sensitivities in that region resulting from β-band absorbances, and a corresponding decrease in S-cone sensitivity (see Figure 1). Thus, lights would be expected to appear less violet (i.e., bluer) and more desaturated as the wavelength decreases in this short-wavelength region. Yet how plausible, or indeed useful, are these color matching predictions?

Expectations of a rotation in the spectrum locus and a color reversal in the long-wavelength UVA region due to β-band cone absorbances have been stated before, 84,85 but are those expectations consistent with the color appearances of those lights? Helmholtz 86 noted that the violet color of 400 nm turned back towards blue as the wavelength decreased, which is consistent with the chromaticity diagram of Figure 7. However, color perception in this spectral region, as Helmholtz pointed out, is complicated by the fluorescence of the lens and retina. Lenticular fluorescence in the UVA region is primarily due to a fluorophore with an excitation maximum at 360 nm that emits light with an emission maximum between 420 and 440 nm, and it therefore alters the color appearance of the exciting light; see References 87 and 88 for reviews.

The effects of lenticular fluorescence can be eliminated, and UVA sensitivity increased, by using aphakic observers, who lack the lens. In such observers, Gaydon 89 and Bachem 90 reported that shorter wavelengths look bluer, which again is consistent with the color reversal predicted in the violet corner of the chromaticity diagram of Figure 7. Tan 91 made more extensive color matching measurements in two aphakic observers, the mean results from which are replotted as chromaticity coordinates in figure 8 of Stark and Tan. 85 The mean spectrum locus for these two aphakic observers, like that shown in Figure 7, reverses in the violet part of the spectrum and remains broadly consistent with those predictions until about 370 nm. Below 370 nm, surprisingly, the spectral locus of these aphakic observers continues to rotate until it points towards 445 nm. This further rotation is indicative of a relative increase in S-cone excitation in that region. Tan points out on p. 78 of his thesis, however, that because of system limitations "The results must therefore be judged qualitatively (rather) than quantitatively", so it is not clear how much we can rely on these color matching data. Importantly, though, as Tan also points out, the finding that UV lights can be matched using three primaries means that there is no evidence for another unknown photopigment in the UV. 91 Yet other evidence from one of the two aphakic observers who made the color matches, obtained using chromatic adaptation to favor different cone responses, suggested that the S-cone sensitivity steadily increases in the UVA, implausibly rising above its sensitivity at the S-cone λmax, 91 a finding later replicated in another aphakic observer. 92
An increase in S-cone sensitivity into the UVA would account for the continued rotation of the aphakic spectrum locus below 370 nm, but its cause is unknown (see Reference 85 for discussion), and it is inconsistent with known human and primate S-cone photopigment spectra. 24

In general, the color matching predictions in the UVA shown in Figures 6 and 7 are plausible, but their usefulness in predicting color matches is inevitably limited, since any matches will be affected by fluorescence and perhaps by other unknown factors. New measurements in the UVA are undoubtedly needed, but to be useful the effects of fluorescence and cone spectral sensitivity must be disentangled.

FIGURE 6 Cone fundamentals transformed to other primaries and plotted as chromaticity coordinates calculated from the original tabulated CIE 2006 fundamentals (dashed black lines, and small black circles at 10-nm steps) and from the formulae presented here (solid white lines, and larger white circles). Upper panel: spectrum locus in l,m chromaticity coordinates. Middle panel: spectrum locus in r,g chromaticity coordinates for RGB primaries of 444, 526 and 645 nm. Lower panel: spectrum locus in x,y chromaticity coordinates for CIE XYZ primaries.

| CONCLUSION

A practical set of formulae is presented that allows the generation of cone fundamentals for standard (mean) observers for 2-deg and 10-deg vision that accurately reproduce the CIE 2006 observer. These fundamentals have been extended from 390 to 360 nm at short wavelengths and from 830 to 850 nm at long wavelengths (and from 615 to 830 nm for the S-cones). The principal reason for extending the range was to allow the cone fundamentals to be shifted along a log wavelength scale to model individual differences in cone λmax, but the extensions are speculative and should be used with caution. The formulae also allow the effects of potentially large individual differences in macular, lens and photopigment optical densities on the cone fundamentals to be easily modeled. The resulting cone fundamental CMFs can then be straightforwardly linearly transformed to generate CMFs for real primaries whose power spectra are known, or for imaginary primaries, such as XYZ, provided the relevant transformation matrix is known, and hence allow modeling of the effects of individual differences on those functions. The uncertainties at wavelengths below 400 nm suggest the need for new measurements at longer UV wavelengths. However, any such measurements, and indeed any color matching predictions, in that region are complicated by the fluorescence of the lens and by the lens opacity, especially in older observers (see above).

We have also provided a version of a common template shape to account for all three cone absorbance spectra. Given the assumption that the absorbance spectra should be shape invariant along a log wavelength scale, this can be usefully used to generate and investigate cone absorbance spectra and corneal spectral sensitivities for photopigments with different λmax. The common template should not be used to generate corneal cone spectral sensitivities or cone fundamentals if compliance with the CIE standard is required.
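The λmax shift mentioned above, under the stated assumption that an absorbance spectrum is shape invariant along a log wavelength scale, amounts to translating the template on a log10(λ) axis. A minimal sketch (ours; the template curve below is a placeholder, not the common template of the paper) is:

```python
import numpy as np

def shift_lambda_max(wavelengths, template, lmax_old, lmax_new):
    """Shift an absorbance template along log10(wavelength), assuming shape
    invariance on that axis; points shifted outside the grid are left undefined."""
    log_w = np.log10(wavelengths)
    shift = np.log10(lmax_new) - np.log10(lmax_old)
    return np.interp(log_w - shift, log_w, template, left=np.nan, right=np.nan)

wavelengths = np.arange(360.0, 851.0)                              # nm, extended range
template = np.exp(-0.5 * ((wavelengths - 530.0) / 40.0) ** 2)      # placeholder shape
shifted = shift_lambda_max(wavelengths, template, lmax_old=530.0, lmax_new=555.0)
print(wavelengths[np.nanargmax(shifted)])                          # peak near 555 nm
```

The edges become undefined after the shift, which is one reason the extended wavelength range is needed before shifting.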
We have implemented the above equations in a computer program written in Python that can be used to generate cone fundamentals and CMFs for standard and individual observers. The program can be downloaded from http://github.com/CVRL-IoO/Individual-CMFs.git.

FIGURE 2 Standard macular optical density spectrum (upper panel, pink solid line) and lens pigment optical density spectrum (middle panel, orange solid line) extrapolated from the Stockman and Sharpe and CIE color matching standard, with best-fitting 11th-order (macular) or 9th-order (lens) Fourier polynomials (white dashed lines). See text for details. Errors in the fitted functions for macular (pink solid line) and lens (orange solid line) are shown in the bottom panel.

FIGURE 3 Logarithmic (upper panels) and linear (lower panels) corneal L-, M- and S-cone spectral sensitivities (yellow dashed lines) calculated from the Fourier polynomials for 2-deg (left panels) and 10-deg (right panels) vision. The CIE L-, M- and S-cone standards are shown by the red, green, and blue solid lines, respectively. The sensitivities are given in relative energy units.

FIGURE 7 An expanded view of the short-wavelength region of the spectrum locus in l,m coordinates. Symbols as in Figure 6.

TABLE 2 Fourier coefficients for the macular and lens density spectra.

TABLE 3 Spectral shifts caused by changes in key amino acids in the L and M opsins.

TABLE A1 Summary of previous research into the changes in λmax caused by amino acid substitutions in the L- or M-cone opsin genes. a Summary shifts from Figure 1 of a review article by Neitz and Neitz. 62 b Recombinant pigments and spectroscopy with shifts based on differences taken from Table 1 of Merbs and Nathans. 64 c Recombinant pigments and spectroscopy with shifts based on Figure 2 of Asenjo, Rim, and Oprian. 63 d Recombinant pigments and spectroscopy from Table 1 of Merbs and Nathans. 67
15,330
2023-07-19T00:00:00.000
[ "Mathematics" ]
Search for Lepton Flavor Violating tau- Decays Including with a K0s Meson We have searched for the lepton flavor violating decays $\tau^-\to \ell^-\ks$ ($\ell = e {or} \mu$), using a data sample of 281 fb$^{-1}$ collected with the Belle detector at the KEKB $e^+e^-$ asymmetric-energy collider. No evidence for a signal was found in either of the decay modes, and we set the following upper limits for the branching fractions: ${\cal{B}}(\tau^-\to e^-\ks)<5.6\times 10^{-8}$ and ${\cal{B}}(\tau^-\to \mu^-\ks)<4.9\times 10^{-8}$ at the 90% confidence level. These results improve the previously published limits set by the CLEO collaboration by factors of 16 and 19, respectively. INTRODUCTION Lepton flavor violation (LFV) is allowed in many extensions of the Standard Model (SM), such as Supersymmetry (SUSY) and leptoquark models. In particular, lepton flavor violating decays with K 0 S mesons are discussed in models with heavy singlet Dirac neutrinos [1], R−parity violation in SUSY [2,3], dimension-six effective fermionic operators that induce τ − µ mixing [4]. Experiments at the B-factories allow searches for lepton flavor violating decays with a very high sensitivity. The best upper limits of B(τ − → e − K 0 S ) < 9.1×10 −7 and B(τ − → µ − K 0 S ) < 9.5 × 10 −7 at the 90% confidence level were set by the CLEO experiment using 13.9 fb −1 of data [5]. In this paper, we report a search for the lepton flavor violating decays τ − → ℓ − K 0 S (ℓ = e or µ)[ †] using 281 fb −1 of data collected at the Υ(4S) resonance and 60 MeV below it with the Belle detector at the KEKB e + e − asymmetric-energy collider [6]. The Belle detector is a large-solid-angle magnetic spectrometer that consists of a silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel thresholď Cerenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter comprised of CsI(Tl) crystals (ECL), all located inside a superconducting solenoid coil that provides a 1.5 T magnetic field. An iron fluxreturn located outside of the coil is instrumented to detect K 0 L mesons and to identify muons (KLM). The detector is described in detail elsewhere [7]. Particle identification is very important in this measurement. We use particle identification likelihood variables based on the ratio of the energy deposited in the ECL to the momentum measured in the SVD and CDC, the shower shape in the ECL, the particle range in the KLM, the hit information from the ACC, the measured dE/dX in the CDC and the particle's time-of-flight from the TOF. For lepton identification, we form a likelihood ratio based on the electron probability P(e) [8] and the muon probability P(µ) [9] determined by the responses of the appropriate subdetectors. For Monte Carlo (MC) simulation studies, the following programs have been used to generate background events: KORALB/TAUOLA [10] for τ + τ − , QQ [11] for BB and continuum, BHLUMI [12] for Bhabha events, KKMC [13] for e + e − → µ + µ − and AAFH [14] for two-photon processes. Since the QQ generator does not include some rare processes that potentially contribute to final states with a K 0 S meson, we generated special samples of e + e − → D * + D ( * )− , a process that was recently observed by the Belle collaboration [15]. Signal MC is generated by KORALB/TAUOLA. Signal τ decays are two-body and assumed to have a uniform angular distribution in the τ lepton's rest frame. 
The Belle detector response is simulated by a GEANT 3 [16] based program. All kinematic variables are calculated in the laboratory frame unless otherwise specified. In particular, variables calculated in the e + e − center-of-mass (CM) frame are indicated by the superscript "CM". DATA ANALYSIS We search for τ + τ − events in which one τ (signal side) decays into ℓK 0 S (K 0 S → π + π − ), while the other τ (tag side) decays into one charged track (with a sign opposite to that of the signal-side lepton) and any number of additional photons and neutrinos. Thus, the [ †] Unless otherwise stated, charge conjugate decays are included throughout this paper. experimental signature is: All charged tracks and photons are required to be reconstructed within a fiducial volume, defined by −0.866 < cos θ < 0.956, where θ is the polar angle with respect to the direction opposite to the e + beam. We select charged tracks with momenta transverse to the e + beam p t > 0.1 GeV/c and photons with energies E γ > 0.1 GeV. Candidate τ -pair events are required to have four charged tracks with a zero net charge. Events are separated into two hemispheres corresponding to the signal (three-prong) and tag (one-prong) sides by the plane perpendicular to the thrust axis [17]. The magnitude of the thrust is required to be larger than 0.9 to suppress the qq continuum background. The K 0 S is reconstructed from two oppositely-charged tracks in the signal side that have an invariant mass 0.482 GeV/c 2 < M π + π − < 0.514 GeV/c 2 , assuming a pion mass for both tracks. The π + π − vertex is required to be displaced from the interaction point (IP) in the direction of the pion pair momentum [18]. In order to avoid fake K 0 S candidates from photon conversions (i.e. γ → e + e − ), the invariant mass reconstructed by assigning the electron mass to the tracks, is required to be greater than 0.2 GeV/c 2 . The signal side track not used in the K 0 S reconstruction is required to satisfy the lepton identification selection. The electron and muon identification criteria are P(e) > 0.9 with p > 0.3 GeV/c and P(µ) > 0.9 with p > 0.6 GeV/c, respectively. After the event selection described above, most of the remaining background comes from generic τ + τ − and continuum events that contain a real K 0 S meson. To ensure that the missing particles are neutrinos rather than photons or charged particles that fall outside the detector acceptance, we impose additional requirements on the missing momentum vector, p miss , calculated by subtracting the vector sum of the momenta of all tracks and photons from the sum of the e + and e − beam momenta. We require that the magnitude of p miss be greater than 0.4 GeV/c and that its direction point into the fiducial volume of the detector, as shown for the τ − → µ − K 0 S mode in Fig. 1 (a) and (b). The total visible energy in the CM frame, E CM vis , is defined as the sum of the energies of the K 0 S candidate, the lepton, the tag-side track (with a pion mass hypothesis) and all photon candidates. We require E CM vis to satisfy the condition: 5.29 GeV < E CM vis < 10.0 GeV (see Fig. 1 (c)). Since neutrinos are emitted only on the tag side, the direction of p miss should lie within the tag side of the event. The cosine of the opening angle between p miss and the tag-side track in the CM system, cos θ CM tag−miss , is therefore required to be greater than 0 (see Fig. 1 (d)). For all kinematic distributions shown in Fig. 1, reasonable agreement between the data and background MC is observed. 
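The chain of selection requirements above can be expressed as a boolean mask over candidate events; the sketch below is our illustration (the column names are hypothetical), not the Belle analysis code.

```python
import numpy as np

def select_candidates(ev):
    """Boolean mask implementing the main selection cuts described in the text.
    'ev' is a dict of equal-length arrays; the key names are hypothetical."""
    return (
        (ev["thrust"] > 0.9)
        & (ev["m_pipi_gev"] > 0.482) & (ev["m_pipi_gev"] < 0.514)   # K0S mass window
        & (ev["m_ee_gev"] > 0.2)                                    # photon-conversion veto
        & (ev["lepton_likelihood"] > 0.9)                           # P(e) or P(mu)
        & (ev["p_miss_gev"] > 0.4)
        & (ev["e_vis_cm_gev"] > 5.29) & (ev["e_vis_cm_gev"] < 10.0)
        & (ev["cos_theta_tag_miss_cm"] > 0.0)
    )

# Toy events just to show usage of the mask.
rng = np.random.default_rng(0)
n = 1000
events = {
    "thrust": rng.uniform(0.85, 1.0, n),
    "m_pipi_gev": rng.uniform(0.45, 0.55, n),
    "m_ee_gev": rng.uniform(0.0, 1.0, n),
    "lepton_likelihood": rng.uniform(0.0, 1.0, n),
    "p_miss_gev": rng.uniform(0.0, 2.0, n),
    "e_vis_cm_gev": rng.uniform(4.0, 11.0, n),
    "cos_theta_tag_miss_cm": rng.uniform(-1.0, 1.0, n),
}
print(int(select_candidates(events).sum()), "of", n, "toy candidates pass")
```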
In order to suppress background from qq (q = u, d, s, c) continuum events, the following requirements on the number of the photon candidates on the signal and tag side are imposed: n SIG ≤ 1 and n TAG ≤ 2, respectively. Finally, the correlation between the reconstructed momentum of the ℓK 0 S system, p ℓK S , and the cosine of the opening angle between the lepton and K 0 S , cos θ ℓK S is employed to further suppress background from generic τ + τ − and continuum events via the requirements: cos θ ℓK S < 0.14 × log(p ℓK S − 2.7) + 0.7, where p ℓK S is in GeV/c (see Fig. 2). While this condition retains 99% of the signal, 99% of the generic τ + τ − and 84% of the uds continuum background are removed. Following all the selection criteria, the signal detection efficiencies for the τ − → e − K 0 S and τ − → µ − K 0 S modes are 15.0% and 16.2%, respectively. decays. The remaining continuum backgrounds in the τ − → µ − K 0 S mode are combinations of a true K 0 S meson and a fake lepton. To optimize our search sensitivity, we select an elliptically shaped signal region of minimum area with the same signal acceptance as that of a rectangular box corresponding to ±5σ in the MC resolution for the M ℓK 0 S − ∆E plane. The signal efficiencies after all requirements are 11.8% for the τ − → e − K 0 S and 13.5% for the τ − → µ − K 0 S , respectively. As there are few remaining MC background events in the signal ellipse, we estimate the background contribution using the M ℓK 0 S sideband regions defined by rectangular areas beside the signal ellipse shown in Fig. 3 (a) and (b). Extrapolation to the signal region assumes that the background distribution is flat in M ℓK 0 S . We find the expected background in the ellipse to be 0.2 ± 0.2 events for both modes. Finally, we uncover the blinded region and find no data events in the signal region of the τ − → e − K 0 S and τ − → µ − K 0 S modes (see Fig. 3 (a) and (b)). Since no statistically significant excess of data over the expected background in the signal region is observed, we apply a frequentist approach to calculate upper limits on the signal yields [19]. The resulting limits for the signal yields at 90% confidence level, s 90 , are 2.23 events in both modes. The upper limits on the branching fraction before the inclusion of systematic uncertainties are then calculated as where B(K 0 S → π + π − ) = 0.6895 ± 0.0014 [20] and N τ τ = 251 × 10 6 is the number of τ −pairs produced in 281 fb −1 of data. We obtain N τ τ using σ τ τ = 0.892±0.002 nb, the e + e − → τ + τ − cross section at the Υ(4S) resonance calculated by KKMC [13]. The resulting values are The dominant systematic uncertainties on the detection sensitivity: 2εN τ τ B(K 0 S → π + π − ) come from K 0 S reconstruction and tracking efficiencies. These are 4.5% and 4.0%, respectively, for both modes. Other sources of the systematic uncertainties are: the trigger efficiency (0.5%), lepton identification (2.0%), MC statistics (0.3%), branching fraction of K 0 S → π + π − (0.2%) and luminosity (1.4%). Assuming no correlation between them, all these uncertainties are combined in quadrature to give a total of 6.5%. While the angular distribution of τ − → ℓ − K 0 S decay is initially assumed to be uniform in this analysis, it is sensitive to the lepton flavor violating interaction structure [21]. The spin correlation between the τ lepton in the signal and that in the tag side must be considered. 
A possible nonuniformity was taken into account by comparing the uniform case with those assuming V − A and V + A interactions, which result in the maximum possible variations. No statistically significant difference in the M ℓK 0 S -∆E distribution or the efficiencies is found compared to the case of the uniform distribution. Therefore, systematic uncertainties due to these effects are neglected in the upper limit evaluation. Upper limits on the branching fractions at the 90% confidence level including these systematic uncertainties are calculated with the POLE program without conditioning [22]. The resulting upper limits on the branching fractions at the 90% confidence level are DISCUSSION In the R−parity violating SUSY scenario, there are three kinds of terms (λ, λ ′ and λ ′′ ) with a total of 45 couplings. In this model, τ − could decay into ℓ − K 0 S via tree-level scalar neutrino exchange by the λλ ′ couplings. Using our results, the limits on the products λλ ′ as a function of the scalar neutrino mass (Mν) are given as [2], where i is the generation number. These bounds are more stringent than the previous bounds obtained in R−parity violating models from τ − decay including a pseudoscalar meson [2,3]. The improved sensitivity to rare τ lepton decays achieved in this work can be used to constrain the new physics scale for the dimension-six fermionic effective operators involving τ − µ flavor violation, motivated by neutrino oscillations [4]. From our upper limit for the branching fraction of the τ − → µ − K 0 S decay, lower bounds of 36.2 TeV and 37.2 TeV can be obtained for the axial-vector and pseudoscalar operators, respectively. CONCLUSION In conclusion, we have searched for the lepton flavor violating decays τ − → ℓ − K 0 S (ℓ = e or µ) using data collected with the Belle detector at the KEKB e + e − asymmetric-energy collider. We found no signal in either mode. The following upper limits on the branching fractions at the 90% confidence level are obtained: B(τ − → e − K 0 S ) < 5.6 × 10 −8 and B(τ − → µ − K 0 S ) < 4.9 × 10 −8 . These results improve the search sensitivity by factors of 16 and 19, respectively, compared to the previous limits obtained by the CLEO experiment.
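The pre-systematics upper limits implied by the quantities quoted above (s90 = 2.23 events, efficiencies of 11.8% and 13.5%, Nττ = 251 × 10^6 and B(K0S → π+π−) = 0.6895) can be cross-checked in a few lines; this is our sketch of the formula B < s90 / (2 ε Nττ B(K0S → π+π−)) implied by the text, not the POLE calculation used for the published limits.

```python
def branching_upper_limit(s90, efficiency, n_tau_pairs, br_ks_pipi=0.6895):
    """Upper limit on B(tau -> l K0S) before systematics: the 90% C.L. signal
    yield divided by the detection sensitivity 2*eff*Ntautau*B(K0S -> pi+pi-)
    (the factor 2 counts both tau leptons per pair)."""
    return s90 / (2.0 * efficiency * n_tau_pairs * br_ks_pipi)

N_TAU_PAIRS = 251e6     # from sigma_tautau = 0.892 nb and 281 fb^-1
S90 = 2.23              # 90% C.L. limit on the signal yield, both modes

for mode, eff in (("tau -> e K0S", 0.118), ("tau -> mu K0S", 0.135)):
    print(mode, f"{branching_upper_limit(S90, eff, N_TAU_PAIRS):.2e}")
# Prints roughly 5.5e-08 and 4.8e-08, consistent with the published limits of
# 5.6e-08 and 4.9e-08 once systematic uncertainties are folded in via POLE.
```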
3,276
2006-05-09T00:00:00.000
[ "Physics" ]
Application to the additive fabrication of Object Oriented Methodology Based on some experimentation in NUM3D Platform, we integrate the possibilities of additive manufacturing processes and the characteristics of the materials. We extend to the specific property inherent in form and control to define a new tetraptych CPMF (Control/Process/Material/Form). We propose, from this tetraptych, an Oriented Object Modelization suited to Additive Manufacturing. In this paper, we set up a virtual representation specific to industrial manufacturing. The final objective, in fine, will be to use this representation in a decision-support tool. Introduction The objective of this paper is to produce an Object Oriented Methodology like structure suited to Additive Manufacturing (AM).Ultimately, we will use these attributes to implement a decision support tool.The result will be a numerical model integrating all the manufacturing constraints. Our Object Oriented model integrate the manufacturing knowledge.In particular, the possibilities of manufacturing processes but also the characteristics of the materials and the ability to control the parts produced. The proposed structure will permit to better take into account the specificity of additive manufacturing in numerical simulation algorithms (As meso-structures or bio-mimicry).That will also be able to validate and define an artifact part for validating a process or directly a machine. The establishment of a detailed ontology of additive manufacturing was required to clearly explain the parameters who are, for time, implied or empirical.This ontology has two objectives, allow experienced users to better understand the field, and for novices, to get the ability to integrate this technology in their specialties (e.g.topology optimization). State of art 2.1 Database There are many specific database to materials as well as processes but few that combine the two.Marsden [10,11] highlights the role of metadata in manufacturing by integrating processes and material properties with the world of product development.Munguia [12], meanwhile, shows how to integrate a database into an artificial intelligence system dedicated to additive manufacturing. Based on the work of Professor Ashby [13], the company GRANTA Design is the leading materials information management system.With their solution GRANTA MI "you can apply this proven software to capture all of your vital data on Additive Manufacturing.Data is captured in one place, with full traceability". The objective is to create an essential knowledge base to develop a better understanding of processes (especially additive manufacturing).This database must be expanding by integrating other essential criteria in order to serve as a tool for qualification and certification of manufactured parts. 
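To make the idea of such a knowledge base concrete, a minimal sketch of a record linking a material batch, logged process parameters and inspection results might look as follows (our illustration; the field names are hypothetical and are not those of GRANTA MI or any existing database).

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MaterialBatch:
    supplier: str
    alloy: str
    certificate_id: str

@dataclass
class Inspection:
    quantity: str          # e.g. "wall_thickness_mm"
    nominal: float
    measured: float
    tolerance: float

@dataclass
class BuildRecord:
    """One build: machine log data, material batch and downstream inspections."""
    machine: str
    process: str                      # one of the seven standardized AM processes
    parameters: Dict[str, float]      # logged process parameters for this build
    material: MaterialBatch
    inspections: List[Inspection] = field(default_factory=list)

    def within_tolerance(self) -> bool:
        """Simple conformity check feeding the qualification/certification use case."""
        return all(abs(i.measured - i.nominal) <= i.tolerance for i in self.inspections)

record = BuildRecord(
    machine="machine-01",
    process="Powder Bed Fusion",
    parameters={"laser_power_w": 200.0, "layer_thickness_mm": 0.03},
    material=MaterialBatch("SupplierX", "Ti-6Al-4V", "CERT-001"),
)
record.inspections.append(Inspection("wall_thickness_mm", 0.50, 0.52, 0.05))
print(record.within_tolerance())
```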
Additive Manufacturing Template Database provides a data structure (Figure 1) developed by Granta based on extensive experience from the AMAZE projects. It allows the integration of data from our own laboratory tests or from our experience; quoting Granta:
- "Import 'logfiles' directly from Additive Manufacturing machines, store process parameters, extract logged data for specific builds;
- Link build information directly to supplier data on the batches of material that were processed to make a part;
- Capture and associate part finishing, inspection, and test data with individual parts, and feed these results into statistical analyses to determine mechanical properties and monitor reproducibility;
- Export properties for use in simulation and design, capturing simulated outputs for use in optimizing part design and production."

Object Oriented Modelling
Object orientation is a well-known technique in computer science. It was introduced in 1966 by Ole-Johan Dahl and Kristen Nygaard [8] and developed further in the work of Alan Kay in 1968 [9]; for this work they won the Turing Award in 2001 and 2003, respectively. The main idea of object orientation is "do not care how the other parts of the system compute the answer to your question; just obtain an answer". Objects are seen as "black boxes": only their public properties are of interest. The language usually used to represent objects is UML. The model abstracts reality into virtual objects that interact with each other; the real world is built from material or immaterial objects, which are represented in the model. Object orientation is, in fact, a technique for approaching a problem and splitting it into smaller sub-problems. The originality of our proposal is to use OOM to virtualize the characteristics inherent in manufacturing production, and in particular in additive manufacturing. To our knowledge, no previous work has been developed in this direction.

Ontology in Additive Manufacturing
An Object Oriented model is a theoretical construction translating a practical case. Additive manufacturing is a new technology and, therefore, does not yet have the objectivity necessary to formalize this practice. We rely on the existing standard ISO CD/17296-2, which we reinforce in this paper with an ontology dedicated to AM, one aim of which is to distinguish field knowledge from operational knowledge.

Problem
Our goal is to link the operational properties and structural properties of classes modeling the constraints inherent in manufacturing production, particularly in the context of additive manufacturing. All existing databases are based primarily on the characteristics of processes and materials. However, feedback from multiple tests on the NUM3D platform leads us to consider that more characteristics are needed. Therefore, we propose two new characteristics (Control and Form) and thus define our "tetraptych CPMF" (detailed in Figure 0).

Generating the software
For a given part (form / material), we would like to develop a decision-making tool for the manufacturing process. In this work we focus on additive manufacturing, but the proposed methodology is applicable to other "traditional" processes.
Implementing a basic algorithm that can adapt to the different elements of our "tetraptych CPMF" (Control / Process / Materials / Form) defining a manufactured object requires a dynamic evolution of the program. Depending on the differences between the elements of the tetraptych, the responses to the same need may require different methods. For example, for the realization of the part below (Figure 2), our need is to prevent the collapse of the cantilever. The method provided differs depending on the type of process: generating a support structure in the same material is necessary in FDM, filling with a dedicated support material is needed in Material Jetting, but no action is required in Binder Jetting or SLS because the powder bed provides this function. It is for these reasons that the object-oriented methodology seemed appropriate to resolve our problem.

The same structure will also help specialize existing algorithms. For example, topology optimization algorithms are not specifically adapted to additive manufacturing but are general. Our proposal will guide them towards the seven standardized additive manufacturing processes and thus best adapt the numerical models to the mechanical stresses in their environment. Work in support of the OptiFabAdd project highlights a marked improvement of topologically optimized forms when the specificities of AM are added [17,18]. Topological optimization algorithms are of course not the only ones to be adapted to additive manufacturing; indeed, many studies concern meso-structures as well as bio-mimicry [19], [20], [21], [22], [23], [24]. A development suited to additive manufacturing would specialize these algorithms and make them more efficient.

Consideration of dimensional / geometric / topological control
The vast majority of works that link materials to processes through a general database system do not take into account the ability of the process to realize the parts dimensionally (accuracy). Therefore, we propose the integration of a "Control" criterion able to take into account the required accuracy and check the capability of the device. There is no need to design a part better than what the device can realize or than what we can control! The ability of additive manufacturing devices to produce freeform shapes forced us to integrate 3D scanning means [25,26] in addition to "traditional" control means such as CMM [27].

The resulting program will allow users decision support with various optimality parameters depending on our tetraptych. Thus, for a given requirement on an object, we can propose the best process (among the seven standardized additive manufacturing processes) depending on the needs. For example, given a requirement of precision and a need for rapid manufacturing, the software will return the fastest process capable of satisfying the required accuracy and associate the appropriate means of control. The result of the software will give not only the method but also the associated devices.
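The cantilever example above suggests the kind of polymorphism we have in mind: the same question (how to prevent the overhang from collapsing) gets a process-specific answer through inheritance. A minimal Python sketch follows (ours; the class and method names are illustrative, not the actual API of the tool).

```python
from abc import ABC, abstractmethod

class AdditiveProcess(ABC):
    """Abstract 'Process' element of the CPMF tetraptych."""

    @abstractmethod
    def support_strategy(self, overhang_angle_deg: float) -> str:
        """Answer, as a black box, how a cantilever of given overhang is supported."""

class FusedDepositionModeling(AdditiveProcess):
    def support_strategy(self, overhang_angle_deg: float) -> str:
        # FDM: generate support structures (same or soluble material) for steep overhangs.
        return "generate support structures" if overhang_angle_deg > 45 else "no support needed"

class MaterialJetting(AdditiveProcess):
    def support_strategy(self, overhang_angle_deg: float) -> str:
        # Material jetting: fill the overhang region with a dedicated support material.
        return "fill with support material"

class BinderJetting(AdditiveProcess):
    def support_strategy(self, overhang_angle_deg: float) -> str:
        # Binder jetting / SLS: the surrounding powder bed supports the part.
        return "no action (powder bed supports the part)"

for process in (FusedDepositionModeling(), MaterialJetting(), BinderJetting()):
    print(type(process).__name__, "->", process.support_strategy(60.0))
```

Swapping the instantiated class is what dynamically changes the behavior of the global algorithm, which is the adaptation mechanism described above.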
Integration of knowledge in processes and specific materials for additive manufacturing To validate your model, we based on the capitalization of knowledge realized on the NUM3D platform (CReSTIC laboratory from University of Reims Champagne Ardenne).We were able to realize different test allows us to determine the evolution in the mechanical characteristics in function of three criteria (materials, processes and orientation) on machines at our disposal [25].The establishment DOE has given us the critical values on the limits of our machines.Thus, the minimum wall thickness or the minimum diameter of printable and cleaned without deterioration channels of the parts could be determined according to the materials and processes used in manufacturing [15]. Proposal for structuration We wish to enrich and complete this type of bases by adding two new characteristics: x The shape (this notion being just begun in some bases...): The geometric analysis of a 3D model allowing us, for example, fixed materials and processes, to predict differences in resistance of the part manufactured.It also allows, according to the methods, to define a better orientation of the workpiece manufactured in the device environment. x Means of controls: we wish to integrate earlier in the manufacturing process, the possible constraints inherent to input controls.It is also important to align the precision of the manufacturing and control tools.These two new characteristics (Shape and Control) added to conventional constraints (Process and Material) define what we call the "tetraptych CPMF" (Control / Process / Materials / Form). Classes using tree According to [6], we have detailed our proposition into 6 key steps: x Definition of the Domain and its scope; x Reuse of existing ontologies; (e.g.CES) x Enumerate important terms in the ontology; (e.g.Brainstorming, FreeMind) x Definition of classes and their prioritization; x Properties of classes and attributes; x Creating instances; The manufacturing processes are all types (subtractive / additive / other).In our study, we have focused on the additive manufacturing processes. Description of structuration Our goal is to analyze a manufactured product split into four independent aspects.These aspects are (figure 3): x The shape of the object and all its geometric characteristics modeled in 3D: class = "Form" x The constituent materials with their mechanical characteristics: class = "Materials" x The manufacturing processes have realized the piece manufactured with all these specific: class = "Process" x A method (or more) to verify the compliance of the finished product: class = "Control" In the CAD modeling (CATIA, SW, PTC ...), we can numerically represent a product based on its shape and constituent material.Therefore, we call "numerical model" a class composed by the two classes "Shapes" and "materials" and which will represent the product DFN. In fact, we imply that it is not usually to take into account the means of control or implementation when defining 3D modeling.Although many studies demonstrate the importance of this integration (Concept DFM) [28] The finished manufactured product will be represented by the class "physical part" composed by classes "Process" -"Control" -"Numerical Model".The "numerical model" based itself on classes "Shape" and "Material"; our physical part consists of classes that represented the four independent aspects of our structure. 
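The composition just described can be sketched directly (our illustration, with hypothetical attribute names): the numerical model groups Form and Material, and the physical part adds Process and Control on top of it.

```python
from dataclasses import dataclass

@dataclass
class Form:
    mesh_file: str                 # 3D geometry and its characteristics

@dataclass
class Material:
    name: str
    youngs_modulus_gpa: float

@dataclass
class NumericalModel:
    """What CAD modeling usually captures: Form + Material."""
    form: Form
    material: Material

@dataclass
class Process:
    name: str                      # one of the seven standardized AM processes
    min_wall_thickness_mm: float

@dataclass
class Control:
    method: str                    # e.g. CMM or 3D scanning
    accuracy_mm: float

@dataclass
class PhysicalPart:
    """Finished product: Numerical Model + Process + Control (the CPMF tetraptych)."""
    model: NumericalModel
    process: Process
    control: Control

    def is_controllable(self) -> bool:
        # No point building features finer than the control means can verify.
        return self.control.accuracy_mm <= self.process.min_wall_thickness_mm

part = PhysicalPart(
    NumericalModel(Form("bracket.stl"), Material("PA12", 1.7)),
    Process("Powder Bed Fusion", 0.8),
    Control("3D scanning", 0.1),
)
print(part.is_controllable())
```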
The integration of our proposal at the heart of FEM software requires a lot of questioning of the laws of behavior.For easier using, we have chosen to group the "Laws of behavior" in a separate class who can easily be given as a parameter to specific programs. Classes and heritage One of the main objectives of our modeling is to allow dynamic adaptation of the calculation and optimization algorithms based on the consideration of criteria and parameters depending on the choice made on our tetraptych.The calculations of theoretical mechanical resistance, the forecast cost of a manufactured part or the production time, are intimately linked to the choices that have been made on the tetraptych (change materials, shape, manufacturing process or control). The classes described in your model are abstract, and destined to be inherited according to the different opportunities available to each element of tetraptych.As in a conventional object modeling, the highest class in the interaction diagram are able to query their classes that are less based on attributes and predefined functions abstract classes.When a class is questioned, it behaves like a "black box" and a change in method or formula can be easily performed by inheriting inter-changing different classes of the same abstract class. For example, we will focus on the abstract class "Process" of manufacturing.We know from the abstract class, a "process" must be able, among other things, an estimate of the volume of work available, or the time required to manufacture a part size and given shape.The manufacturing process should also be able to provide a list of its own characteristics to influence the strength of a part, or be able to validate the feasibility of a given digital model. We consider as a first inheritance step does distinguish between different categories of manufacturing process, defining three new abstract classes inheriting the class manufacturing process: x Subtractive manufacturing Our study is focused on additive manufacturing (subtractive manufacturing and other process were not further). As for additive manufacturing, we then describe concrete seven classes, based on the seven additive manufacturing processes as defined in the standard "Additive manufacturing General principles Part 2: Overview of the categories of processes and basic materials" [27]. Each of these classes must be able to answer any questions that may be asked in a manufacturing process.But each will have different function and method to answer these same questions, in fact inducing a dynamic modification effect of the global algorithm depending on the chosen manufacturing process, and therefore the instantiated class to represent it. Opening Our proposal aims to formalize the characteristics of our tetraptych bases in an abstract modeling to be completed and implemented in order to specialize in digital algorithms uses in engineering. For example, and as an opening to our work, we search to adapt to the context of AM a topological optimization, bio-mimicry or meso-structures generation algorithms in the heart of our manufactured parts.Within the OptiFabAdd© industrial project, we were able to show that adaptations of generic algorithms (here topological optimization) by integrating the inherent parameters in the manufacturing used process (Figure 4), allow a significant gain in expectations (This dedicated optimization material allows savings of 53 per cent in terms of weight compared to a conventional topological optimization only 47 per cent). 
Beyond this application, taking into account any changes in the "CPMF" tetraptych basis is likely to bring significantly improved responses from the generic algorithms mentioned above. This topological optimization application shows that taking into account the inherent specificities of Additive Manufacturing generates convincing results. In OptiFabAdd©, this application was not automated, but it could easily be thanks to our model.

Conclusion
This paper proposes an Object Oriented Methodology structure specifically adapted to additive manufacturing technologies. We propose a tree called "CPMF" (Control / Process / Material / Form) to, among other things, provide a dynamic response to the various parameters of the tetraptych for conventional simulation algorithms (e.g., topology optimization) and for algorithms dedicated to AM. The first results obtained through OptiFabAdd© for topological optimization applied to AM are promising and encourage us to extend our work to meso-structure and bio-mimicry algorithms.

Figure 2. Part with cantilever. In principle, object programming is based on the existence of classes implementing the different needs of a more comprehensive algorithm. A class is defined virtually, via its methods, so as to answer a specific question as a "black box". Thanks to the concept of inheritance, it is possible to implement the same abstract class in several different ways, thus dynamically adapting the algorithm to internal or external parameters of the program.
3,789.8
2016-01-01T00:00:00.000
[ "Materials Science" ]
Three-Dimensional Scattering From Uniaxial Objects With a Smooth Boundary Using a Multiple Infinitesimal Dipole Method The formulations for three-dimensional (3D) scattering from uniaxial objects with a smooth boundary using a multiple infinitesimal dipole method (MIDM) are introduced. The proposed technique uses two sets of infinitesimal dipole triplets (IDTs), including three co-located orthogonally polarized electric infinitesimal dipoles, distributed inside and outside of a scatterer to construct simulated fields. The dyadic Green’s functions of uniaxial materials are deployed in the MIDM so as to obtain the simulated fields. The singularity issues in using the uniaxial dyadic Green’s functions, which cannot be solved analytically so far for a general uniaxial medium, can be easily eliminated by using the proposed MIDM. In comparison to the traditional single-layered distribution scheme of IDTs, the proposed multiple-layered distribution scheme can handle the scattering from uniaxial objects accurately and efficiently. Several numerical examples are presented to study bistatic radar cross section (RCS) responses under different scenarios. Excellent agreement is achieved by comparing numerical results with those obtained from commercial software packages, while the simulation performance including CPU time and required memory is drastically improved by using the MIDM when computing a general uniaxial material or a relatively larger object. The proposed technique has its merits on simplicity, conciseness and fast computation in comparison to existing numerical methods. I. INTRODUCTION The interaction between electromagnetic waves and anisotropic materials has received a great deal of attentions recently. Anisotropic materials have found a variety of applications in the design of antennas [1]- [7], integratedcircuit structures [8], reduction of RCS of scatterers [9], optical signal processing [10] and so on. One of the basic problems to investigate waves in the anisotropic material is The associate editor coordinating the review of this manuscript and approving it for publication was Su Yan . to study the scattering characteristics of an anisotropic object. Several remarkable research contributions have been made and presented in [11]- [23]. The volumetric integral equation (VIE)-based methods were introduced in [11]- [16], [23] to compute scattering performances from an arbitrarily shaped object made of a linear, lossy, and anisotropic material. The VIE-based approaches can handle the most general cases of materials whereas they require to discretize the entire volume of an object, therefore becoming computationally challenging with large scatterers. The same issue will also rise in the finite-difference time domain (FDTD) [19], [22] and finite element-boundary integral (FE-BI) [20] methods. The surface integral equation (SIE) is a good candidate to overcome the computational burden of the methods based on volumetric discretization. A SIE-based MoM scheme combined with uniaxial dyadic Green's functions [24], [25] was proposed in [17], [18] for scattering evaluation from arbitrarily shaped objects filled with electrically uniaxial materials. The SIE-based solutions provide an accurate and more simplified approach compared to the VIE-based ones, yet the complexity of formulations is still there. The proposed analytical approach in [17] for eliminating singularity issues is only valid for an electrically uniaxial material, and it would fail when a general uniaxial material is encountered. 
The generalized multipole technique (GMT) [26], [27] is a generic name of several similar numerical methods [28]- [31] developed independently by several research groups. In the GMT, the scattered fields are usually expanded in terms of a set of multipole sources. However, not only the multipoles can be used for fields expansion, but other equivalent sources are also possible. Therefore, other names for similar methodologies have been given like multiple multipole method (MMP) [28], discrete sources method (DSM) [29], method of auxiliary sources (MAS) [30], or multifilament current model (MFCM) [31]. A common basic concept of all these methods is that the scattered fields inside and outside of a scatterer are simulated by a set of equivalent sources respectively located outside and inside of the scatterer with a certain distance away from the physical boundary, rather than being formulated in terms of equivalent surface currents flowing on the physical surface. In this case, no integrals have to be computed numerically which reduces the computation time and simplifies the problem formulation. Also the solution features no singularity. So far, most of the scattering problems tackled with the GMT have considered isotropic objects, and there is few research work on the anisotropic scenario. In [32], the GMT was extended to anisotropic scatterers by introducing the plane wave representation of an anisotropic material into Bessel multipoles, but it resulted in integrals which cannot be evaluated analytically to represent the scattered fields. The DSM was extended to three-dimensional anisotropic scatterers in [33], but the entire body of the scatterer needed to be discretized. As a result, DSM also suffers from a high computational burden as in the case of the VIE-based method when larger objects are involved. Recently, the random auxiliary sources (RAS) method, a MMP-inspired numerical technique, was introduced in [34]- [36]. But all applications of the RAS currently only focused on isotropic materials. To the best of the authors knowledge, there is no research reported to date about combining the GMT-like methods with dyadic Green's functions of anisotropic materials to study the scattering from anisotropic objects. The uniaxial material seems to be the most widely used type of anisotropic materials. This is because the uniaxial material can be either easily found in many natural crystals [8], [37], or artificially made by a stacked dielectric sheet structure consisting of alternative layers of two isotropic materials [4], [5], [38], [39], or obtained by homogenizing a mixture of several different materials via effective medium theory [40], [41]. This work only focuses on the uniaxial material, yet the proposed MIDM can also be applied to other kinds of anisotropic materials as long as the corresponding dyadic Green's functions are available. In the MIDM, the integral operations used in the VIE-or SIE-based methods are avoided. Instead, a simple point-matching testing procedure is used and the harmonics of a multipole source which is used in the MMP are also cast off by deploying the infinitesimal dipole source. Without using the integral operations and the harmonics of the sources, the computation of scattered fields speeds up and the problem formulation is also simplified. However, in comparison with the MMP which uses high order harmonics in equivalent sources, a larger number of infinitesimal dipole sources will be required in the MIDM to represent varying fields. 
An approach similar to the proposed MIDM is used in [42]. However, only isotropic materials were considered, and the strategy for the placement of sources may fail when a relatively larger object is involved, as will be shown in Section IV. The scattering from uniaxial objects with a smooth boundary using a MIDM is proposed in this paper. The applications of structures with a smooth boundary can be found in many scenarios, such as the spherical dielectric resonator antennas, lens antennas, extended hemispherical lens antennas and so on. The paper is organized as follow. The formulation of the problem is presented in Section II, where the strategies for placements of matching points and sources that play a key role in the MIDM, will be discussed in detail and specified there. The singularity issues in using uniaxial dyadic Green's functions are discussed and solved in Section III. Several numerical examples are presented in Section IV under different scenarios. All numerical results are compared with simulated results generated from commercial software package, and excellent agreement is obtained. Finally, a conclusion to summarize the proposed technique is given in Section V. II. FORMULATION OF THE MIDM The geometry of the problem is illustrated in Fig. 1. Two regions are present, the external region 1 is free space and The rotated global coordinate system, represented by unit vectorsû,v andĉ as shown in Fig. 2, are used to express the uniaxial medium.ĉ is the unit vector parallel to the optical axis of the medium. 1 , µ 1 and 2 , µ 2 are the permittivity and permeability associated with the directions perpendicular and parallel to the optical axis (ĉ), respectively. The relationship between the unit vectors in the rotated global and global coordinates is written as  where θ c and ϕ c are defined in Fig. 2. The concept of the MIDM is illustrated in Fig. 3. We place a set of infinitesimal dipole triplets (IDTs) in regions 1 and 2. Each IDT contains three co-located orthogonal polarized infinitesimal dipoles, namely point sources. The formulation of the proposed MIDM is conducted through a two-step equivalence. Firstly, the scattered fields in the region 1 are generated by equivalent IDTs placed in the region 2, and those point sources are treated as source currents radiating in unbounded vacuum. Secondly, the internal fields in the region 2 are generated by equivalent IDTs placed in the region 1, and those point sources are radiating in unbounded space filled with an homogeneous uniaxial material identical to that constituting the scatterer. A. FIELDS EXPRESSIONS IN REGIONS 1 AND 2 Region 1, considered as free space, contains the incident (E inc , H inc ) and the scattered fields (E s 1 , H s 1 ). If the incident fields are the plane wave, they could be written as: E inc =pe −jk(x sin θ inc cos ϕ inc +y sin θ inc sin ϕ inc +z cos θ inc ) (3a) wherep is the polarized direction of the incident electric field, andk is the unit vector of the wave vector. The scattered fields could be constructed in a simple manner and expressed in terms of the dyadic Green's functions. So the total fields (E 1 , H 1 ) could be expressed as: where J 1 i1 , J 1 i2 and J 1 i3 are the three orthogonal electric point sources in the ith IDT associated to the region 1. Noticing that three magnetic point sources can also be deployed in each IDT, and dyadic Green's functions (G em and G mm ) are then required with respect to the magnetic point source. 
Only electric-point-source-based IDTs are utilized throughout this paper. In (4), N_1 is the number of IDTs placed in region 2. The two Green's functions, G^1_ee and G^1_me, in (4) are the dyadic Green's functions of isotropic materials, corresponding to the electric and magnetic fields radiated into region 1 by an electric point source; their expressions are given in [25], where r' and r are the locations of the source and observation points, respectively, k = k0 = ω√(ε0µ0) is the wavenumber of free space, and the term I is the identity dyad defined in global coordinates. Region 2 only contains the internal fields generated by the IDTs placed outside of it, and the expressions of the fields are given in (6), where J^2_i1, J^2_i2 and J^2_i3 are the three orthogonal electric point sources in the ith IDT associated to region 2, and N_2 is the number of IDTs placed in region 1. Since region 2 is occupied by the uniaxial material, the two Green's functions, G^2_ee and G^2_me, in (6) are the uniaxial dyadic Green's functions, corresponding to the electric and magnetic fields radiated into region 2 by an electric point source, as given in (7) [24], [25]. B. PLACEMENT OF MATCHING POINTS The matching points should be distributed as uniformly as possible in order to capture the behavior of the fields on the physical surface of a scatterer with a smooth boundary. A simple and efficient way to place matching points is to make use of the Rao-Wilton-Glisson (RWG) mesh [43] information, which can be exported from the commercial software package FEKO [44]. As shown in Fig. 4, the locations of the nodes constituting an RWG mesh are used to place the matching points. The parameter TEL (triangle edge length), defined in FEKO, is used to control the density of a generated RWG mesh. In the MIDM, we first model the investigated object in FEKO and then run the mesh module to generate an RWG mesh. The matching points are then placed with the help of the node information of the exported RWG mesh, so that a uniform placement is easily achieved. The total number of matching points, N_m, is determined by the TEL value defined in FEKO. C. PLACEMENT OF IDTs The IDTs are placed inside and outside of an object to construct the simulated fields outside and inside, respectively. A multiple-layered source distribution scheme is proposed for the placement of IDTs in this paper. L virtual surfaces, which have the same shape as the physical surface, are formed by scaling the physical surface toward the internal and external regions. The L scale parameters are linearly spaced in the range 0.2 ~ 0.995 for the internal and 1.5 ~ 2.5 for the external regions, based on empirical experiments. For the single-layered distribution, L = 1, the scale parameters are 0.2 and 2.0 for the internal and external regions, as found in [42]. A total number of IDTs (N_s) is unequally allocated to the L layers, and the number of IDTs in each layer is determined by the ratios between the L scale parameters: a larger scale parameter corresponds to more IDTs. The placement of IDTs also makes use of the node information of an RWG mesh generated by FEKO. Once the number of IDTs for the ith layer, N_si, is obtained, we select N_si points from the N_m nodes in order, and transform those nodes to the internal and external regions using the scale parameter of the ith layer. Finally, a multiple-layered distribution scheme of IDTs is realized. An illustrative example of the three-layered IDT distribution strategy is shown in Fig. 5.
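The multiple-layered placement of Sec. II-C can be sketched as follows. This is a minimal Python illustration assuming the RWG-mesh node coordinates are already available as a NumPy array; scaling each virtual surface about the centroid of the nodes is our assumption, since the paper only states that the virtual surfaces are obtained by scaling the physical surface.

import numpy as np

def place_idts(nodes, n_sources, scales):
    """Distribute IDT locations over L scaled copies of the surface nodes.

    nodes     : (Nm, 3) array of RWG-mesh node coordinates on the surface S
    n_sources : total number of IDTs to allocate over the layers
    scales    : sequence of L scale factors, e.g. np.linspace(0.2, 0.995, L)
                for the internal layers or np.linspace(1.5, 2.5, L) for the
                external ones
    """
    centroid = nodes.mean(axis=0)            # scaling centre (our assumption)
    scales = np.asarray(scales, dtype=float)
    # Allocate IDTs to layers proportionally to the scale parameters,
    # so that a larger scale parameter receives more sources.
    counts = np.floor(n_sources * scales / scales.sum()).astype(int)
    counts[-1] += n_sources - counts.sum()   # put the rounding remainder in one layer
    layers = []
    for scale, count in zip(scales, counts):
        # Pick 'count' nodes in order and move them onto the scaled surface
        idx = np.linspace(0, len(nodes) - 1, count).astype(int)
        layers.append(centroid + scale * (nodes[idx] - centroid))
    return np.vstack(layers)

# Example: four internal layers of sources for a unit sphere sampled by 500 nodes
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
sphere_nodes = v / np.linalg.norm(v, axis=1, keepdims=True)
idts_internal = place_idts(sphere_nodes, n_sources=300,
                           scales=np.linspace(0.2, 0.995, 4))
print(idts_internal.shape)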
D. BOUNDARY CONDITIONS The connection between the fields in regions 1 and 2 is dictated by the boundary conditions on the surface S shown in Fig. 1. Specifically, the tangential components of the electric and magnetic fields must be continuous along the physical boundary S, which leads to n̂ × E_1 = n̂ × E_2 and n̂ × H_1 = n̂ × H_2 on S, where n̂ is a unit vector normal to the closed smooth surface S, as shown in Fig. 1. A matrix is then created by imposing the boundary conditions at a number of matching points on S. Two tangential components of both the electric and the magnetic field are calculated at each matching point; therefore the formulated matrix has 4N_m rows. On the other hand, 2N_s IDTs in the two regions are used to simulate the tangential fields, and therefore 6N_s infinitesimal electric dipoles are deployed in the simulation, resulting in a matrix with 6N_s columns. The number of matching points (N_m) must therefore satisfy the inequality N_m ≥ 1.5N_s (9) in order to have a unique solution for the unknown current coefficients. Upon the application of a point-matching procedure, we finally obtain a matrix equation of the form [Q]X = B, where X is a column vector containing the unknown dipole coefficients, and B is another column vector containing samples of the incident tangential fields at the matching points. [Q] is a matrix whose entries are obtained from the tangential fields of the IDTs at the matching points, and it can be rectangular or square depending on whether oversampling is used or not. If it is square, a unique solution can be found; otherwise the smallest least-square-error solution is pursued, which is known to be X = ([Q̃]*[Q])^-1 [Q̃]*B, where [Q̃] is the transpose of [Q] and the asterisk denotes the complex conjugate. E. CONVERGENCE STUDY In order to study the convergence of the results, we make use of the error on the imposed tangential boundary conditions as a metric, whose definitions are given in (12), where E_bc and H_bc are evaluated on a denser set of points than the matching points selected on the physical surface S. The necessary numbers of sources and matching points in the MIDM are increased until E_bc and H_bc reach the desired level of accuracy. III. SINGULARITIES IN USING DYADIC GREEN's FUNCTIONS Two types of singularity issue usually appear when using the dyadic Green's functions. The first singularity issue arises in both the free-space and the uniaxial dyadic Green's functions when |R|, the distance between source and matching points, approaches zero. A special treatment of this issue is required in the SIE-based solution, whereas this type of singularity is completely avoided in our case because the sources are placed at a certain distance away from the matching points. The other singularity appears in the use of the uniaxial dyadic Green's functions when the term |R_c| vanishes, which means that R is parallel to the optical axis ĉ and R_c/|R_c| becomes undefined, as can be seen from (7). A special treatment of this issue has been proposed in [17] for the SIE-based methodology. However, that treatment is only valid for an electrically uniaxial object and, by the duality theorem, for a magnetically uniaxial object. For a general uniaxial material, the solution proposed in [17] is not applicable. Moreover, to the authors' knowledge, an analytical solution for the second type of singularity in a general uniaxial material has not been reported so far. Instead of deriving complex analytical solutions, a quite simple approach that eliminates the second singularity issue by removing the problematic sources is proposed in our method. 
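A compact sketch of the two ingredients just described, the removal of problematic sources and the least-squares solution of the point-matching system, might look as follows. This is our Python illustration, not the authors' MATLAB code; the routine that fills [Q] from the tangential dipole fields is left abstract, and the 1e-5 tolerance anticipates the criterion |R_c| ≤ 1e-5 stated in the next paragraph.

import numpy as np

def filter_sources(sources, match_pts, c_hat, tol=1e-5):
    """Drop sources whose vector R to some matching point is (nearly) parallel to c_hat.

    sources, match_pts : (Ns, 3) and (Nm, 3) coordinate arrays
    c_hat              : unit vector along the optical axis
    tol                : threshold on the transverse part |R_c| of R
    """
    keep = []
    for i, s in enumerate(sources):
        R = match_pts - s                         # all source-to-matching-point vectors
        R_c = R - np.outer(R @ c_hat, c_hat)      # components transverse to c_hat
        if np.min(np.linalg.norm(R_c, axis=1)) > tol:
            keep.append(i)                        # no near-parallel pair: keep this source
    return sources[keep]

def solve_point_matching(Q, b):
    """Least-squares solution of [Q]X = B; Q has 4*Nm rows and 6*Ns columns."""
    X, *_ = np.linalg.lstsq(Q, b, rcond=None)
    return X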
Once the locations of the matching points and sources are generated as specified in Sec. II-B and II-C, we implement a filtering strategy to find those sources for which R becomes parallel to ĉ, using the criterion |R_c| ≤ 1e-5. The detected problematic sources are removed directly in the MIDM in order to avoid the second type of singularity issue. It was found that the number of problematic sources is much smaller than the total number of sources, so it is safe to remove them without sacrificing accuracy. This strategy has been found to be a quite efficient way to handle the second singularity issue in our numerical examples. IV. NUMERICAL EXAMPLES AND OTHER DISCUSSIONS Based on the numerical scheme described in the previous sections, a computer program has been implemented. The program computes the normalized bistatic radar cross section (RCS) (σ/λ^2) in the xoz and yoz planes, defined in terms of the total scattered electric field E^s in region 1 and the incident wavelength λ. The MIDM procedure flowchart is shown in Fig. 6, and the initial value of TEL is chosen according to the electrical size of the investigated scatterer. For a 3D object, based on empirical experience, it is better to set the initial TEL value to 0.08 if the volume of the scatterer is within (2λ)^3, and to 0.16 if the volume is between (2λ)^3 and (4λ)^3. When the volume is larger than (4λ)^3, unfortunately, we cannot give an initial TEL value, because the CST installation on the server we can access stalls during the simulation and fails to generate reference results. The step of the refine-mesh operation in Fig. 6 is set to 0.01 to regenerate an RWG mesh in FEKO. We use L = 4 for the multilayered distribution of sources in the four illustrative examples, and the layer-number factor will be analyzed and discussed later. The computed results are compared with reference results obtained from the commercial software package CST, where 4 cells per wavelength are set for the model and the background in the mesh properties of CST. A. FOUR ILLUSTRATIVE EXAMPLES The first example is a both electrically and magnetically uniaxial sphere with radius r_a = 0.5λ, illuminated by a plane wave with a unit-magnitude x̂-polarized electric field propagating along the z axis. The uniaxial medium parameters are ε1 = 2ε0, ε2 = 4ε0, µ1 = 3µ0, µ2 = 5µ0, and the optical axis ĉ is parallel to the z axis. The normalized bistatic RCS responses in the xoz and yoz planes are shown in Fig. 7(a) and 7(b). The computed results are compared with simulation results obtained from the commercial software CST [45], where the finite-element method (FEM) is applied. Good agreement can be observed between the MIDM, using either the single-layered (L = 1) or the four-layered (L = 4) distribution scheme, and CST. To further quantify the performance of the proposed MIDM, the relative difference, as defined in (14), between the normalized bistatic RCS responses obtained from the MIDM with the L = 1 or L = 4 source distributions and from CST is evaluated: Error(dB) = |RCS(MIDM) − RCS(CST)| (14). The relative difference responses of the first numerical example are shown in Fig. 8(a) and 8(b). Apart from a large difference around θ = 90° in the yoz plane, the relative difference responses are small for the MIDM with L = 1 or L = 4, as observed in Fig. 8, which corresponds to the good agreement presented in Fig. 7. It is reasonable to have a large difference around θ = 90° in Fig. 8(b), because the relative difference there is evaluated at the peak values of the simulated results obtained from the MIDM and CST. 
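For reference, a minimal sketch of the post-processing used in these comparisons is given below. The normalized bistatic RCS is written in its standard far-field form, which we assume is the definition behind σ/λ^2 in the text (the paper's own display is not reproduced here), and the relative difference follows (14).

import numpy as np

def normalized_bistatic_rcs(E_s_far, r_far, E_inc_mag, wavelength):
    """Standard far-field bistatic RCS normalized by lambda^2 (assumed definition).

    E_s_far   : (N, 3) complex scattered field sampled on a far observation sphere
    r_far     : radius of that sphere, large compared to the scatterer
    E_inc_mag : magnitude of the incident electric field
    """
    sigma = 4.0 * np.pi * r_far**2 * np.sum(np.abs(E_s_far)**2, axis=1) / E_inc_mag**2
    return sigma / wavelength**2

def relative_difference_db(rcs_midm_db, rcs_cst_db):
    """Relative difference of (14), with both RCS curves expressed in dB."""
    return np.abs(np.asarray(rcs_midm_db) - np.asarray(rcs_cst_db))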
The second example is an electrically uniaxial capsule with a height h = λ and two end-capping hemispheres with a radius r_a = 0.5λ, as shown in Fig. 9(a), illuminated by a plane wave with a unit-magnitude x̂-polarized electric field propagating along the −z axis. The uniaxial medium parameters are ε1 = 5ε0, ε2 = 9ε0, µ1 = µ2 = µ0, and the orientation of ĉ is defined by θ_c = 45° and ϕ_c = 90°. The computed normalized RCS results in the xoz and yoz planes for the two IDT distribution strategies are shown in Fig. 9 and compared with those obtained from CST. For the single-layered scheme, taken from [42], a disagreement in the normalized RCS computation is obvious, yet the proposed multilayered distribution strategy with L = 4 works well and excellent agreement is observed. This is also indicated by the relative difference responses in the xoz and yoz planes shown in Fig. 10, where a small difference is observed for the L = 4 source distribution scheme whereas a large difference is obtained for the L = 1 counterpart. Since the axis ĉ is oriented in the yoz plane, the scattered field pattern in the yoz plane is asymmetric, as observed in Fig. 9(c). A different scatterer shape is considered in the third example, formed by merging six offset spheres. The six spheres are identical, with a radius r_a = 0.5λ, and are placed by offsetting their centers along the six coordinate half-axes by a distance 0.5r_a, as shown in Fig. 11(a). The uniaxial medium parameters are ε1 = 2ε0, ε2 = 8ε0, µ1 = 3µ0, µ2 = 9µ0, so a relatively large anisotropy ratio is considered, and the optical axis ĉ is parallel to the z axis. This scatterer is illuminated by a plane wave with a unit-magnitude x̂-polarized electric field propagating along the z axis, as shown in Fig. 11(a). The normalized RCS results are computed in both the xoz and yoz planes with the single- and multiple-layered IDT distribution schemes, and are compared with results obtained from CST, as shown in Fig. 11. Once again, the single-layered scheme fails to reach good agreement with the commercial software package on the normalized RCS, whereas the proposed multilayered IDT distribution scheme succeeds. This also coincides with the results presented in Fig. 12, where a large difference is observed for the MIDM with L = 1 and a small difference is achieved when the multilayered source distribution scheme is deployed. The last example concerns scattering from a uniaxial-layer-coated PEC sphere, as shown in Fig. 13. In comparison with the previous three examples, three regions instead of two are present in this scenario. The outermost region 1 is free space, region 2 is a layer occupied by an anisotropic TiO2 material, and the innermost region 3 is a PEC sphere with a radius r_a. The medium parameters of TiO2 are ε1 = 5.913ε0, ε2 = 7.197ε0, µ1 = µ2 = µ0, as taken from [46], and r_b = 2r_a = 0.6λ. The incident plane wave has a unit-magnitude x̂-polarized electric field and propagates along the z axis. The formulation is similar to the previous examples; the only difference is that additional sources located within region 3 are required to simulate the fields inside the anisotropic layer, as shown in Fig. 13(b). It is noteworthy that the locations of the sources in Fig. 13(b) are randomly depicted to illustrate the concept of the MIDM; the practical implementation of the source placement follows Sec. II-C. 
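The examples above specify the uniaxial medium only through ε1, ε2, µ1, µ2 and the orientation angles θ_c, ϕ_c of the optical axis. A small sketch of how such a material description can be assembled is given below, using the standard dyadic form ε = ε1(I − ĉĉ) + ε2 ĉĉ, which is consistent with the transverse/axial definitions of Section II; this explicit form and the angle convention for ĉ are our assumptions, since the paper's own equations are not reproduced here.

import numpy as np

def optical_axis(theta_c, phi_c):
    """Unit vector c_hat from its polar and azimuthal angles (rad), our convention."""
    return np.array([np.sin(theta_c) * np.cos(phi_c),
                     np.sin(theta_c) * np.sin(phi_c),
                     np.cos(theta_c)])

def uniaxial_tensor(v1, v2, c_hat):
    """Uniaxial tensor v1*(I - c c^T) + v2*(c c^T): v1 transverse, v2 along the axis."""
    cc = np.outer(c_hat, c_hat)
    return v1 * (np.eye(3) - cc) + v2 * cc

# Parameters of the second example: eps1 = 5 eps0, eps2 = 9 eps0, theta_c = 45 deg, phi_c = 90 deg
eps0 = 8.8541878128e-12
c_hat = optical_axis(np.deg2rad(45.0), np.deg2rad(90.0))
eps_tensor = uniaxial_tensor(5 * eps0, 9 * eps0, c_hat)
print(np.round(eps_tensor / eps0, 3))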
The simulated normalized RCS responses are presented in Fig. 14. Since the electrical size of this example is relatively small, both the L = 4 and the L = 1 source distributions in the MIDM generate normalized RCS results that agree well with those obtained from CST. The relative difference responses of the two source distributions are also small, as presented in Fig. 15. B. SINGULARITY ISSUE AND SIMULATION PERFORMANCE The TEL value is set to 0.08 in FEKO to generate the RWG meshes used for the placement of matching points and IDTs in the numerical examples. The resulting numbers of matching points (N_m) and IDTs (N_s) for each example are displayed in Table 1. As discussed in Sec. III, two types of singularity issue are encountered when using the dyadic Green's functions [17]. The first type of singularity does not exist in the proposed MIDM, but the second type does occur. With the freedom of source placement in the MIDM, a filtering strategy can easily be implemented to find the problematic sources, as introduced in Sec. III. The number of problematic sources (N_sp) detected under the criterion |R_c| ≤ 1e-5 in each numerical example is presented in Table 1. Specifically, in the fourth numerical example, the RWG mesh generated by FEKO results in 854 and 218 matching points on the physical boundaries S_1 and S_2, respectively, and the numbers of IDTs are 569 for regions 1 and 2, and 145 for the innermost region 3. The numbers of problematic sources are 6 in region 1 and 4 in region 3 for the single-layered source distribution scheme, and 4 in region 1 and 3 in region 3 for the four-layered one. Since the number of problematic sources is much smaller than the total number of sources, it is safe to remove them directly in the MIDM, resulting in a singularity-free numerical solution for uniaxial materials. The simulation performance, including the CPU time and the required memory, of the proposed MIDM and of the commercial software CST is displayed in Table 2. The required memory is drastically reduced by using the MIDM in comparison with the FEM-based solver of CST. This is because a surface discretization method is deployed in the MIDM, whereas a volume discretization strategy is utilized in the FEM-based solver of CST. It is noteworthy that a surface-discretization-based method for simulating anisotropic materials has not yet been incorporated in any commercial software. Except for the second example, the CPU time of the proposed MIDM is also smaller than that of CST, and the advantage of the MIDM in simulation performance becomes more pronounced when a relatively larger scatterer is encountered, as will be shown in Sec. IV-C. It is also noteworthy that our programs are written in MATLAB [47] and the authors are not professional programmers; the performance of the MIDM is expected to improve if a compiled programming language is utilized. C. LAYER NUMBERS AND BOUNDARY CONDITION ERROR RESPONSES The four-layered source distribution scheme has been shown to be feasible for the simulation of uniaxial objects with a smooth boundary and better than the single-layered one in the previous examples. 
Yet L = 4 is only one particular choice; it is therefore necessary to investigate the MIDM with different layer numbers in order to validate the feasibility of the multiple-layered scheme. We then study a TiO2 sphere with a radius r_a = 2λ, illuminated by a plane wave with a unit-magnitude x̂-polarized electric field propagating along the z axis. The TEL is set to 0.16 in FEKO to generate an RWG mesh for the placement of matching points and sources, resulting in 2346 matching points and 1564 IDTs. The normalized RCS results are computed in both the xoz and yoz planes for different layer numbers of sources, and are presented in Fig. 16. The number of problematic IDTs is much smaller than the total number of IDTs for each layer-number scenario, and is omitted here. The results generated using L = 4, L = 8 and L = 30 are almost identical and agree excellently with those obtained from CST, whereas the single-layered scheme fails to provide an accurate solution. To study this phenomenon further, we turn to the boundary condition error response, which is a straightforward way to examine the accuracy of the solution. The tangential electric and magnetic field boundary condition errors, as defined in (12), in the xoz plane for the single-layered and four-layered IDT distribution schemes are presented in Fig. 17. It can be observed that the multilayered scheme, L = 4, possesses a better response in E_bc and H_bc, which means that a solution of higher accuracy is obtained when the multiple-layered strategy is deployed in the MIDM. Considering the inaccurate responses generated by the single-layered scheme, shown in Fig. 16, a boundary condition error criterion is therefore required. The criterion (E_bc & H_bc) ≤ 0.1% is used to judge the acceptance of errors in the MIDM procedure of Fig. 6, to make sure the computed results are accurate. The simulations of the TiO2 sphere problem were run on a server with 384 GB of memory and an Intel(R) Xeon(R) CPU; the CPU time and required memory for the proposed MIDM are 8811 s and 1.31 GB, respectively, whereas for the commercial software package CST they are 82980 s and 170.20 GB, respectively. Obviously, the simulation performance has been drastically improved by using the proposed MIDM. V. CONCLUSIONS AND DISCUSSIONS The MIDM, an efficient numerical solution, has been introduced, formulated and utilized for the three-dimensional scattering computation of uniaxial scatterers with a smooth boundary in this work. The uniaxial dyadic Green's functions have been deployed in a GMT-like method, namely the proposed MIDM, for the first time. A simple and efficient strategy to avoid the second type of singularity issue, which so far cannot be solved analytically for a general uniaxial medium, has been proposed. The placements of matching points and sources, which play a key role in the MIDM, have been discussed and specified in detail by making use of the RWG mesh generated by FEKO. The proposed multiple-layered distribution scheme of sources has been proven to be feasible and accurate in the scattering simulation of relatively larger objects in comparison with the traditional single-layered counterpart. 
Several numerical examples have been investigated under different scenarios, covering different shapes of scatterers, electrical sizes, and material characteristics, and for each example the results computed with the proposed MIDM using a multiple-layered source distribution scheme are in excellent agreement with the simulated results obtained from the commercial software package CST. The required memory is drastically reduced by using the proposed MIDM, and the CPU time of the MIDM is also lower than that of CST when a general uniaxial material or a relatively larger scatterer is considered. Only uniaxial materials have been investigated in this paper, yet the MIDM can also handle certain other types of anisotropic materials, such as chiral, isotropic warm plasma and bi-isotropic materials, since their closed-form Green's functions are available [25]. Although anisotropic materials with an arbitrary full-tensor permittivity and permeability, or with a general inhomogeneous configuration, can be handled by volume-discretization-based methods but are hardly tractable by surface-discretization-based methods such as the proposed MIDM, the proposed technique has shown a significant advantage in simulation performance in this work when a homogeneous general uniaxial material is considered, and this advantage carries over to the electromagnetic computation of other anisotropic materials whose closed-form dyadic Green's functions are available. The limitation of the proposed MIDM appears in simulating an object with sharp corners. The fast variation of the fields near a corner requires more matching points and sources to be placed around the corner in order to better approximate the behavior of the fields there, as discussed for a two-dimensional scenario in [35], [48], [49]. In this case, the proposed methodology may fail. To overcome this drawback, an adaptive mesh strategy applying a fine mesh around the corner and a standard mesh elsewhere should be deployed, and a strategy to place more sources around the sharp corner is also required in the MIDM. An alternative solution can be found in [36], where an RWG-mesh-based testing method and a random distribution scheme of infinitesimal dipoles are deployed, and scatterers involving sharp corners have been successfully simulated using the RAS method. The electromagnetic simulation of an anisotropic scatterer with sharp corners using the MIDM will be our future work and will be presented in future publications.
SELF-NORMALIZED LARGE DEVIATIONS FOR MARKOV CHAINS : We prove a self-normalized large deviation principle for sums of Banach space valued functions of a Markov chain. Self-normalization applies to situations for which a full large deviation principle is not available. We follow the lead of Dembo and Shao [DemSha98b] who state partial large deviations principles for independent and identically distributed random sequences. From Cramér to Shao Let (E, E) be a Polish space and (X n ) n be a sequence of E-valued random variables. For a borel-function g : E → R d and q > 1, we introduce S n (g) = n i=1 g(X i ) and V n,q (g) = n i=1 g(X i ) q 1/q . If (X n ) n is an independant and identically distributed (shortened as i.i.d.) sequence with distribution µ and if g : E → R is µ-integrable, the classical Cramér-Chernoff large deviation theorem states that where h g is the Cramér transform of the i.i.d. sequence (g(X i )) i . This inequality is useful if h g (x) > 0 for all x > g, i.e. if the "Cramér condition" is satisfied: there exists τ > 0 such that e τ g dµ < ∞. Under this condition, we have 1 n log P S n (g) n ≥ x → −h g (x). However, this assumption is way too strong in many situations. In [Sha97], Shao shows that it is possible to get rid of this exponential moment assumption taking advantage of self-normalization. He considers for instance the self-normalized sequence R n,q (g) = S n (g) n 1−1/q V n,q (g) and obtains the following very interesting result (with g g L q (µ) = 0 if g L q (µ) = ∞ and , lim n 1 n log P (R n,q (g) ≥ x) = − K(x) < 0, without any moment assumption on the random variable (g(X i )) i . In this work, we consider the same problematic in the Markovian framework and obtain analog results in section 2.5. (corollary 2). Full and partial large deviation principles Introducing the notion of partial large deviation principle in two articles [DemSha98a] and [DemSha98b], Dembo and Shao give a more general sense to Shao's paper [Sha97] and lighten the tools used to obtain these results. To help comprehension, we recall the basic vocabulary in large deviation theory. Let E be a metric topological space equiped with its Borel σ-field E. A function I : E → [0, ∞] is a good rate function if its level sets {x; I(x) ≤ t} are compact and it is a weak rate function if its level sets are closed (namely, if I is lower semi-continuous, shortened to l.s.c. in the sequel). A sequence of probability measures (µ n ) n on (E, E) satisfies a large deviation principle (shortened to LDP) with a good rate function I if, for every open subset G and every closed subset F of E, We say that the sequence (µ n ) n satisfies an upper LDP (resp. a lower LDP) if (2) only (resp. (1) only) holds. Moreover, a weak LDP is said to hold if (2) is satisfied for the compact sets of E only and if I is a weak rate function. The concept of partial large deviation principle (PLDP) has been introduced by Dembo and Shao in [DemSha98a and b] : the sequence (µ n ) n satisfies an upper PLDP with weak rate I with respect to a subclass S of E if, for every A ∈ S, we have : The full PLDP is said to hold if (1) is satisfied as well for every open G ⊂ E. Plan of the paper In section 2, we give our main results. A weak large deviation principle for "balanced couples" is stated in section 3 as a preliminary to obtain the main Theorem (the same way as in the i.i.d. case where the weak Cramér Theorem is the first step to prove self-normalized results). We give some commentaries along with examples in section 4. 
The proofs of the results are given in sections 5 and 6 : section 5 deals with the weak large deviation principle while section 6 provides partial exponential tightness which is the key to obtain partial large deviation Theorem. At last, section 7 brings some precisions about upper weak large deviations (Theorem 2). Main results, partial LDP We consider a Markov chain X = (X i ) i∈N taking values in a Polish space E endowed with its Borel σ-field E. Its transition kernel is denoted by p, C b (E) is the space of real bounded continuous functions on E and P(E) the space of probability measures on E equiped with the topology of weak convergence. If ζ belongs to P(E 2 ), we denote by ζ 1 and ζ 2 the first and second marginal. If ξ ∈ P(E) and Γ ∈ E ⊗ E, then ξp(·) = ξ(dx)p(x, ·), p(f )(x) = p(x, dy)f (y) and ξ ⊗ p(Γ) = I Γ (x, y)ξ(dx)p(x, dy). We work with the canonical form of the Markov chain (E N , E ⊗N , (P x ) x∈E , (X n ) n≥0 ) and the following notation : for any initial distribution ν, P ν = ν(dx)P x . Assumptions on the Markov chain These are the assumptions we might have use of in the following, the third one being useless for upper LDP results. The upper bounds stated in this section require a regularity assumption concerning the Markov chain. Let us recall the classical Feller property and the "almost Fellerian" extension proposed by Dupuis and Ellis [DupEll] and related to a condition introduced by J.G. Attali ([Att]) : Assumption 1 (Fellerian or almost Fellerian transition). • The transition p satisfies the Feller property (or is "Fellerian") if the map : x → p(x, ·) is continuous for the weak convergence topology of P(E). • More generally, denoting D(p) the discontinuity set of x → p(x, ·), p is "almost Fellerian" if, for every x ∈ E and all δ > 0, there exist an open set G δ of E containing D(p) and a real number r(x) > 0 such that for any y ∈ E, d(x, y) ≤ r(x) =⇒ p(y, G δ ) ≤ δ. In particular, for all x ∈ E, p(x, D(p)) = 0. We recall that a Lyapunov function is a measurable, non negative borel-function whose level sets are relatively compact. The existence of an invariant probability measure µ for p is guaranteed by the following "stabilization" condition: there exists a Lyapunov function V, a < 1 and b ∈ R such that pV ≤ aV + b. If this invariant probability µ is unique, (X n ) n is µ-stable (almost surely, L n = 1 n n i=1 δ X i converges weakly to µ) and we have the law of large numbers: if g : E → R d is continuous and such that g ≤ ρ(U ) with ρ(t) t → 0 when t → ∞, The following assumption is a stronger version. It was introduced by Donsker-Varadhan (condition H * in [DonVar76]) in order to obtain an upper LDP for the empirical distributions of a Markov chain. Our version is taken from Dupuis-Ellis ( [DupEll] chap 8) : Assumption 2 (Criterion of exponential stabilization associated with (U, V )). There exists a borel-function U : E → R + , a Lyapunov function V and a non-negative constant C such that : Remark 1. a) Under the assumptions 1 and 2, p always has an invariant probability measure (see Proposition 9.2.6 in [DupEll]). This probability is unique if p is irreducible. c) U and W = e U are Lyapunov functions. Assumption 3 (Strong irreducibility). p satisfies the following two conditions : 1) There exists an integer L such that, for every (x, y) ∈ E 2 , 2) p has an invariant probability measure µ. Remark 2. Assumptions 1,2 and 3 are always satisfied in the i.i.d. case. 
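Before turning to the autoregressive example, it may help to recall, in standard form, two quantities used repeatedly in the sequel: the self-normalized ratio R_{n,q}(g) introduced in Section 1 and the relative entropy entering the Donsker-Varadhan rates. The displays below are the classical definitions, which we take to coincide with the ones the authors refer to.

S_n(g) = \sum_{i=1}^{n} g(X_i), \qquad
V_{n,q}(g) = \Big( \sum_{i=1}^{n} \| g(X_i) \|^{q} \Big)^{1/q}, \qquad
R_{n,q}(g) = \frac{S_n(g)}{n^{1-1/q}\, V_{n,q}(g)},

K(\nu \mid \mu) = \int_{E} \log \frac{d\nu}{d\mu}\, d\nu \quad \text{if } \nu \ll \mu, \qquad K(\nu \mid \mu) = +\infty \quad \text{otherwise.}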
Particular case of the functional autoregressive models Let us illustrate our asumptions considering the following model taking values in R d : We do not know many large deviations results for such models. We can mention the LDP for the unidimensional linear autoregressive model with gaussian noise (see [BryDem97], ). There also exists a moderate large deviation result for the multidimensional linear autoregressive model or for the kernel estimator with a generalized gaussian noise (see [Wor99]). The study of such models with weaker assumptions for the noise is one of the motivations of selfnormalization (following [Sha97]). Let us consider the conditions imposed by the assumptions stated in section 2.1 for this particular model: • This Markov chain is Fellerian if f and σ are continuous; it is almost Fellerian if f and σ are Lebesgue almost everywhere continuous and ε 1 has a bounded density with respect to the Lebesgue measure on R d . The almost fellerian assumption allows to study important models in econometry such as threshold autoregressive models for which function f is defined as follows: with f i continuous, Γ i disjoined with boundaries of Lebesgue measure zero (and ∪Γ i = R d ). • Exponential stabilization is checked for the model if there exists a positive borel-function U such that lim Also, if φ : R + → R + is an increasing function, for any s ∈]0, 1[, we have Therefore, Hence, we can take U (x) = φ( x ) under various assumptions. Of course, a less constraining condition on noise will lead to a more restrictive condition on the function f . a) Under Cramér condition E (exp(τ ε 1 )) < ∞, (τ > 0), we can take φ(t) = τ r t and the condition on f is lim , we can take, for any s ∈]0, 1[, c) If we only assume that the noise is integrable E( ε 1 ) < ∞, then φ(t) = log(t) and the condition on f is • At last, strong irreducibility is satisfied as soon as ε 1 has a strictly positive density with respect to Lebesgue measure. Donsker-Varadhan rate functions According to Donsker-Varadhan theory ( [DonVar75a] and [DonVar76]), we introduce the relative entropy between two probability measures µ and ν on E : and the rates related with the LDP concerning the empirical distributions L n = 1 n n i=1 δ X i−1 ,X i and L 1 n = 1 n n i=1 δ X i−1 are, if ζ ∈ P(E 2 ) and ξ ∈ P(E), Furthermore, we assume that (B, B) is a separable Banach space endowed with its Borel σ-field. For a B-valued measurable function g on E (resp. G on E 2 ), we set, for x ∈ B : These functions are convex (this statement is proved in Paragraph 5.1) but might not be l.s.c.. Hence, we consider the corresponding l.s.c.-normalized functions h g and h G , with These functions are l.s.c. and convex. We call them "Donsker-Varadhan rates". Finally, the following notations are constantly used in the sequel: For a function h : H → R + and a subset Γ of H, we note h(Γ) = inf x∈Γ h(x). • If g : E → B, the study of the rate h g and the sequence g n is a particular case of the one involving h G and G n using the function G(x, y) = g(y) (it is not difficult to show that, in this case, h g = h G ). Hence, we shall only work with functions G defined on E 2 . Previous LDP for vector valued functionals With the Donsker-Varadhan rate h G , we have the following theorem (see [GulLipLot94]) which generalizes known results for bounded functions. Theorem (Full upper LDP ([GulLipLot94]) Let G : E 2 → R d be a continuous function. 1) Assume assumptions 1 and 2 are satisfied. 
Let V ⊕ V be defined by V ⊕ V (x, y) = V (x) + V (y) and where ρ : R + → R + is a function such that ρ(t) t → ∞ when t → ∞. Then : 2) If assumption 3 is also satisfied, the full LDP with rate h G is valid. As an example, the case studied in section 2.2 (Functional autoregressive models) proves that the domination condition (6) is not easily checked. In section 2.5., we give self-normalized large deviation principles which would be obvious under the assumption (6), as well as [Sha97] and [DemSha98b] who handled self-normalization in the i.i.d. case to get rid of Cramér's condition. Partial Large Deviation Principle We now state our main results extending Dembo-Shao's work ( [DemSha98b]) to our Markovian framework. Theorem 1 (Self-normalized LDP). Assume that the transition probability p is almost Fellerian and satisfies the criterion of exponential stabilization associated with x (1) ≥ r and J(r) = h G (Γ r ). J is increasing, leftcontinuous and, for every In particular, the following Chernoff-like upper bound holds for every compact subset H of H M : (1) n ) satisfies an upper LDP with rate J(·) on R + . We obtain interesting Corollaries : Corollary 1. Assume that the assumptions and notations of Theorem 1 hold. In addition, we suppose that the chain satisfies the strong irreducibility hypothesis with invariant probability measure µ and G (1) is integrable with respect to µ ⊗ p. Then, for any initial distribution ν, the full partial large deviation principle is valid and we have Finally, we give the following more explicit Corollary, applying Theorem 1 to a function G = ( F q , F ). For q > 1 , we introduce the notation J q (r) = J(r q ). Corollary 2. Let F be a continuous function from E 2 into R d . Assume that the transition probability p is almost Fellerian and satisfies the criterion of exponential stabilization associated with (U, V ). Then, for any given q > 1 and any compact subset satisfies an upper LDP with rate function J q on R + . c) If, in addition, the chain satisfies the strong irreducibility hypothesis with the invariant probability measure µ and if F q is integrable with respect to µ ⊗ p then, J q (r) > 0 if F dµ ⊗ p ≤ r( F q dµ ⊗ p) 1/q and, for any initial distribution ν, with K(r) = 0 if and only if F q is integrable with respect to µ ⊗ p and Remark 4. a) If the function U is bounded above on the compact sets of E, the results hold uniformly over the initial states x ∈ K for any compact subset K of E. b) If the function U is l.s.c., then H M is a compact subset of P(E) and the results hold uniformly over H M . Tests on Markovian models The results stated in section 2.5 (more particularly corollary 2) are obviously interesting, as in the i.i.d. case, to obtain exponential speed to the law of large numbers. For example, the large deviation upper bounds allow to reproduce Shao's results on Student statistic stated in 1.1. and build tests for assumptions such as "the random variable observed is a Markov chain with transition p", with exponentially decreasing levels. Let us be more specific for a test between two assumptions. We consider two transition probabilities (p i (x, ·))(i = 0, 1) on (E, E) satisfying assumptions 1 and 2. 
Let (µ i ) (i = 0, 1) be the unique invariant probability measures associated with p i and let us assume that there exists a measurable, strictly positive function h such that, for any x ∈ E, ν has density f with respect to its distribution under P 0 ν , with: A natural test "p 0 against p 1 " will have the rejection region The errors are Part d) of corollary 2 leads to the following result. Let us assume that L(p 1 | p 0 ) > 0 or L(p 0 | p 1 ) > 0: the models are distinguishable. Then, for any t such that we have the upper bounds: For another application of the self-normalized large deviation, one can look [HeSha96]. 3 Weak LDP for vector valued functions with Donsker-Varadhan rates Known results concerning empirical distributions Several upper LDP for the empirical distributions L n = 1 [Aco90] and Dupuis-Ellis [DupEll]. The statement that follows is a synthesis of the results we need in our proofs : About Donsker-Varadhan rate functions The function G = (G (1) , G (2) ) considered in Theorem 1 is a particular case of the following class of functions that we will call "balanced couples": • G (2) is continuous from E 2 into a separable Banach space B and, for a continuous function Besides, if the function G (1) has compact level sets (i.e. if G (1) is a Lyapunov function), then the couple (G (1) , G (2) ) will be called "Lyapunov balanced couple". The following lemma will be proved in section 5.1. Lemma 1 (Properties of Donsker-Varadhan rate functions). (5) is a convex function. Hence, its l.s.c.-normalized function h G is a convex weak rate function. 1) For any function 2) If G is a Lyapunov balanced couple then h * G defined in Theorem 1 is a convex and l.s.c. function. Upper weak LDP for balanced couples Theorem 2 (Upper weak LDP for balanced couples). Assume that p is an almost Fellerian transition on (E, E) that satisfies the criterion of expo- n )) n satisfies uniform an upper weak LDP for every initial distribution with weak rate function h * G (·). In other words, for any compact subset K of R + × B and any compact subset H of H M , In particular, if R ∈]0, ∞[ and if C is a compact set in B then lim sup Lower LDP A general lower LDP relative to the sums of Banach space valued additive functionals of a Markov chain has been proved by De Acosta and Ney ([AcoNey98]) with no other assumptions that the irreducibility of the chain and the measurability of the function. Yet, it seems difficult to compare the "spectral rate" for which their lower LDP holds with h G . Our demonstration relies on the dynamic programming method developped by Dupuis and Ellis ( [DupEll]) for proving the lower LDP which needs a stronger assumption than standard irreducibility (Condition 8.4.1. in [DupEll]). Therefore, we achieve a less general result than that of De Acosta and Ney but it holds with the rate h G as well as the upper LDP. The following Theorem requires strong irreducibility but no assumption about the regularity of p or G. Theorem 3. If p fulfills the strong irreducibility assumption and if G : E 2 → B is measurable, integrable with respect to µ ⊗ p, then, for every initial distribution ν, the sequence (G n ) n satisfies for any open set U of B lim inf n→∞ 1 n log P ν G n ∈ U ≥ −h G (U ). Cramér and Donsker-Varadhan for i.i.d. random vectors We consider a Polish space E and an i.i.d. E-valued random sequence (X n ) n with distribution µ. 
• If g : E → B is a measurable function (where B is a separable Banach space), the sequence (g(X n )) n is i.i.d., B-valued, and (g n ) n satisfies a weak-convex LDP with the Cramér rate : . This result is due to Bahadur and Zabell [BahZab79] (see also [DemZei] and [Aze]). Under the Cramér condition E(exp(t g(X) )) < ∞ (for any t if B is a separable Banach space and for at least a t > 0 in the particular situation B = R d ), h Cramer g is a good rate function and the full LDP holds (see [DonVar76] or [Aco88]). • On the other hand, we are in our context with p(x, ·) = µ(·) ∀x ∈ E. There always exists a Lyapunov function V such that e V dµ < ∞. Hence, the criterion of exponential stabilization associated with (V, V ) is satisfied. The strong irreducibility hypothesis is satisfied and Theorem 3 holds for any measurable function g : E → B, integrable with respect to µ. The convex large deviation principle allows us to write for any ν ∈ P(E), any x ∈ B and Hence, h g ≥ h Cramer g and all our upper bound results stated in section 2 involving rate h g are still valid with h Cramer g (without assuming Cramér condition). The lower bound results obtained in Theorem 1 and in corollaries 1 and 2 hold with the rate h Cramer g according to the weak-convex Theorem. As a direct consequence of full upper LDP Theorem in i.i.d. case, h g = h Cramer g whenever we have E(exp ρ( g(X) )) < ∞, (with ρ : R + → R + satisfying ρ(t) t → ∞). Moreover, for x is the gradient of θ → log exp( θ, g(y) )µ(dy) at a point θ(x) belonging to the interior of θ; exp( θ, g(y) )µ(dy) < ∞ and if γ θ(x) is the probability measure proportional to exp( θ(x), g )µ, we have gdγ θ(x) = x and h Cramer Taking up these two facts, one might ask whether h g = h Cramer g is always true. At this point, we cannot answer to this question. But, we show in the following that it is true for our partial large deviations bounds. In order to avoid situations in which the Cramér rate is senseless (for example, when the Laplace transform is infinite everywhere except in 0), it is natural to consider the weak LDP associated to a balanced couple (g (1) , g (2) ) for which the domain of the Laplace transform contains the set ] − ∞, 0[×B * . This is the idea of Dembo and Shao [DemSha98a and b], [Sha97]. Our paper follows the steps of [DemSha98b] where the authors consider an i.i.d. sequence (Y n ) n taking values in B and the balanced couple (ρ • N (x), x) (ρ and N defined as in Theorem 1). Therefore, when ρ • N (Y ) is integrable, corollaries 1 and 2 yield the same self-normalized results than [DemSha98b] with the same rate (namely the Cramér rate). Without assuming that ρ • N (Y ) is integrable, Theorem 1 and parts a,b of Corollary 2 remain valid. About the full upper LDP Can we extend the full upper LDP stated in 2.4 for functions g such that g = O(V )? Answer is no as we can see with the following counter-example (inspired by [AcoNey98]). • Description of the Markov chain Denoting E(·) the integer part, the following sets form a partition of N : We consider the N-valued Markov chain defined by the following transition kernel : This Markov chain is recurrent and aperiodic with stationary distribution µ such that : * µ(0) = 1 4−3p 0 = c, µ(u m ) = µ(v m ) = µ(w m ) = cp m for every m. In order to compute the Donsker-Varadhan rate I 1 , we must determine which transition kernels q are absolutely continuous with respect to p. They are necessarily of the following form : for each m ∈ N, * q(0, u m ) = q m > 0, q(u m , v m ) = q(v m , w m ) = q(w m , 0) = 1. 
• A function g Let g be the function defined by g(u m ) = m, g(v m ) = −2m and g(w m ) = m. For any probability measure ν such that I 1 (ν) < ∞, we have gdν = 0 and h g (x) = 0 if x = 0, h g (x) = ∞ elsewhere. On the other hand, if we set r(t) = j≥t p j and a > 0, then we have, for this function, P x (g n ≥ a) = P(X n−1 = 0)r(an), P x (g n ≤ −a) = P(X n−2 = 0)r(an). Moreover, P(X Therefore, if the sequence (p m ) m is such that 1 n log r(an) → R(a) > −∞ and if R is continuous, increasing to infinity, then (g n ) n satisfies a LDP with rate J(x) = −R(|x|) for every initial state (see the Lemma 9). The upper weak LDP cannot possibly hold with rate h g . We now check if these results are compatible with the upper LDP Theorem given in section 2.4. • A criterion of exponential stabilization Assume that p satisfies the criterion of exponential stabilization associated with (U, V ), V non-negative, increasing and such that : Large deviation upper bound and regularity of the function This example shows that the regularity of the function G is necessary to get an upper LDP (unlike the lower LDP). We consider the model where (ε i ) i is an i.i.d. sequence having the distribution P(ε 1 = 1) = P(ε 1 = −1) = 1 2 and we take E = [−2, 2]. The transition kernel of this Markov chain is Let ζ be an invariant probability for q t(·) and D be the following subset of [−2, 2] : We can prove by induction that, if q t(·) is absolutely continuous with respect to p, then necessarily its invariant probability measure ζ is such that ζ(D) = 0. As a consequence, the rate h I D is infinite everywhere except in 0 where it is nul. But, starting from 0, the chain remains in D : therefore, we have 1 n log P 0 ((I D ) n ∈ {1}) = 0. According to these two observations, the upper LDP cannot hold for the sequence (I D ) n with rate h I D . Similarly, our results about Lyapunov balanced couples are no longer valid when the function is not regular enough : for instance, if G(x) = (|x|, I D (x)), then we have The upper large deviations Theorem given in 2.4 does not apply to every measurable function G, even when it is bounded. This remark is also true for our weak upper LDP (Theorem 2), hence for our PLDP. 2) The convexity of h * G follows by a similar argument. Let us prove the lower semi-continuity of h * G . Let (x n ) n = (x (1) n , x (2) n ) n be a sequence of R + × B converging to x = (x (1) , x (2) ) and ε > 0 . Assume that lim inf h * G (x n ) < ∞ (otherwise there is nothing to prove). Let (ζ n ) n be a sequence of P(E 2 ) such that : Therefore, as the function ζ → G (1) dζ has compact level sets (because G (1) is a l.s.c. Lyapunov function) and as (x (1) n ) n is bounded, the sequence (ζ n ) n is relatively compact. Then let (x u(n) ) n be a subsequence such that lim n h * G (x u(n) ) = lim inf n h * G (x n ) and (ζ u(n) ) n converges weakly to some probability measure ζ. For the same reasons of uniform integrability, we have Proof of the weak upper LDP Let G = (G (1) , G (2) ) be a balanced couple. Let K be a compact subset of R + × B and Γ K = {ζ ∈ P(E 2 ); Since G (1) is a Lyapunov function, then Γ K is a relatively compact subset of P(E 2 ). According to Donsker-Varadhan Theorem given in section 3.1, we have: Theorem 2 follows from this Lemma : . Let us prove that I(Γ K ) ≤ I(Γ K ). For every given ζ ∈ Γ K , there exists a sequence (ζ n ) n of Γ K which converges weakly to some probability measure ζ. 
If the strong irreducibility hypothesis holds with the invariant probability measure µ, we have, for any x : This property implies the µ-irreducibility of the Markov chain. According to Nummelin ([Num]), the chain has a small set C. In other words, there exists a probability measure ξ on C, a real number h > 0 and a > 0 such that : In particular, ξ << µ. If we note ξU (C) = ξ(dx)U (x, C) and R C = {ω; i≥0 I C (X i (ω)) = ∞}, two situations can occur : * C is a transient small set : ξU (C) < ∞ and P ξ (R C ) = 0 ; * C is a recurrent small set : ξU (C) = ∞ and P ξ (R C ) = 1. The transient case is incoherent here because it would imply that µ(C) = ξ(C) = 0. Consequently, C is a recurrent small set. We set Γ C = {x; P x (R C ) = 1} and note that 1 = P ξ (R C ) = P ξ ( ∞ i=n I C (X i ) = ∞) = E ξ (P Xn (R C )) for any n. Moreover, P Xn (R C ) = 1 if and only if X n ∈ Γ C , hence ξp n (Γ C ) = 1 for all n. Therefore, for any x of E and any integer l ≥ L, we have p n (x, Γ c C ) = 0. Every point of E leads to the recurrent small set C almost surely and the transition kernel has an invariant probability measure. Therefore, the chain is positive recurrent (see, for example, Theorem 8.2.16 in [Duf]). Lower Laplace principle We follow the method developped by Dupuis-Ellis ( [DupEll]) to prove the lower LDP for empirical distributions. * Representation formula For any initial distribution ν and every j ≤ n, we introduce the following notations: δ X i−1 ,X i and L n,n = L n ; Let f be a bounded Lipschitz function B → R ; F j is the σ-field generated by X 0 , ..., X j . This is the dynamic programming equation for the controlled Markov chain with state space E × B, control space P(E 2 ) and the transition kernel at time j is Q j,n : Q j,n (y, r, ζ; •) = distribution of (Z 2 , r+ 1 n G(Z 1 , Z 2 )) where Z = (Z 1 , Z 2 ) has the distribution ζ. The final cost is (y, r) → f (r) and the running cost is c j (y, r, ζ) = 1 n K(ζ | δ y ⊗ p) . * Lower Laplace principle Let q be a transition kernel of a recurrent Markov chain with stationary distribution α such that I(α ⊗ q) < ∞ and G dα ⊗ q > ∞. For a Markov chain (Y j ) 0≤j≤n with initial distribution ν and transition kernel q, (T j ) 0≤j≤n , is the controlled Markov chain for the control ζ = δ y ⊗ q. Consequently, and we will take ν δ = ζ δ/M . By convexity of the relative entropy, We now prove that ζ t belongs to N . According to Lemma 4, there exists a transition kernel q t with stationary distribution ζ 1 t such that, for all n and any x ∈ E, q n t (x, ·) << p n t (x, ·). Moreover, ζ 1 t << µ. Let us show that q t satisfies the strong irreducibility hypothesis. Obviously, ζ 1 t (·) ≥ tµ(·) . The probability measures ζ 1 t and µ are equivalent. We denote h the density of ζ 1 t with respect to µ ; h ≥ t. Let A and B belong to E ; Consequently , q t (x, ·) ≥ t f (x) p(x, ·) µ almost surely. We modify q t as we did in Lemma 4 for this inequality to be true on E. This modification raises no problem since ζ 1 t ∼ µ : we change q t on a set N such that ζ 1 t (N ) = 0. Therefore, ζ 1 t remains invariant for q t and q n t (x, ·) ∼ p n (x, ·) for every n ∈ N and all x ∈ E. ♦ The lower "Laplace principle" is proved : for any bounded Lipschitz function f , The lower part of Bryc Theorem ("lower Laplace principle" =⇒ "lower LDP") is proved only considering bounded Lipschitz functions and without using the lower semi-continuity of the "rate" (see [DupEll]). 
Consequently, we have the lower LDP for any initial distribution ν : for any open subset U of B .♦ 6 Proof of the partial LDP An exponential tightness result The following result allows us to take advantage of exponential tightness criteria stated in [DemSha98b]: Lemma 5. Suppose that assumptions 1 and 2 are satisfied and let F be a continuous function from E 2 into R + . Then, there exists a continuous non-negative function T increasing to infinity such that h T •F is a good rate function and, for every compact subset H of H M and r > 0, Proof The result is clear if the function F is bounded. Let us assume that F is unbounded and consider This function increases to infinity. Let β be a continuous function, strictly increasing to infinity and such that β ≥ α. According to the definition of α, We consider a continuous increasing function k such that k(t)/t → 0 The conditions required to apply the full upper LDP given in section 2.4 are satisfied with ρ = k −1 (·), hence h T •F is a good rate function and we have, for every r > 0, An immediate consequence of this result is that, for any δ > 0, Indeed, we have and h T •F ([δT (r), ∞[) → ∞ as r → ∞, for every given δ > 0. ♦ 6.2 Proof of Theorem 1, part a. Proof We only need to check that ρ(t)/t → ∞ as t → ∞. There exists B > 0 such that, if we introduce For every t ≥ B + 1, we have ρ(t) ≥ at + b thanks to the convexity of ρ(·). If we set L(t) = ρ(t)/t then L(t)/L(t 1/2 ) = ρ(t)/(t 1/2 ρ(t 1/2 )) and therefore L(t) → ∞. As a matter of fact, if (t n ) n ↑ ∞ was a sequence such that lim sup L(t n ) < ∞, then we would have L(t n )/L(t 1/2 n ) ≤ lim sup L(t n )/a < ∞, which contradicts our hypothesis on ρ. ♦ Now, we can apply Theorem 2 to the function G : the sequence (G (1) n , G (2) n ) n satisfies a weak upper LDP with rate h * G . According to [DemSha98b], the partial LDP stated in Theorem 1 holds as soon as (G (1) n , G (2) n ) n is exponentially tight with respect to the class S(ρ • N ) ; in other words if, for any positive number R and any set A ∈ S(ρ • N ), there exists a compact subset To prove such a statement, we apply formula (9) proved in the previous Paragraph to the function F = N • G (2) : this yields, for every δ > 0, On the other hand, ρ • N belongs to the class of functions introduced in definition 0.1.1. of [DemSha98b]. Therefore, the proof of Lemma 0.1.1. in [DemSha98b] applies to the function (G (1) , G (2) ) = (ρ • N (G (2) ), G (2) ), and this entails, for any ε > 0 This property allows us to apply Proposition 0.1.2. of [DemSha98b] to prove that the distributions of (G (1) n , G (2) n ) are exponentially tight with respect to S(ρ • N ). Part a) of Theorem 1 is then proved ♦. Assume that h G (Γ r ) = 0. This implies the existence, for any integer n, of a probability measure ζ n such that Gdζ n ∈ Γ r and I(ζ n ) ≤ 1 n . As I is a good rate function, we can consider a subsequence (ζ n ) n that converges weakly to ζ. By construction of (ζ n ) n , we have I(ζ) = 0 and Gdζ ∈ Γ r . The first assertion entails that ζ 1 = ζ 2 (which we note µ in the following) and that µ is an invariant probability measure for p. The second assertion implies that G (1) is integrable with respect to ζ = ζ 1 ⊗ p, and that For any r > 0, the set Γ r = (x (1) , x (2) ); ρ•N (x (2) ) x (1) ≥ r belongs to the class S(ρ • N ). The Chernoff upper bound is an obvious consequence of part a). ♦ We now prove the upper LDP with rate J. Lemma 8. J is a left continuous non-decreasing function. In addition, J is nul in zero and infinite on ]1, ∞[. 
Proof If r > r then Γ r is a subset of Γ r , hence h G (Γ r ) ≥ h G (Γ r ) and J is non-decreasing. Moreover, for r > 1, the set of probability measures ζ such that Gdζ ∈ Γ r is empty (since the function ρ • N is convex) and J(r) = ∞. We now prove the left-continuity of J (and consequently its lower semi-continuity). Let r ∈]0, 1] and let (r n ) n be an increasing sequence with limit r such that sup n J(r n ) ≤ J(r). Let then (x n ) n be a R + × B valued sequence such that x n ∈ Γ rn and ζ n be a sequence of P(E 2 ) such that Gdζ n = x n and I(ζ n ) < h G (x n ) + 1/n < h G (Γ rn ) + 2/n = J(r n ) + 2/n. Therefore sup n I(ζ n ) < ∞ and, since I has compact level sets, there exists a subsequence ζ u(n) which converges weakly to some ζ ∈ P(E 2 ). The uniform integrability argument we used in 5.1 leads to the following identities: and thus i.e. ( G (1) dζ, G (2) dζ) ∈ Γ r . Consequently, since I is l.s.c., we have We reach a contradiction ; J is left-continuous therefore l.s.c.. ♦ Since J is l.s.c. and infinite on ]1, ∞[, it is a good rate function. The upper LDP of rate J comes via the following Lemma taken from [Wor00]. Lemma 9. Let (µ n ) n be a sequence of probability measures on R + . If there exists a R + -valued function J, increasing to infinity, nul in zero, left continuous and such that, for any r > 0, then (µ n ) n satisfies an upper LDP with rate J(·) on R + . We apply this result to our situation and we obtain that ( ρ•N (G (2) n ) G (1) n ) n satisfies an upper LDP with rate J on R + . Proof (taken from [Wor00]) Since J converges to infinity, the sequence (µ n ) n is exponentially tight. If we consider the rate H associated with a LDP satisfied by a sub-sequence, we have, for any r > 0, and H(r) ≥ H(]r n , ∞[) ≥ J(r n ) for a sub-sequence (r n ) n increasing to r. H(r) ≥ J(r) by the lower semi-continuity of J. ♦ 6.4 Proof of Corollary 2, part d. Applying Theorem 1 to the borelian set A r ∈ S(|.| q ) defined by Using the same arguments as in the proof of lemma 7, we have K(r) = 0 if and only if F q is integrable with respect to µ ⊗ p and F dµ ⊗ p ≥ r( F q dµ ⊗ p) 1/q . ♦ Appendix As the weak Cramér Theorem in i.i.d. case, the weak upper LDP stated in Theorem 2 might have its own interest. We show in this section that it can be easily checked, without assuming the criterion of exponential stabilization, if the transition is fellerian. 7.1 Weak LDP for the empirical distributions The following upper LDP result has been essentially proved by Donsker and Varadhan [Don-Var75a] and [DonVar76]. Theorem Assume that p is fellerian. Then (L n ) n satisfies a uniform upper weak LDP over all initial distributions: lim sup n sup ν∈P(E) 1 n log P ν (L n ∈ Γ) ≤ −I(Γ) for every compact subset Γ of P(E 2 ) Proof • Denoting LB(E 2 ) the space of Lipschitz bounded functions on E 2 provided with the norm . LB = . ∞ + r(·), r(·) being the Lipschitz constant of the function, let J be the weak rate defined on P(E 2 ) by G(x, y)ζ(dx, dy) − log e G(x,y) ζ 1 (dx)p(x, dy) . As a matter of fact, J is l.s.c. because p is Fellerian hence ν → ν ⊗ p is a continuous map. Setting p * G(x) = log e G(x,y) p(x, dy), we have p * G ∈ C b (E) when G ∈ LB(E 2 ) and the following identity is obtained with conditional expectations, for every initial distribution ν : 1 = E ν e n j=1 G(X j−1 ,X j )−p * G(X j−1 ) = E ν e n (G(x,y)−p * G(x))Ln(dx,dy) . 
Letting n → ∞ and choosing λ arbitrarily close to J(Γ) yields the following upper bound for every compact subset Γ of P(E^2): \limsup_{n\to\infty}\sup_{\nu\in\mathcal{P}(E)}\frac{1}{n}\log\mathbb{P}_{\nu}(L_n\in\Gamma)\le -J(\Gamma). (10) • The proof will be complete as soon as we show that (10) holds with the rate I instead of J. 7.2 Weak LDP for balanced couples Taking advantage of the result proved in Section 7.1, we can easily check this altered version of Theorem 2: Theorem 4. Assume that p is fellerian. If G = (G^{(1)}, G^{(2)}) is a Lyapunov balanced couple on E^2, then ((G_n^{(1)}, G_n^{(2)}))_n satisfies a uniform weak upper LDP, for every initial distribution, with the weak rate function h^*_G(·). In other words, for any compact subset K of R_+ × B, \limsup_{n\to\infty}\sup_{\nu\in\mathcal{P}(E)}\frac{1}{n}\log\mathbb{P}_{\nu}\big((G_n^{(1)},G_n^{(2)})\in K\big)\le -h^*_G(K).
Event-Related Potentials in Parkinson's Disease Patients with Visual Hallucination Using neuropsychological investigation and visual event-related potentials (ERPs), we aimed to compare the ERPs and cognitive function of nondemented Parkinson's disease (PD) patients with and without visual hallucinations (VHs) and of control subjects. We recruited 12 PD patients with VHs (PD-H), 23 PD patients without VHs (PD-NH), and 18 age-matched controls. All subjects underwent comprehensive neuropsychological assessment and visual ERP measurement. A visual odd-ball paradigm with two different fixed interstimulus intervals (ISI) (1600 ms and 5000 ms) elicited the visual ERPs. A frontal test battery was used to assess attention, visual-spatial function, verbal fluency, memory, higher executive function, and motor programming. The PD-H patients had significant cognitive dysfunction in several domains compared to the PD-NH patients and controls. The mean P3 latency with an ISI of 1600 ms in PD-H patients was significantly longer than that in controls. Logistic regression disclosed UPDRS-on score and P3 latency as significant predictors of VH. Our findings suggest that nondemented PD-H patients have worse cognitive function and altered P3 measurements. The development of VHs in nondemented PD patients may involve executive dysfunction together with altered visual information processing. Introduction Visual hallucinations (VHs) and cognitive impairment, which are nonmotor symptoms of Parkinson's disease (PD), have become an intriguing issue in recent years [1]. It is crucial to screen for mild cognitive impairment and dementia in PD patients because dementia may lead to nursing home placement, increased burden on health care systems and caregivers, and higher mortality [2]. In the mid-stage of PD, VHs act as a clinical predictor of dementia [3,4] and correlate with disease progression and decline in Mini-Mental State Examination (MMSE) scores [5,6]. Recent hypotheses suggest that the development of VHs in PD may result from an imbalance of external and internal inputs and impaired reality monitoring, while cognitive impairment may play a role in reality monitoring [7,8]. The cognitive correlates of VHs in PD patients are well documented [9][10][11][12]. A one-year neuropsychological follow-up study reported that nondemented PD patients with VHs show a faster decline in complex visual function and in multiple cognitive domains than patients without VHs [13]. Previous studies have also reported worse attention and visuospatial function in PD patients with VHs [14,15]. However, another 4-year longitudinal observational study showed that VHs may be more strongly associated with longer disease duration, increased functional impairment, and premorbid psychiatric illness than with cognitive impairment [16]. Accumulating evidence has demonstrated that cognitive dysfunction may contribute to the occurrence of VHs in nondemented PD patients regardless of the side effects of dopaminergic medication [17,18]. Indeed, a recent functional MRI study suggests desynchronization between aberrant frontal circuits and posterior cortical areas during active visual hallucinations [19]. Event-related potential (ERP) recording is a sensitive and noninvasive tool to detect cognitive dysfunction in patients with mild cognitive impairment and dementia [20][21][22]. Early components of the ERP (N1 and P2) are considered exogenous sensory components that have been associated with attention and sensory processing.
The N2 component reflects an early detection of cognitive ability, such as target discrimination. P3 is a positive shift when a subject detects an informative task-relevant stimulus [23,24]. While some studies have supported the correlation between ERP abnormality and cognitive impairment in PD patients with dementia, the role of ERP in nondemented PD patients is not confirmed [25][26][27][28][29][30][31]. One study found visual cognitive impairment and prolonged visual P3 latency especially in patients with PD dementia with hallucinations [32]. Since ERP may be a sensitive tool in the detection of cognitive impairment in PD in the absence of clinical dementia and VH is a potentially premonitory symptom of dementia in PD patients, it may be interesting to explore the ERP abnormality in nondemented PD patient with VHs. In the literature, few studies focused on the topic. We aim to assess the visual ERP and neuropsychological assessments in nondemented PD patients with (PD-H) and without VH (PD-NH) and healthy controls and find the linkage. Participants. This study was conducted at the Kaohsiung Medical University Hospital (KMUH), a tertiary referral center in Southern Taiwan. The KMUH institution review board approved all procedures and written informed consent was obtained from study participants. The control subjects were recruited from volunteer in nearby community college. All PD participants had a presumptive clinical diagnosis of PD according to UKPD Brain Bank criteria. Individuals were inquired carefully and were assigned to groups according to whether they had experienced VHs in the past one year. No patient in the population sampled had a clinical diagnosis of either Alzheimer's disease or Lewy body dementia. Patients were excluded if Mini-Mental State Examination (MMSE) is less than 25. Patients with eye disease or migraine or other conditions like concurrent stroke, delirium, delusions, multiple sclerosis, and psychiatric illness or those under neuroleptics treatment were all excluded. Duration of illness and medication were recorded and stage of illness was scored according to the Hoehn and Yahr scale and United Parkinson's Disease Rating Scale (UPDRS) during "on" state. PD patients take neuropsychological assessment and eventrelated potential during "on" state after regular oral medications. Design Fluency (Five-Point Test), and Similarity (WAIS-R) to assess higher executive function; Luria's Hand Sequence to evaluate motor programming function. Event-Related Potentials Measurements. A visual "oddball" stimulus paradigm (NeuroStim, NeuroScan, Inc.) was used to elicit visual event-related potentials, and an electroencephalograph (EEG) was recorded using Ag/AgCl electrodes placed at 5 scalp locations (FPz, Fz, Cz, Pz, and Oz), based on the 10-20 system. All were referenced to linked earlobes. The electrode impedance was kept below 5 kΩ. The EEG was amplified (band pass, 0.01-40 Hz) by a SynAmps amplifier (NeuroScan, Inc.), and continuous EEG records were kept for further offline analysis at a sampling rate of 256 Hz. The averaging epoch was 1024 ms, including 200 ms of prestimulus baseline [21,22]. The subjects sat in a comfortable chair in a soundattenuated room with dim lighting 100 cm in front of a 19inch LCD computer screen. Stimuli were presented in the central of the screen. The stimuli consisted of two neutral pictures from the NeuroScan template on a dark ground. The participants were asked to centrally fixate throughout the recording. 
We adopted a visual odd-ball task, with a target stimulus and a nontarget stimulus. Mistrials including eyeball movement artifacts were excluded from the offline analysis. Stimuli were presented randomly with the probability of 20% target stimulus and 80% nontarget stimuli. In each block a total of 250 stimuli (50 targets, 200 nontargets) were presented for 100 ms and interstimulus interval (ISI) of 1600 ms and 5000 ms. The experiment consisted of 4 blocks (2 blocks with ISI of 1600 ms and 2 blocks with ISI of 5000 ms). Participants performed a brief training session to ensure they were able to detect the target accurately. During the examination, participants were asked to press the button as quickly as possible when they saw the target. Reaction time was measured relative to target onset for correct trials, while accuracy was measured as the percentage of correct responses out of all responses to the target stimulus. Individual trials with eye blink artifacts (more than 250 V of peak-to-peak amplitude), target trials for which the reaction time (RT) was more than 1.4 s, and nontarget trials with a response were all excluded from the averaging. Separate ERP averages were made for each trial type. For amplitudes analysis, the mean potential during the 200 ms period preceding the stimulus onset served as baseline. The N1, P2, N2, and P3 components at Pz recording were assessed for highest amplitude distribution. The latencies windows were N1 component as the maximum negativity between 75 and 160 ms, P2 component as the maximum positivity between 170 and 260 ms, N2 component as the maximum negativity between 190 and 360 ms, and P3 component as the maximum positivity between 250 and 500 ms. Statistics. We performed statistical analysis with SPSS 12.0 package, and < 0.05 was set to be statistically significant. We used two-tailed t-test for analyzing continuous data of disease characteristic of PD patients. We used analysis of Demographic Data. Twelve PD-H patients, twenty-three PD-NH patients, and eighteen healthy control subjects were recruited in this study ( Table 1). The mean age, education level, and MMSE did not differ significantly between these three groups, while there were significant differences between the PD patients with and without visual hallucinations with regard to disease duration, duration of levodopa use, Hoehn and Yahr stage, and the scores of UPDRS-III. We also found significant difference in Hamilton depression index in PD-H or PD-NH patients when comparing with normal controls. Visual ERP Data. For the highest amplitude distribution, the N1, P2, N2, and P3 components with two different ISI at Pz are outlined in Table 3. There was no significant difference between PD-H patients, PD-NH patients, and controls, regardless of different ISI (1600 ms and 5000 ms). However, the mean latency of P3 with ISI of 1600 ms in PD-H patients revealed significant prolongation when comparing with that in controls. The mean reaction time and error rate of PD-H patients, PD-NH patients, and controls revealed no significant difference. We also assessed the effect of ISI on P3 latencies, P3 amplitude, and reaction time at Pz (Table 3) Supplementary Table 2 summarizes the odds ratio of binary logistic regression for UPDRS-on score and P3 latency in different models. Overall, the results showed that increase of UPDRS-on scores in PD patients was associated with significantly increased risk of VH in four different models. 
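As an illustration of the component scoring described in the methods above, the sketch below extracts peak latencies and amplitudes from an averaged Pz waveform using the stated search windows (N1 75–160 ms, P2 170–260 ms, N2 190–360 ms, P3 250–500 ms), the 200 ms pre-stimulus baseline, and the 256 Hz sampling rate. This is a minimal illustration rather than the NeuroScan processing chain; the function names and the synthetic waveform are hypothetical.

```python
import numpy as np

FS = 256.0           # sampling rate (Hz), as reported
BASELINE_MS = 200.0  # pre-stimulus baseline (ms)
EPOCH_MS = 1024.0    # total epoch length (ms)

# Component search windows in ms post-stimulus (last field: +1 positive peak, -1 negative peak)
WINDOWS = {"N1": (75, 160, -1), "P2": (170, 260, +1),
           "N2": (190, 360, -1), "P3": (250, 500, +1)}

def ms_to_sample(t_ms):
    """Convert post-stimulus time in ms to a sample index within the epoch."""
    return int(round((t_ms + BASELINE_MS) / 1000.0 * FS))

def score_components(avg_trace_uv):
    """Return {component: (latency_ms, amplitude_uV)} for one averaged epoch.

    Amplitudes are referenced to the mean of the 200 ms pre-stimulus baseline,
    mirroring the baseline convention described in the text.
    """
    baseline = avg_trace_uv[:ms_to_sample(0)].mean()
    results = {}
    for name, (t0, t1, sign) in WINDOWS.items():
        seg = avg_trace_uv[ms_to_sample(t0):ms_to_sample(t1)]
        idx = np.argmax(sign * (seg - baseline))   # most extreme point of the expected polarity
        latency_ms = t0 + idx / FS * 1000.0
        results[name] = (latency_ms, seg[idx] - baseline)
    return results

if __name__ == "__main__":
    # Hypothetical averaged Pz trace: baseline noise plus a positive deflection near 350 ms.
    n = int(EPOCH_MS / 1000.0 * FS)
    t = np.arange(n) / FS * 1000.0 - BASELINE_MS
    trace = 0.5 * np.random.randn(n) + 8.0 * np.exp(-((t - 350.0) / 60.0) ** 2)
    print(score_components(trace))
```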
After adjusting age, gender, and UPDRS-on scores, model 2 disclosed that one millisecond increase of P3 latency in PD patients was in line with 6% ( = 0.046) higher risk of having VH. By contrast, model 3 showed that there was nonsignificant trend where poor performance of Trail Making Tests, R-O copy, or Luria Hand Sequence was more likely to have VH. Discussion Our study showed that nondemented PD patients with VHs had worse cognitive function than those without VHs and age-matched controls. In addition to UPDRS scores, the latency of visual P3 was associated with VH after statistically adjusting the possible confounding factors and also correlated with cognitive impairment in PD patients. In accordance with previous studies using neuropsychological assessment or functional MRI [15-18, 33, 34], our finding suggests that frontal dysfunction may play a role in the development of VH in nondemented PD patients. The term of P300 is composed of mainly two distinct subcomponents, P3a and P3b. Although the precise functional origin of P300 induced by visual stimuli is controversial, visual P3b represents parietal cortical distribution reflecting the top-down allocation of attention resources to relevant stimuli [35][36][37]. As we measured our visual P3 latency as P3b, our P3 latency may reflect the top-down attribution of visual processing. In the present study, P3 latency with ISI of 1600 ms in PD-H patients was significantly longer than control and associated with VH after adjustment of confounding factors. As P3 latency of ERPs increases in line with cognitive decline in Lewy body dementia patients and demented PD patients with VHs [29,32,38], our finding implies that visual cognitive functions are particularly impaired in nondemented PD patients with visual hallucinations. It is accepted that VHs in PD could be related to central cholinergic dysfunction in pedunculopontine nucleus [33,39]. On the basis of indirect pharmacological evidence, P3 ERPs in Alzheimer's disease could reflect central cholinergic function [40,41]. Hence, a possible explanation for our findings might be that nondemented PD patients with VHs might have more dysfunction over the frontobasal cholinergic pathways. In addition, visual ERP of fixed ISI with 1600 ms might be an auxiliary tool to detect cognitive dysfunction in nondemented PD. There are several theoretical models implicated in the development of VHs in PD, and integrative approach may be needed to explore sensory, attention, and cognitive deficits [42]. Functional MRI during active VHs showed desynchronization between frontal and posterior cortical areas involved in visual processing [19], while Shine et al. suggest that decreased attentional network activity and increased primary visual system connectivity with default mode network may contribute to the development of VHs [43]. Our PD-H patient also showed significant deficits in tests about attention, visuoconstructional ability, executive function, and motor programming when comparing to PD-NH patients and control. However, latencies and amplitude of N1 ERP or P2 ERP, which may be more correlated with attentional network in brain, did not show significant differences between groups. There were several limitations in our study. First, we collect PD patients from university-based hospital and the collection bias cannot be completely excluded. Secondly, visual ERP may be affected by excessive eyelid blinking related to blepharospasm, which is common in PD [44]. 
We did not exclude PD patients with blepharospasm in this study, but eyeball movement artifacts were excluded from the analysis. Thirdly, neuropsychological assessment may be affected by poor attention or decreased motor function in PD patients. We arranged the assessments in the morning and patients received their regular medications before the exam, but poor attention or motor fluctuations may still have occurred during the time-consuming tests. Conclusion We found that P3 ERP measurements may be associated with visual hallucinations and cognitive impairment in nondemented PD patients. Further longitudinal follow-up is needed to confirm whether P3 ERP measurements and visual hallucinations can predict the development of dementia in PD patients.
The technical challenges and outcomes of ground-penetrating radar: A site-specific example from Joggins, Nova Scotia The Carboniferous Joggins Formation is known for its complete succession of fossil-rich, coal-bearing strata, deposited in a fluvial meanderbelt depositional setting. Hence, the Joggins Formation outcrop is an excellent analogue for studying the 2D geological complexities associated with meanderbelt systems. In this research, a conventional ground-penetrating radar system was tested with the intent of imaging near-surface, dipping, strata of the Joggins Formation (potentially with subsequent repeats as annual erosion provides new visual calibrations). The survey was unsuccessful in its primary goal, and for future reference we document the reasons here. However, the overlying near-surface angular unconformity was successfully imaged enabling mapping of the approximately 8 m of overlying glacial till. A successful outcome would have allowed observations from the 2D outcrop to be extended into 3D space and perhaps lead to an increased understanding of the small (e.g., bedform baffles and barriers) and large (e.g., channel bodies) scale architectural elements, meanderbelt geometry, and aspect ratios. The study comprises a 42-line, 3.46 km ground-penetrating radar survey using a Sensors and Software pulseEKKO Pro SmartCart system. It was combined with a real-time kinematic differential global positioning system for the georeferencing of survey lines. The 50 MHz antenna frequency, with a 1 m separation, was chosen to maximize the depth of penetration, while still maintaining a reasonable resolution. The results show that many of the lines are contaminated with diffraction hyperbolae, possibly caused from buried objects near or under the survey lines or surface objects near the survey lines. A total of thirteen unique radar reflectors are described and interpreted from this work. The thick clay-rich soil overlying the Joggins Formation probably contributed to Introduction The use of 2D siliclastic outcrops for the study of reservoir analogues provides a wealth of knowledge relating to the interwell scale geometrical and petrophysical heterogeneities within a depositional system, which ultimately control permeability and porosity, and thus, the mobility and capacity of reservoir fluids [1]. Ideally, the 2D outcrop would be extended into the third dimension to allow for the development of a continuous model that would further help with interpretations. The issue that one is then faced with is how to best fill in the region behind the outcrop to create a 3D model [2]. One such method that has the potential of providing this data is ground-penetrating radar (GPR); a near-surface geophysical technique that can provide high-resolution images of ancient and modern sedimentary sequences, which can be used to improve the understanding of small (e.g., bedform baffles and barriers) to large (e.g., channel bodies) scale architectural elements, meanderbelt geometry, and aspect ratios, just to name a few [1][2][3][4][5][6][7][8][9][10][11]. GPR is a non-invasive and non-destructive remote sensing geophysical technique that is highly useful and versatile utilized in several different disciplines for the imaging and subsequent study of the shallow subsurface (e.g., [12,13]). It accomplishes this through the detection of electrical discontinuities by the generation, propagation, reflection, and reception of pulsed high-frequency electromagnetic energy (e.g., [12,13]). 
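As background on why such electrical discontinuities produce radar reflections (standard GPR theory, not a result of this paper), the normal-incidence amplitude reflection coefficient between two low-loss, non-magnetic materials depends on the contrast in relative dielectric permittivity. A minimal sketch follows; the permittivity values are illustrative assumptions, not site measurements.

```python
import math

def reflection_coefficient(eps1, eps2):
    """Normal-incidence amplitude reflection coefficient between two low-loss,
    non-magnetic half-spaces with relative permittivities eps1 (upper) and eps2 (lower)."""
    return (math.sqrt(eps1) - math.sqrt(eps2)) / (math.sqrt(eps1) + math.sqrt(eps2))

# Illustrative (assumed) permittivities: dry sand ~4, wet clay-rich till ~25.
print(reflection_coefficient(4.0, 25.0))   # ~ -0.43: strong reflection at a dry/wet contact
print(reflection_coefficient(9.0, 12.0))   # ~ -0.07: weak reflection between similar beds
```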
These discontinuities are directly related to water saturation, salinity, porosity, and mineralogical variations [3,14]. Ideal GPR results are typically achieved from clean, quartzose-rich clastic sediments that contain no clays or silts (e.g., [7,15]). Signal attenuation is a real concern when performing a GPR survey, with problems arising from concentrations of silt, clay, caliche, and moist saline conditions (e.g., [7,15,16]). Here we provide the first comprehensive results from a primarily road-based GPR survey. In this study, a total of 42 GPR lines were collected over the Carboniferous-aged Joggins Formation of northern Nova Scotia, Canada, using one set of 50 MHz antennae arranged as a transmitter/receiver pair. The GPR system was combined with a Real-Time Kinematic (RTK) Differential Global Positioning System (DGPS) to provide a fully georeferenced group of survey lines with positional accuracy of approximately ±2 cm [17]. The goals of this survey were to image the dipping conformable strata of the fluvial-dominated Joggins Formation to identify sedimentary structures that could be correlated with a previously obtained lidar survey of the cliff face. This work was performed with the aim of providing constraining data on the reservoir architecture of the Carboniferous fluvial meanderbelt system of the Joggins Formation in 3D (outcrop + GPR). The results of the GPR study were to be used as inputs to stochastic models of the Joggins Formation with the purpose of understanding the inherent reservoir heterogeneity sensitivities of the analogous reservoir. The four main geometric measures include channel depth, channel width, sandstone thickness and channel-belt width. From those, four aspect ratios can be calculated, (1) channel depth versus sandstone thickness, (2) channel depth versus channel width, (3) channel-belt width versus channel depth, and (4) channel-belt width versus channel width. Gibling [18] documents the width and thickness of fluvial channel bodies from the geological record, including those measured from the 2D outcrop exposure of the Joggins Formation. Additionally, this study was carried out to test the applicability of the GPR system to provide high-resolution imaging of the dipping strata of the Joggins Formation, with the possibility that these images could be integrated with other outcrop (e.g., LiDAR) and subsurface data (e.g., drill core, well logs). The majority of the GPR data show strong diffraction hyperbolae, which is likely the result of above ground and subsurface objects. These objects could not be bypassed since most of the GPR lines were conducted on gravel/dirt roads traversing on top of the Joggins Formation strata. The objects that contaminate the radargrams must be understood and differentiated from the true sedimentary structures that were the purpose of this survey. There are also many other items that were unique to this survey that could potentially result in the radargrams being contaminated. A search of previous research related to the Joggins Formation yields a vast number of publications that are either written directly about the Joggins Formation or mention the Joggins Formation in some capacity (e.g., [18][19][20][21][22][23]). 
According to a recent publication by Grey and Finkel [24], the bulk of the research occupies one of three major categories; a general geology category that includes sedimentology and stratigraphy publications (e.g., [21,22,25,26]); a paleobiology category that includes taxonomic discoveries and descriptions (e.g., [27][28][29][30][31]); and a paleoecology category (e.g., [20,32]). Despite the abundant research carried out in this area, there is a lack of research into the subsurface imaging of the Joggins Formation, particularly those that utilize ground penetrating radar. The exception dating back to the early 1960's, until approximately 2008, when numerous 2D seismic lines were collected in the onshore Cumberland Subbasin for the purpose of hydrocarbon resources exploration. In addition, several petroleum boreholes were drilled to test areas and structures of potential interest. Some of these wells penetrated the Joggins Formation strata. The Athol Syncline was the focus of a regional seismic study and appeared to show evidence of rapid subsidence within the Cumberland Subbasin resulting from Mississippian salt withdrawal at depth, allowing for thick sediment accumulations and preservation [26]. Study area The community of Joggins is located approximately 230 km north of Halifax, Nova Scotia. Joggins and the corresponding outcrop lie alongside Chignecto Bay, a smaller bay within the grander Bay of Fundy. In this area, the tides ebb and flow some 13 m with each tidal cycle ( Figure 1). The mean annual temperature is approximately 6.0 °C, and the mean annual precipitation is 1154.8 mm. The ease of access, continuity and quality of the Joggins Formation exposure and the numerous road/grass surfaces over which a GPR survey could be completed are the main reasons behind its selection as a study site. The Joggins Fossil Cliffs (Joggins Formation) were nominated in 2008 as a United Nations Educational, Scientific and Cultural Organization (UNESCO) heritage site, together with six additional conformable formations (Ragged Reef, Springhill Mines, Little River, Boss Point, Claremont and Shepody) because of the exceptionally well-preserved rock outcrops and fossil assemblages that document life during the "Coal Age", a time when fertile forests and wetlands occupied the World's tropics [33]. Joggins and the nearby area have seen extensive coal mining that dates back to 1686 [34], continuing intermittently for over 200 years. During that time, elaborate underground mine workings were created, with many of the remnants (e.g., mine opening supports and railway line support timbers) visible in the cliff face from the intertidal zone (e.g., [34][35][36]). The surface development was also substantial, with timbers (rail track and support) and steel spikes still visible on the intertidal zone between Main Street and the Joggins Fossil Cliffs center. The remains of a wooden pier that existed for the load-out of coal onto ships during high tide for destinations throughout the Maritimes and New England [34]. The ground over which the GPR survey was conducted, has seen extensive, human-related activity, which needs to be accounted for when interpreting the data. Figure 1. Location map of Joggins, Nova Scotia. GPR data was collected from the areas outlined with red (basemap from [37]). Overburden geology Overburden (soil and glacial till) geology is important for GPR studies because of the direct impact it has on data collection, especially when attempting to image underlying bedrock. 
A review of borehole reports from the area describes the overburden as ranging in thickness from 1.5 to 20.1 m (average of 6.4 m). In the study area, the Joggins Formation is overlain by glacial till with a developed soil horizon on top for a total thickness of approximately 8 m (Figure 2). Joggins soils are grayish-brown, moderately fine-textured, and stony with poor internal drainage (e.g., [38,39]). The "A" horizon is at least 15 cm thick and characterized by pale gray to pale brown sandy loam and sandy clay loam with yellowish mottling, an indication of extended saturation and gleying (e.g., [38,39]). Underlying is a 10 to 25 cm thick yellowish and reddish interval of mottled material with a dullish brown matrix, manganese dioxide (MnO2) concretions and considerable free iron accumulation. The remaining "B" horizon begins at approximately 30 cm and continues to 60 cm depth. It is a compact, dense sandy clay loam with a weakening mottled texture and brown to dark grayish brown matrix [38]. Thin clay lines most of the voids, resulting in a low permeability zone. The underlying "C" horizon is also clay-rich, dense, and dull reddish-brown to grayish-brown with rare mottling. The detailed soil horizon characteristics are described by Nowland and MacDougall [38]. Glacial till is defined as a mass of unsorted debris deposited by a glacier and consisting of grain sizes ranging from boulders to clay [40]. The glacial till on which these soil horizons were formed is a grayish, silty clay loam that originated from the fine-grained grey and red sandstones, shales and mudstones of the Carboniferous coal measure beds [38]. A more recent study by Stea and Finck [41] names the till in this location as the Joggins Till. It is described as being a sandy silt that is dark yellowish-brown in colour with clasts composed of grey sandstones, shales, mudstones, and minor red sandstones and shales, limestone and coal [41]. Bedrock geology The Maritimes Basin is comprised of ten onshore/offshore subbasins, of which the Cumberland Subbasin is one (Figure 3). It hosts numerous, well-known coal deposits, of which numerous seams and their associated mine workings are visible from the intertidal zone. The Joggins Formation is part of the Cumberland Group and with the Mabou Group forms a continuous 14.7 km long outcrop ( Figure 4) along the coast of Chignecto Bay [24]. At approximately 4500 km 2 , the Cumberland Subbasin is a fault-bounded depocenter containing some 7000 m of Late Devonian to Early Permian sediment (e.g., [42,43]). The subbasin occurs over areas of northwestern Nova Scotia and to a minor extent, regions of southern New Brunswick. It is positioned to the south by the Cobequid Mountains, to the west by the Caledonian Highlands and Westmorland Uplift, and to the east by the Antigonish Highlands (e.g., [43,44]). It is suggested by Browne and Plint [45] that the subbasin margins are comprised to the south by the North Fault, to the north by the Caledonia-Dorchester fault system, and to the west by the Harvey-Hopewell Fault. The northwestern basin margins, as suggested by Martel [46] are characterized by a laterally trending basal horst along the Hastings Fault. A sequence of synclines occur in the basin, with the more significant examples being the Amherst, Athol, Scotsburn, Tatamagouche, and Wallace, in addition to a couple diapirism-related anticlines known as the Claremont-Malagash and Minudie, both being encircled by the aforementioned synclinal sequence [44]. 
According to Ryan and Boehner [44] the Cumberland Subbasin structural elements are correlated with basin growth features and include major synclines as well as growth and strike-slip faults. Those structural elements are either unrelated to or are indirectly related to salt tectonics and their related salt structures such as diapiric anticlines, diapirs, domes, and folds/faults related to salt movement [44]. The Cumberland Subbasin is considered a salt-withdrawal basin with both the slump features and movement of salt occurring concurrently with basin deposition. The Joggins Formation has been interpreted to contain three stratigraphic facies; a well-drained floodplain facies that includes reddish siltstone, mudstone and sandstone with minor greyish mudstone, rare coal and limestone beds; a poorly-drained floodplain facies comprised of interbedded deposits of sand-poor and sand-rich beds, green/grey mudstone associated with coal, carbonaceous shale, and minor limestone; and an open-water facies (marine deposits) of sandstones and siltstones with thin limestone [22]. The strata dip to the south at approximately 21°. Figure 3. Map showing the extent of the onshore and offshore regional Maritimes Basin extent. The Cumberland Subbasin is also included in the Maritimes Basin, but has been separately highlighted. The three major fault zones are as follows: CCFZ, Cobequid-Chedabucto Fault Zone; CFZ, Cabot Fault Zone; and HFZ, Hollow Fault Zone (modified from [47][48][49][50]). The acronym "PEI" stands for Prince Edward Island. [24,37]). The stratigraphic column shows the geological formations with their relative ages. The formations, from both the Cumberland Group and Mabou Group, make up the conformable 14.7 km of strata coastline that is UNESCO recognized. Equipment and methods To achieve the research objectives of this study, a GPR and RTK DGPS system was used. To assist with GPR interpretation, data gathering was paired with an RTK DGPS to precisely georeference the GPR data. The Joggins Formation outcrop along the shoreline also provides a valuable 2-D view, while the GPR attempts to add the 3 rd dimension. The outcrop study of the Joggins Formation helps to characterize and confirm the sedimentology and the internal architecture of the fluvial outcrop, particularly at smaller scales, which the GPR imaging is unable to resolve. A Sensors and Software Incorporated pulseEKKO Pro SmartCart GPR system ( Figure 5) was used for this study and supplied by the Dalhousie University Basin and Reservoir Laboratory. The cart is highly durable and has 4-wheels to provide the rapid and continuous collection of data in open areas. The cart is a self-contained system that includes the GPS rover receiver, transmitting and receiving antennae, digital video logging screen, and power supplies for the screen and antennae. The triggering method for the GPR survey is the built-in odometer. The transmitting and receiving antennae were oriented perpendicular to the line profile direction. The 50 MHz antennae with the standard transmitter/receiver configuration and a separation of 1.0 m was used for all lines. It was chosen because the specifications suggested a depth of penetration and resolution sufficient for Joggins Formation imaging. The 50 MHz antennae have a length of 2 m and a nominal spatial resolution length of at most 0.5 m. The step size is 0.1 m, the time window is 400 ns and there are 250 points per trace. The sampling interval is 1600 picoseconds (ps). The transmitter pulsar voltage is 1000 volts. 
The assumed velocity was 0.100 m/ns, which is between the value for wet clay and dry clay. The imaging of the Joggins Formation is captured to a depth of approximately 300 ns two-way travel time (TWT), corresponding to a depth of roughly 17.0 m. Setup parameters are listed in Table 1. The GPR system was paired with a RTK DGPS to provide a fully georeferenced group of survey lines with positional accuracy of approximately ±2 cm [17]. The survey incorporated a Leica GPS1200+ Series High Performance Global Navigation Satellite System (GNSS) to apply differential corrections and broadcast accurate location data to the rover receiver. The GPS system consisted of a base station and a transmission antenna used to transmit corrections from the base station to the rover receiver in real-time ( Figure 6). The rover receiver was mounted to the midpoint of the GPR cart. The base station was placed over a drilled water well with established surveyed coordinates (UTM Zone 20T; Easting = 387,098.72; Northing = 5,061,126.31; Elevation = 26.453 m) at the rear (water side) of the Joggins Fossil Cliffs Centre. As the GPR cart is pushed along a survey line, the rover receiver acquires GPS coordinates and the wander or drift that is recorded by the base station is subtracted in real-time from the coordinates recorded by the GPR cart. The corrected points are then recorded into the radargram. Figure 6. Sketch of the global positioning system equipment used to provide a fully georeferenced ground-penetrating radar data set (modified from [17,48]). The base station was assembled over a drilled water well with known coordinates. The transmission antenna was placed adjacent to the base station. The rover was mounted on the GPR SmartCart for providing survey line locational coordinates. The processing workflow follows three main tasks; the first being the selection of an acceptable GPR data processing workflow; the second being the selection of the appropriate parameters and inputs for each processing step, where required (the dewow filter does not have any inputs, it is simply applied or not applied); and finally, the observation of end results for each processing step and the correction of any issues caused by an incorrect parameter [51]. EKKO_Project™ software was used for editing, processing, and viewing the GPR data. The software was developed by Sensors and Software Incorporated and is a professional "software version that allows for data plotting, editing and full processing routines including spatial and temporal filters, migration, instantaneous attributes, amplitude spectra, CMP velocity analysis and more." All GPR data were post-processed. Processing was performed using iterations paired with descriptions of how and when each processing technique should be applied. Processes were applied individually and in conjunction with other processes until the radargram was sufficient for interpretation. The GPS data was collected concurrently with the GPR data on the Sensors and Software Digital Video Logger (DVL) and was added to the GPR data using the file that recorded GPS positions at regular trace intervals. The GPS data was stored as the GGA (Global Positioning System Fix Data) format, which is a standard format recognized by the National Marine Electronics Association. The GPS data was converted to UTM coordinates and the step-size re-calculated. Topographic correction of the GPR data along the survey lines was performed using the EKKO_Project™ software. 
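The depth and resolution figures quoted above follow from simple relations between the assumed velocity, the two-way travel time, and the antenna frequency. The sketch below is a back-of-envelope check, not part of the EKKO_Project workflow; note that with v = 0.100 m/ns a 300 ns two-way time corresponds to about 15 m, so the ~17 m quoted in the text appears to imply a slightly higher effective velocity.

```python
V_ASSUMED = 0.100   # m/ns, assumed velocity (between wet and dry clay)
F_ANTENNA = 50e6    # Hz, antenna centre frequency

def twt_to_depth(twt_ns, v_m_per_ns=V_ASSUMED):
    """Depth (m) of a reflector from two-way travel time (ns)."""
    return v_m_per_ns * twt_ns / 2.0

def quarter_wavelength_resolution(v_m_per_ns=V_ASSUMED, f_hz=F_ANTENNA):
    """Nominal vertical resolution (m), taken as a quarter wavelength."""
    wavelength_m = (v_m_per_ns * 1e9) / f_hz   # velocity converted to m/s
    return wavelength_m / 4.0

print(twt_to_depth(300.0))               # ~15 m for a 300 ns reflector
print(twt_to_depth(400.0))               # ~20 m maximum for the 400 ns time window
print(quarter_wavelength_resolution())   # 0.5 m, matching the stated nominal resolution
```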
The topographic variation along the survey areas is shown on the inset elevation profile plot in Figure 7a. Signal saturation correction or dewow is a type of time filter and was applied to each trace for the removal of the initial DC component and low-frequency, slowly decaying "wow" [52,53]. This is caused by the arrival of early waves, dynamic range limitations on instrumentation, and/or inductive coupling effects and becomes superimposed on the high frequency reflections [52,53]. It is typically almost always applied and is usually the first process applied. To compensate for the spreading and attenuation of the propagating wave front, the Spreading and Exponential Compensation (SEC) gain was used to apply the exponential gain (approximately 1/r2) that compensates for the spreading and attenuation of the propagating wave front. The input parameters necessary for this gain are an attenuation value for the substrate, a beginning value to be added to the exponential gain function and a maximum value for the gain. The average time-amplitude plot for each trace was examined both before and after the application of the gain to ensure it was properly applied as described by Annan [54]. Each line had unique attenuation, start and maximum values. The values for attenuation ranged between 2.08 and 5.31 dB/m with an average of 3.63 dB/m. The start gain value ranged from 0.62 to 1.44 with an average of 0.92. The maximum gain value applied ranged from 32 to 222 with an average of 123.79. Table 2). The roads are primarily non-linear, so the survey was completed using numerous short straight-line segments. The GPR survey areas are sparsely populated; however, there are an abundance of surficial features, both human-made and natural, that may have varying effects on the quality of the GPR data collected. Many of the surficial features and objects that occur adjacent to or in the vicinity of GPR data collection are shown in Figure 7b. Residential dwellings are generally well-spread out, but are located in certain areas along Hardscrabble Road, Main Street, and Mitchell Street. Along with these dwellings is the associated infrastructure, mainly utility poles and power lines, which do occur in abundance adjacent to the three road surfaces surveyed. The infrastructure also includes steel guard rails with wooden posts along the road areas that closely border the cliff edge. Areas of vegetation (e.g., grass, plants, and trees) are also located along the flanks of the roads where the GPR surveys were completed on. Due to the variation in surficial features along the GPR survey lines, it may be possible to look for subtle changes in the reflection profiles and correlate the response to a particular feature. Processed radargrams The completion of the GPR survey at Joggins, Nova Scotia resulted in 42 unique radargrams across three road surfaces (Hardscrabble Road, Main Street, and Mitchell Street) and a grassy area adjacent to Main Street. The 42 processed radargrams are displayed in Figure 9. All radargrams have an initial high amplitude, thick, horizontally continuous reflector. It always occurs at the top of the radargram and follows the topography of the individual survey line. In addition, all radargrams display a second high amplitude, thick, primarily horizontally continuous reflector that is always below the primary top reflector. It too will generally follow the topography of the individual survey line, but can be discontinuous. 
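Before turning to the individual reflectors, the two trace-level corrections described above, dewow and SEC gain, can be sketched as follows. This is a minimal illustration only: the running-mean window, the gain form, and the default parameters (survey-average values quoted in the text) are assumptions, and the actual EKKO_Project implementation may differ in detail.

```python
import numpy as np

def dewow(trace, window_samples=15):
    """Remove the low-frequency 'wow' by subtracting a running mean from each sample."""
    kernel = np.ones(window_samples) / window_samples
    slow_component = np.convolve(trace, kernel, mode="same")
    return trace - slow_component

def sec_gain(trace, dt_ns, v_m_per_ns=0.100, attenuation_db_per_m=3.63,
             start=0.92, max_gain=124.0):
    """Apply a spreading-and-exponential-compensation style gain.

    Gain grows with the one-way distance r = v*t/2 to offset geometric spreading
    and exponential attenuation (given in dB/m), and is clipped at max_gain.
    Defaults are the survey-average values reported in the text, used for illustration.
    """
    t_ns = np.arange(trace.size) * dt_ns
    r = v_m_per_ns * t_ns / 2.0                                # one-way distance (m)
    gain = (start + r) * 10.0 ** (attenuation_db_per_m * r / 20.0)
    return trace * np.minimum(gain, max_gain)

if __name__ == "__main__":
    # Hypothetical 250-sample trace at 1.6 ns sampling, as in the survey setup.
    rng = np.random.default_rng(0)
    raw = rng.normal(size=250) * np.exp(-np.arange(250) / 80.0)
    processed = sec_gain(dewow(raw), dt_ns=1.6)
    print(processed[:5])
```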
Numerous radargrams contain high amplitude, sharp, concave downwards reflectors that have a consistent shape and are either overlapping or consistently spaced. In general, coherent reflectors are absent below approximately 6 to 8 m depth. The reflectors that are visible in the radargrams are summarized in the following radar reflectors section and in Figure 10. In total, 13 radar reflectors are described and interpreted. Radar reflectors. Selected results from four areas surveyed across the Joggins Formation are presented here. The variability in surficial features along the various surveys results in reflectors that can be correlated closely with these features. A total of thirteen unique radar reflectors have been identified from the GPR data and are briefly summarized (Figure 10). RR1 is a high amplitude, thick reflection; it is the first signal measured by the receiver and occurs in all 42 collected radargrams. It is continuous for the complete length of each radargram and is followed by a low amplitude signal. It follows the natural topography of the GPR survey line. There are no other features that occur with this reflector. RR2 is a high amplitude, thick reflection; it is the second signal measured by the receiver and also occurs in all 42 collected radargrams. It is typically continuous for the complete length of each radargram, although it can be discontinuous in several lines. It is followed by a low amplitude signal. It follows the natural topography of the GPR survey line. This reflector is affected by subsurface and surficial features. Reflector RR3 is characterized by a thin, high amplitude reflection, followed by a lower amplitude signal. It occurs at approximately 8 m depth in the radargrams in which it appears, which is consistent with the approximate depth of the overburden in the area. The RR4 reflector is common throughout the radargrams. It is characterized by regularly spaced, concave downwards reflectors in which a high amplitude arrival is followed by a low amplitude signal. They have a consistent shape and are sharply outlined. The RR5 reflector is also common throughout the radargrams. It is characterized by irregularly spaced, concave downwards reflectors in which a high amplitude arrival is followed by a low amplitude signal. They have a less consistent shape and are not as sharply outlined as the RR4 reflector. The RR6 reflector is not common throughout the radargrams and is characterized by a break in the second thick, high amplitude reflector. The RR7 reflector is characterized by high amplitude, repeating, and parallel reflectors that reverberate throughout the radargram. The RR8 reflector is reflection-free; no coherent reflectors are observed in these portions of the radargrams. RR9 is a thick, high amplitude, concave downwards reflection that appears to be visible in only a couple of radargrams. It is associated with the high amplitude, thick, continuous reflector (RR2). RR10 is a high amplitude, thick, and discontinuous reflector that occurs below RR2. It displays an undulating profile and is visible in only a single radargram. RR11 is a continuous, upright, high/low amplitude reflector that mimics a teepee shape. It is very well defined and sharp. It occurs in only five radargrams. RR12 is a vertical, mottled high and low amplitude reflector that is visible in only a single radargram. The features are not continuous throughout the whole radargram. RR13 is a small, near-surface, high amplitude reflector. It is concave downwards and sharp with a consistent shape. These reflectors are associated with RR2.
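The concave-downward shapes noted for RR4, RR5, RR9, and RR13 are the classic signature of compact (point-like) scatterers. As a minimal illustration, not taken from the paper's own processing, the two-way travel time recorded as the antennas pass a compact object traces out a hyperbola, which is why isolated poles, trees, and boulders appear as downward-opening curves in a radargram; the target position and distance below are hypothetical.

```python
import numpy as np

def diffraction_hyperbola(x_m, x0_m, d_m, v_m_per_ns=0.100):
    """Two-way travel time (ns) from a point diffractor at horizontal position
    x0_m and radial (slant) distance d_m, for antenna positions x_m.

    t(x) = (2 / v) * sqrt(d^2 + (x - x0)^2): the apex lies at the closest approach,
    and the limbs open downward because later times plot deeper in a radargram.
    """
    return 2.0 / v_m_per_ns * np.sqrt(d_m ** 2 + (np.asarray(x_m) - x0_m) ** 2)

# Hypothetical example: a pole-like target 3 m from the line, profile positions every 0.1 m.
x = np.arange(0.0, 20.0, 0.1)
t = diffraction_hyperbola(x, x0_m=10.0, d_m=3.0)
print(t.min(), t.max())   # apex ~60 ns at x = 10 m, rising toward the line ends
```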
Discussion RR1 is defined as the direct air wave. This high amplitude reflection is the first signal measured by the receiver and occurs in all 42 collected radargrams. It is continuous for the complete length of each radargram and is followed by a low amplitude signal. Typical examples of this radar reflector can be seen in all radargrams, including radargrams 31 and 37 ( Figure 11). Since the radargrams have all been georeferenced, the direct air wave follows the topography. RR2 is defined as the direct ground wave. This high amplitude reflection is the second signal measured after the direct air wave and is also present in all 42 radargrams. The majority of the radargrams display a continuous direct ground wave, except for lines 27, 39, and 45 in which they are discontinuous. Typical examples of this radar reflector can be seen in all radargrams, including radargrams 31 and 43 ( Figure 12). The undulating nature of the ground wave highlights the small topographic changes that occur over the lengths of each GPR line segment. RR3 is interpreted to represent the angular unconformity contact between the Carboniferous-aged Joggins Formation and the overlying Quaternary-aged glacial till and soil cover. The sharpness of the contact as well as the depth it is occurring at (6 to 8 m) are the main reasons for this interpretation. Curiously though, this reflector is not present in all radargrams, even though all survey lines were completed on top of overburden overlying the Joggins Formation. Examples of this radar reflector can be seen in several radargrams, including radargrams 09 and 13 ( Figure 13). The RR4 reflector correlates to utility poles that are erected at certain locations adjacent to the road surfaces. Several radargrams were recorded along road surfaces that have regularly spaced wooden utility poles located less than 10 m from the centerline of the road. The data from these lines show regularly spaced, clear, and sharp, concave downwards reflectors whose locations match those of the utility poles along the survey lines. Typical examples of this radar reflector can be seen in several radargrams, including radargrams 18 and 23 ( Figure 14). The RR5 reflector is also composed of concave downwards reflectors. However, these reflectors are noticeably different when compared to the utility poles described by RR4. Through careful examination of the radargrams, it was determined that RR5 represent trees. In contrast to the regularly spaced utility poles giving regular and clear diffraction hyperbolae, trees on the other hand produce hyperbolae that are randomly occurring and are overlapping. This theory was tested by viewing the GPR lines that did not have power line infrastructure located next to the road surface, but did have abundant, randomly occurring trees. Representative examples of this radar reflector can be seen in radargrams 10 and 32 ( Figure 15). The RR6 reflector is rarely observed in the radargrams. The survey lines where this reflector becomes discontinuous are also the areas where the GPR has passed beneath a power line. It can therefore be surmised that overhead power lines can cause a brief break in the direct ground wave. Typical examples of this radar reflector can be seen in radargrams 27 and 39 ( Figure 16). We interpret the RR8 reflector as areas where the GPR signal has been attenuated. 
This typically signifies either a lithological unit that is massive and homogenous, the presence of dissolved minerals in groundwater with highly conductive properties, and/or the presence of clay-rich sediments (e.g., [56,57]). In this study, it was determined that the attenuation was caused by the overlying clay-rich glacial till and soil. This reflector makes up the majority of all radargrams and is widespread among all 42 radargrams, indicating that at least locally, the clay-rich glacial till and soil are probably present. Typical examples of the attenuated signal can be seen in radargrams 19 and 20 ( Figure 18). RR10 is interpreted to represent the undulating angular unconformity contact between the Carboniferous-aged Joggins Formation and the overlying Quaternary-aged glacial till and soil cover. The sharpness of the contact as well as the depth it is occurring at (6 to 8 m) are the main reasons for this interpretation. The lone example of this radar reflector can be seen in radargram 30 ( Figure 20). RR12 is interpreted to be two instances where data skips occurred (not collected at a trace location). These two vertical features are not related to any subsurface features. The sole example of this radar reflector can be seen in radargram 25 ( Figure 22). Despite these two instances of non-signal, there are noticeable repeating hyperbolae occurring in the background, which correlates to the wooden utility pole at position 75 m near the end of the line. RR13 is interpreted to represent the locations of boulders and are associated with the RR2 radar reflector (direct ground wave). Since the area does contain several meters of glacial till, it would be reasonable to assume that larger rocks are present beneath the areas traversed with the GPR. Typical examples of this radar reflector can be seen in radargrams 09 and 10 ( Figure 23). Performing a GPR survey as it was done over the Joggins Formation presented a number of challenges. In the 42 radargrams that were gathered over the study area, it does not appear that any contain reflections that would be considered those of the Joggins Formation. The most probable culprit for the failure in imaging the Joggins Formation strata is the thick clay-rich soil and glacial till overburden that would have greatly attenuated (radar reflector 8) the transmitted energy in the subsurface. Another probable culprit is the Joggins Formation itself. It is well known from viewing the outcrop on the intertidal area that the strata dip at a constant 21° and are highly variable with respect to both lithology and thickness. Sedimentary beds of a certain thickness would not be visible since they are below the resolution of the antennae used. Furthermore, the beds are composed of a wide range of lithologies, from clay-sized particles up to gravel-sized particles; thus, the individual beds themselves could be contributing to attenuation as well. Power lines run along the edge of the roads that were surveyed, which may result in some problems. At certain sections of the survey, the GPR was near the cliff edge. This could translate into some edge effects in the radargrams. The GPR data are contaminated to varying degrees by above-ground objects. Several lines were recorded along road surfaces that have regularly spaced wooden utility poles located less than 10 m from the centerline of the road. 
The data from these lines show regularly spaced, clear and sharp diffraction hyperbolae whose locations match those of the utility poles along the survey lines. In contrast to the regularly spaced utility poles giving regular and clear diffraction hyperbolae, trees also cause a similar phenomenon in the radargrams, except that the hyperbolae occur randomly and overlap. This interpretation was tested by viewing the GPR lines that did not have power line infrastructure located next to the road surface, but did have abundant, randomly occurring trees. Typical examples are shown in radar reflector 5, which displays abundant, irregularly spaced and overlapping hyperbolae. Similar diffraction hyperbolae occur in lines 9 to 13 and 31 to 35 on Hardscrabble Road, and line 43 on Main Street. In a few instances, a power line crosses over Hardscrabble Road or Main Street. When this occurs, a noticeable feature can be observed in the radargrams (radar reflector 6): the clear and crisp reflectors that are typical of the utility poles become more chaotic. This implies that power lines passing overhead do influence the GPR signals. Conclusions In this study, we have utilized GPR in an attempt to image the internal geometry and architecture of the Joggins Formation and thus aid in extending the 2D outcrop into 3D by way of modeling the larger scale features. Through iterations of GPR processing techniques and examination of the resulting radargrams, it can be concluded that the survey was unsuccessful in showing Joggins Formation structures and internal architecture. This is likely the result of a combination of factors, with the dominant one being the thick, highly conductive nature of the clay-rich glacial till/overburden. Unfortunately, numerous difficulties relating to both the Joggins Formation itself and the area over which the GPR data were collected prevented any imaging of the Joggins Formation. The imaging issues directly related to the Joggins Formation most likely include (1) the dipping beds cause increased refraction compared with horizontal or nearly horizontal beds, (2) the scale of the individual beds is too fine for the GPR configuration used; at best, it could only image the thicker beds, and (3) the abundant jointing and fracturing visible in the outcrop exposure probably permeates the subsurface, thereby compounding imaging problems. It is possible that lower frequency antennae could have provided a depth of penetration sufficient to image the Joggins Formation; however, the resolution would have been reduced and the likelihood of being able to interpret the dipping strata would have been compromised. A variety of overburden and surface/subsurface objects may also affect GPR data collection. Some of the most likely issues include (1) the soil and glacial till making up the overburden layer are clay-rich, thereby leading to signal attenuation, (2) the compacted road surface also contains clay, in addition to other materials that would further enhance signal attenuation, and (3) metallic and wooden objects either exposed at the surface or buried will create additional artifacts in the radargrams. These objects include galvanized steel guard rails, overhead power lines and their associated infrastructure, traffic signage, and cars parked in driveways or passing the GPR equipment while collecting data. Other potential issues that may hamper efforts include the uneven terrain over which the data were collected and possible GPR data collection issues near the edge of a vertical cliff face.
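To put the dominant attenuation problem in rough quantitative terms, the survey-average attenuation recovered during SEC gain fitting (about 3.6 dB/m) already implies tens of decibels of two-way loss for a reflection from bedrock beneath roughly 8 m of clay-rich overburden, before any spreading or transmission losses are counted. A minimal back-of-envelope sketch, treating the reported average gain-fit value as an assumed property of the overburden:

```python
def two_way_loss_db(depth_m, attenuation_db_per_m=3.63):
    """Two-way attenuation loss (dB) for a reflector at depth_m,
    ignoring spreading and transmission losses."""
    return 2.0 * depth_m * attenuation_db_per_m

for depth in (2.0, 4.0, 8.0):
    print(f"{depth:.0f} m: {two_way_loss_db(depth):.0f} dB")
# ~8 m of till at ~3.6 dB/m already costs ~58 dB both ways, consistent with the
# loss of coherent reflectors below about 6-8 m noted in the radargrams.
```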
The effects of surface objects, such as trees and utility lines are well-documented as being the culprits of many diffraction hyperbolae seen in the radargrams. Despite the lack of subsurface imaging from the Joggins Formation, significant insight was gained as to the limitations of the GPR application in this type of environment. There was also some moderate success in imaging the angular unconformity, which was apparent in several radargrams. Perhaps the imaging of the unconformity was related to variations in overburden thickness and/or clay content. Although the primary objective of imaging the Joggins Formation was unsuccessful, it nevertheless increased our knowledge concerning the true impact of clay-rich overburden sediments on bedrock imaging, in addition to the impacts that surface features can cause on GPR data collection. The knowledge gained from this study can be utilized in future GPR surveys, particularly with respect to geoforensic studies where the study areas contain similar surface infrastructure (e.g., utility poles, power lines, etc.). This study also demonstrates the usefulness of shallow subsurface GPR for geohazard assessments. It is important to reiterate that this was the site of extensive past coal mining efforts with well-developed surface infrastructure. Therefore, the area probably contains an abundance of erratic metal objects, large boulders, etc., that would have an influence on the generated radargrams.
A Space Weather Mission Concept: Observatories of the Solar Corona and Active Regions (OSCAR) Coronal Mass Ejections (CMEs) and Corotating Interaction Regions (CIRs) are major sources of magnetic storms on Earth and are therefore considered to be the most dangerous space weather events. The Observatories of Solar Corona and Active Regions (OSCAR) mission is designed to identify the 3D structure of coronal loops and to study the trigger mechanisms of CMEs in solar Active Regions (ARs) as well as their evolution and propagation processes in the inner heliosphere. It also aims to provide monitoring and forecasting of geo-effective CMEs and CIRs. OSCAR would contribute to significant advancements in the field of solar physics, improvements of the current CME prediction models, and provide data for reliable space weather forecasting. These objectives are achieved by utilising two spacecraft with identical instrumentation, located at a heliocentric orbital distance of 1~AU from the Sun. The spacecraft will be separated by an angle of 68$^{\circ}$ to provide optimum stereoscopic view of the solar corona. We study the feasibility of such a mission and propose a preliminary design for OSCAR. Introduction The OSCAR mission concept was conceived during the Alpbach Summer School 1 2013 on space weather over a period of 2 weeks. We report here the resulting concept that, we believe, is of significant interest for the design of a future space weather oriented space mission. Our mission consists of twin satellites orbiting the Sun at 1 AU, with one leading the Earth and the other trailing behind it. They are designed to improve significantly our knowledge of space weather phenomena as well as to develop a space-based space weather forecasting system. This paper is organised as follows. We first give a quick introduction on the background and motivations that lead to this mission concept (Sect. 2). We then develop the mission objectives and the associated key scientific requirements of OSCAR (Sect. 3). In Section 4, we provide details of the instrumentation needed onboard OSCAR to satisfy those requirements. A preliminary design of the spacecraft for such a payload is given in Section 5. The mission design, from the orbit selection to the operational modes and the ground segment, is then described in Section 6. In the following Section 7, an estimate of the mission cost and subsequent descoping options is discussed. We finally conclude our study in Section 8. Background and motivation Space weather describes the changes in the near-Earth ambient plasma that result from solar and cosmic activity. It is a field of major importance in society today as the environmental conditions in the vicinity of Earth severely affect space-and groundbased systems (see, e.g., Schwenn 2006;Pulkkinen 2007). Most frequent space weather events originate from the Sun. In order to improve our knowledge and anticipation of space weather phenomena one needs to (i) study the origin of those events inside and at the surface of the Sun, (ii) accurately describe their propagation in the interplanetary medium from Sun to Earth and (iii) improve our understanding of how they impact Earth's magnetosphere and atmosphere. Solar dynamics lead to the triggering of CMEs and indirectly to the development of CIRs that propagate in the solar wind. When they encounter the Earth's magnetosphere, they trigger strong magnetic reconfigurations known as geomagnetic storms. 
CMEs alone cause more than 80% of geomagnetic storms and thus represent a severe threat for modern technology (Zhang et al. 2007; Liu et al. 2014). CMEs originate from the complex magnetic structures within the solar ARs. These regions are created by magnetic flux tubes arising from beneath the photosphere, ultimately forming loops in the corona that are anchored to the solar photosphere (for a review, see Fan 2009). Magnetic shears and stresses within the photosphere lead to an increase of the energy stored in the coronal loops, often triggering magnetic reconnection inside them and leading to explosive events called flares which emit energetic particles and radiate in the X-ray and extreme ultra-violet (EUV) bands. In some particular configurations, CMEs are triggered and erupt from the AR (see, e.g., Nitta et al. 2013). Note that CMEs are not systematically correlated with flares (Webb & Howard 2012), but the majority of large flares are observed to be followed by CMEs (Yashiro et al. 2005). Multiple models have been developed to pinpoint the exact triggering mechanism(s) of CMEs, but their unification is still a major challenge in solar astrophysics (see Zuccarello et al. 2013, and references therein). Such unification would greatly help space weather forecasting programmes to anticipate CME launches from the solar surface, and discriminate the potential threats further in advance. Hence, the understanding of the trigger of CMEs is a major goal for the progress of solar physics as well as the development of space weather forecasting systems. Once triggered, the CMEs propagate outwards from the Sun in the corona and can reach a distance of 1 AU in a time ranging from 14 h up to 5 days (Chen 2011). The fast-moving CMEs, when geo-directed, are generally considered to be the most dangerous space weather events and are also the most difficult to anticipate due to their high propagation velocity. In addition, CMEs can either accelerate or decelerate during their propagation from Sun to Earth (Gopalswamy et al. 2000) depending on the ambient solar wind in which they evolve and on their initial energy (Manoharan 2006). Hence, forecasts have to use either well-cadenced observations of the heliosphere or solar wind models that accurately take the acceleration/deceleration processes into account. The best well-cadenced heliospheric data today comes from the Solar Terrestrial Relations Observatory (STEREO) mission, which proved to be very useful for the prediction of CME arrival time at Earth. The ability to combine direct images of the heliosphere from the Sun to 1 AU from different points of view with the STEREO spacecraft led to significant novel capabilities in the context of space weather forecasting. To cite a few examples, Möstl et al. (2014) showed that the use of heliospheric images was extremely effective in reconstructing and forecasting high-speed solar wind streams. Using the stereoscopic capabilities, Davies et al. (2013) recently developed a robust technique to determine time profiles and propagation direction of transients in the solar wind. These observations were even shown to provide the capability to determine the excitation site of solar energetic particles observed at 1 AU and to relate this site to a particular CME event (see, e.g., Rouillard et al. 2012). However, because of the choice of orbits for the two STEREO satellites, the data best suited for stereoscopic observations is available for a very limited amount of time.
Hence, no reliable long-term forecasting system of CMEs can be set up from those observations. CIRs are produced when the fast solar wind catches up with the slow wind. They consist of high-pressure regions that co-rotate with the Sun in the solar wind. They eventually lead to the formation of co-rotating shocks in the supersonic wind, generally at distances larger than 2 AU (Richardson 2004). Depending on their magnetic field orientation with respect to the Earth's magnetic field, CIRs can transfer part of their energy to the magnetosphere which, in turn, causes weak to moderate magnetic storms. Some of these storms can cause significant damage, not only to space technology but also to communication, transportation and electrical power systems. There are currently missions like STEREO and the Solar Dynamics Observatory (SDO) which aim to study at least some aspects of CMEs and CIRs. While these missions are still providing important new insights for our understanding of the solar corona, none of them truly tackle the hard task of providing a space-based, reliable and long-term space weather forecasting system. In addition, STEREO's stereographic images demonstrated the large gain we obtain from observing the Sun simultaneously from different points of view (for recent reviews, see Bemporad 2009; Zuccarello et al. 2013, and references therein). Here we propose a new mission concept, OSCAR, which aims to provide new and decisive stereoscopic data that will allow us to finally identify how CMEs are triggered and how to forecast them. With our mission design, for the first time and during its whole lifetime, an efficient forecast of geo-effective CMEs and CIRs will be operating in near-real time. Taking advantage of the position of the spacecraft, CMEs will be monitored and forecasted with remote-sensing instruments and CIRs will be forecasted thanks to in-situ measurements before reaching the Earth.
Combining science objectives with near-real time forecasts
The OSCAR mission addresses the difficult challenge of space weather forecasting, from the initiation of interplanetary CMEs (hereafter generically referred to as CMEs) to their coupling with Earth's magnetosphere. In this context, we are interested in the most energetic CMEs and CIRs that can affect the terrestrial environment and human life. In this section we define the mission objectives (Sect. 3.1) as well as the associated scientific key requirements (Sect. 3.2).
Mission objectives
We break down the mission objectives into one primary objective and two secondary objectives.
Unveil the trigger mechanism(s) of CMEs in active regions and their robust forecasting indices
The primary objective is to provide data to efficiently forecast the onset of CMEs in active regions on the solar surface. This has to be achieved through the study of the 3D structure of coronal loops to unveil the physical trigger mechanism(s) of CMEs. The highly energetic CMEs that we are interested in are strongly associated with M- and X-class flares (only 10% of the X-class flares are not associated with CMEs, see Yashiro et al. 2005). We will observe the magnetic field of sunspots, the 3D structure of coronal loops and the flaring process in ARs at the onset of CMEs. We know that flares and their associated CMEs (when these are present) occur on a time scale from minutes to hours (Shibata & Magara 2011).
High-cadence observations of these three quantities, one of the major challenges for this mission, will allow us to sample the whole eruption process in unprecedented detail. This will in turn provide strong constraints on the various physical models of solar eruptions (Zuccarello et al. 2013) and allow the identification of key plasma quantities as robust trackers of the trigger of CMEs. In addition, we will observe hundreds of ARs during the lifetime of OSCAR and be able to create a catalogue of AR topologies, associated with their ability to trigger CMEs. This catalogue will be used to validate theoretical models and numerical simulations in terms of necessary conditions (e.g., in terms of magnetic helicity in the magnetic structure of the AR or temperature profiles across the flux tubes) for the trigger of a CME. Hence, it will be highly valuable to further guide the future forecasting of CME triggers on the Sun.
2.a Provide the necessary data for near-real time forecasting of geo-effective CMEs and CIRs
Real-time estimates of arrival time for geo-effective CMEs remain rather inaccurate today (Davis et al. 2011) and strongly depend on solar wind models. The aim of OSCAR is to provide data for accurate prediction of arrival time of those CMEs (either with a data-driven or a purely empirical model, e.g. Howard & Tappin 2008). CIRs produce 13% of geomagnetic storms (Zhang et al. 2007). Because they can last a long time and even repeat themselves after a 27-day solar rotation, they represent a threat for space-based infrastructure (Borovsky & Denton 2006). In-situ measurements are consequently required to observe them. Since they co-rotate with the Sun, forecasting them requires in-situ observations made well away from the Earth. With OSCAR, we intend to provide data for reliable forecasts of CIRs. Altogether the OSCAR mission will provide forecasting data, updated every 6 h, of occurring CMEs based on high-cadence remote-sensing measurements and of CIRs based on well-located in-situ measurements. This will ensure a forecasting window of about 8 h for the fastest CMEs and of 2 full days for CIRs. Thus, OSCAR will sustain reliable forecasting of CMEs and CIRs throughout its lifetime.
2.b Enhance our understanding of spatial structures of CMEs and CIRs at 1 AU
Our knowledge of the composition, geometry and magnetic field of geo-effective CIRs and CMEs is mainly based on local measurements in the vicinity of Earth (except for the STEREO measurements). With the OSCAR mission we will be able to cover an angle of 68° for CME in-situ measurements at 1 AU. In addition, OSCAR will be able to trace the evolution of CIRs on a short time scale of 5.5 days, which was shown by STEREO to be the shortest typical time scale for major changes in the solar wind structure (Gómez-Herrero et al. 2011). Combining this large set of in-situ measurements with the remote-sensing techniques described above, OSCAR shall be able to enhance our basic understanding of space weather relevant CME/CIR aspects such as (i) CME propagation out to 1 AU (Dal Lago et al. 2013), (ii) CME shock acceleration of energetic electrons (Simnett et al. 2002), (iii) interaction of CMEs and CIRs at 1 AU (Gómez-Herrero et al. 2011) and (iv) CIR shock acceleration of low-energy ions, which were found to be an additional precursor for geomagnetic storms (although in general the shock structure of CIRs fully develops much beyond 1 AU, it was shown by Smith et al. (2004)
that some inner CIRs can be fully developed and are at the origin of some large geomagnetic storms). Finally, OSCAR also provides numerous targets of opportunity for studying space weather events from the lower corona to 1 AU with multiple line-of-sight observations and multiple in-situ measurements. We choose to focus here on the main objectives of OSCAR, which are directly related to the forecast of space weather events (see list above), to clearly outline the feasibility of this mission concept. We leave a more exhaustive objectives list, with more science-oriented aspects, for further studies.
Scientific requirements
In order to fulfil our objectives, we detail the scientific requirements for each of them in Table 1. We separate (top level) global requirements from (second level) quantitative requirements. We give hereafter short justifications for some key requirements, which will be used to identify the instruments required for OSCAR (Sect. 4). The stereographic observations (required for any 3D reconstruction of coronal loop structures) are one of the cornerstones of our mission. Thanks to the experience gained from STEREO, we can precisely define an optimal separation angle between the two lines of view for the three-dimensional reconstruction of coronal loops in an AR. Aschwanden et al. (2012) demonstrated that the quality of a stereoscopic reconstruction depends on both (i) the quality of correspondence between the two images and (ii) the accuracy of the triangulation process. The former decreases as the separation angle increases, while the latter is maximised for an angle of 90°. The product of the two factors gives the overall quality factor of the reconstruction shown in Figure 1. An optimal angle is found around 68° and an acceptable stereoscopic reconstruction is obtained at a separation angle between 22° and 125°. The onset of a CME occurs on a time scale of tens of seconds in a coronal structure. Hence, our ambitious primary objective requires high-cadence, high-resolution (see Table 1) observations of coronal loops (and associated photospheric magnetic fields) in ARs. The major requirements for our secondary objectives are (i) the ability to provide sufficiently regular data for an efficient forecast and (ii) to accurately measure the propagation and expansion of CMEs (from the lower corona to 1 AU) and the evolution of the spatial structure of CIRs (at 1 AU) with a combination of remote-sensing and in-situ observations. The former requires relatively high-cadence data (images of the Sun-Earth heliosphere every 2 h) to ensure that the fastest CMEs, which reach Earth within approximately 15 h, can be accurately forecasted. We refer the reader to Table 1 (and references therein) for more details regarding the other specific requirements. Some requirements apply to several objectives, in which case they have not been repeated for the sake of simplicity.
Table 1. Scientific requirements (objective; top level requirement; second level requirement):
1 - Trigger mechanism(s) of CMEs in ARs and their forecasting indices
- Stereographic view of coronal loops at different heights in the lower corona: the separation angle shall be between 22° and 125°, as close to 68° as possible (Fig. 1).
- Capture the time scale of flares and of the triggering sequence of strong CMEs: time resolution of 5 s for coronal loop images.
- Resolve distinct coronal loops in ARs: spatial resolution better than 500 km in the solar upper transition region.
- Synchronised stereographic images to ensure a proper 3D reconstruction during the eruption process: the two spacecraft shall be synchronised with a precision of 0.1 s.
- Observe the photospheric vector magnetic field in ARs: spatial resolution better than 750 km, with a precision of 0.1 G for the longitudinal field and 20 G for the transverse field.
- The duration of OSCAR shall ensure high statistics for the CME triggering: the duration of the mission shall be no less than 5 years.
2.a - Provide data for the forecast of geo-affecting CMEs
- Track geo-directed CMEs over the whole Sun-Earth distance: observations from the lower corona to 1 AU shall be possible.
- Determine the shape, direction and velocity of the leading edge of the CME (common with 2.b): the cadence shall be 2 h and a stereoscopic view shall be available.
- Sufficient data to forecast the arrival time of all geo-directed CMEs: the data shall enable a 2-day forecast updated every 6 h.
2.a - Provide data for the forecast of geo-effective CIRs
- In-situ measurements of geo-affecting CIRs before they reach Earth: one spacecraft shall be positioned on a Keplerian orbit close to 1 AU, following the Earth.
- Guarantee sufficient warning time: minimal separation between the Earth-following spacecraft and Earth of 29.7° (warning time of 2.25 days).
- In-situ measurements of the magnetic field (common with 2.b): the vector magnetic field shall be measured within the range ±200 nT with an accuracy of 0.1 nT and an operational time resolution of 1 min.
- Measurement of the in-situ solar wind proton plasma parameters (speed, temperature, density; common with 2.b): the solar wind protons shall be measured up to a speed of 1000 km/s (with a 5% accuracy) with an operational time resolution of 1 min.
2.b - Measure the propagation and spatial expansion of CMEs out to 1 AU
- Measure continuously the plasma parameters and the magnetic field at 1 AU (in CMEs and CIRs): the solar wind proton speed, temperature and density, the electron 3D velocity distribution, prominent ion charge states of C, O, Si, Fe and the magnetic field shall be measured every 15 min.
- Measure high energetic electrons accelerated at CME shock fronts close to the Sun: the energetic electrons passing the spacecraft at 1 AU in the range 40 keV-300 keV shall be measured every minute.
2.b - Measure the evolution of CIR spatial structures at 1 AU
- Remote-sensing and tracking of CIRs within one Carrington rotation: CIRs shall be monitored in the heliosphere over an elongation angle of 150° over 20 days (brightness sensitivity < 3·10^-16 of the solar brightness).
- Multi-point observations at 1 AU to detect changes in the structure of CIRs within one Carrington rotation: minimum separation angle of 66° in longitude to catch major longitudinal changes, latitudinal separation < 5°.
- Measure low energetic ion events accelerated at CIR shock fronts at or beyond 1 AU: the protons and alpha particles in the energy range 50 keV-4 MeV shall be measured every minute.
Anticipating the detailed design we give in Section 6, the combination of second level requirements for objectives (1) and (2a) suggests the use of two identical spacecraft separated by an angle of 68°, orbiting the Sun along Earth's orbit. Hence, one can easily imagine one spacecraft leading the Earth and the other trailing behind it, each separated from Earth by 34°. A schematic of this design is given in Figure 2.
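The CIR warning time quoted in Table 1 follows directly from solar co-rotation: a co-rotating structure that currently sweeps over a spacecraft trailing the Earth by 29.7° will reach the Earth one corresponding fraction of a solar rotation later. A minimal sketch of this arithmetic is given below; the synodic rotation period of about 27.27 days is a standard value assumed here, not stated explicitly in the text.

```python
# Warning time provided by an Earth-trailing spacecraft for structures
# that co-rotate with the Sun (e.g. CIRs).
SYNODIC_ROTATION_DAYS = 27.27   # assumed synodic (Carrington) rotation period

trailing_angle_deg = 29.7       # minimal Earth-trailing angle from Table 1
warning_time_days = trailing_angle_deg / 360.0 * SYNODIC_ROTATION_DAYS
print(f"Minimal CIR warning time: {warning_time_days:.2f} days")   # ~2.25 days

nominal_offset_deg = 34.0       # nominal per-spacecraft offset from Earth
print(f"Nominal warning time: "
      f"{nominal_offset_deg / 360.0 * SYNODIC_ROTATION_DAYS:.2f} days")  # ~2.6 days
```

The first result reproduces the 2.25-day figure used in the requirement; the nominal 34° offset gives slightly more margin.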
This design also fulfils all the other scientific requirements listed in Table 1. As an example, the satellite orbiting behind Earth will measure CIRs 2.25 days before they hit Earth. This design also happens to be achievable with moderate technological developments, as will be made clear in Sections 5 and 6.
Instrumentation
Each spacecraft will carry an identical set of instruments for the purpose of investigating the Sun and the heliosphere. A suite of telescopes will image the space between the solar surface and 1 AU on the Sun-Earth line almost continuously. Another suite of sensors will measure the in-situ particle and magnetic field environment at 1 AU. An overview of the planned instruments, with estimates of their mass and power consumption, is given in Table 2. In order to demonstrate the feasibility of OSCAR, we particularly outline the heritage of our planned instruments. Should this mission concept be further explored, we give paths of improvement for the critical instruments of OSCAR.
Remote-sensing instrumentation
The package of remote-sensing instruments will consist of four instruments carrying in total six telescopes. Together they will be able to cover the photosphere, the upper transition region, the corona and the heliosphere to beyond 1 AU. Photospheric Imager (PIM) - The Photospheric Imager will provide 2D maps of the magnetic field vector in the photosphere by measurements of the Zeeman effect. Similar instruments, like SDO's HMI, have been built in the past, and the upcoming Solar Orbiter Polarimetric and Helioseismic Imager (PHI) will serve as heritage (Gandorfer et al. 2011) in this case. Unlike PHI, only one telescope is required for PIM, since the resolution of PHI's High-Resolution Telescope (HRT), 200 km at 0.28 AU, scaled to a constant 1 AU orbit (corresponding to roughly 715 km) already fulfils the spatial resolution as well as the cadence requirements. PHI's restricted field of view of 16.8′ has to be doubled to observe the full solar disk at 1 AU, which has to be resolved in further development. With this setup PIM will be able to observe the spectral line of neutral iron at 617.3 nm with a spatial resolution of about 720 km in the photosphere and a cadence of 45-60 s, which are both within the requirements presented earlier in Table 1. The soon-to-be flight-proven PHI instrument on board Solar Orbiter will provide a good base for developing the PIM instrument. The design changes would mostly be focused on the adaptation of the single telescope to the new orbit and the extension of the field of view. A higher observation cadence, although not mandatory, could be very valuable to our understanding of erupting ARs. EUV Active Region Imager (EUVARI) - The EUV imager will provide simultaneous measurements of the full solar disk in two different wavelengths on each spacecraft. Since an instrument with these properties and suitable mass does not exist, technological developments will be needed to produce an instrument that fulfils our requirements. This new instrument will consist of two telescopes. One of these will observe the lower corona in the 17.1 nm wavelength, and the other will be able to switch between 9.4 and 21.1 nm using a filter wheel. These wavelength bands cover the footpoints of the loops and the loops themselves. They can also be used to detect solar flares (see Sect. 6.2 for an operational use of those filters).
To provide more flexibility, to simplify the design and to provide some redundancy, both telescopes will be equipped with filter wheels covering each of the three wavelengths. Significant heritage is available for such an instrument: the SDO, STEREO and Solar Orbiter missions have all flown, or will fly, instruments with some of the characteristics matching our requirements. For instance, the EUV imager on Solar Orbiter will have two telescopes with a combined mass of 23.5 kg, but the filters and fields of view must be changed to suit an orbit at 1 AU. However, the development and testing of this instrument will be the greatest design challenge of the mission (see Sect. 4.3 for details). The simultaneous stereoscopic observations, at high resolution, of the photospheric magnetic field and the upper coronal structure in ARs will be the key to characterise in detail the trigger of CMEs. Coronagraph (COR) - The coronagraph (COR) is an externally occulted Lyot coronagraph, which will be capable of imaging the electron density of the solar corona through observing the polarised brightness of Thomson scattered light. To reduce instrument development costs, we can rely on the COR2 instrument flying on the STEREO spacecraft. COR will provide data on early CME propagation between 2 and 15 R⊙ for first hazard assessments. The instrument is designed to observe moderately fast CMEs moving with speeds of about 750 km/s, which covers the average initial CME speeds as seen in LASCO data (Manoharan & Mujiber Rahman 2011). Like its heritage instrument, it will produce a three-polarisation image sequence in visible light. This will enable us to observe the polarised brightness of the coronal plasma, which can be directly related to the distribution of electrons (using, e.g., the solar rotational tomography method, see Frazin & Kamalabadi 2005). To observe the average CME event, short exposure times of <4 s are necessary to avoid pixel smear. Faster events can also be resolved by binning pixels. Heliospheric Imagers (HI) - Given the success of the stereoscopic reconstruction with STEREO's Heliospheric Imager (HI; see, e.g., Lugaz et al. 2012), the same instrument is also planned to be reused aboard the OSCAR spacecraft. The HI will take over CME observation once CMEs are out of the coronagraph's field of view, following them from 12 to 277 R⊙, corresponding to an 85° field of view on the Sun-Earth line. It consists of the two cameras HI-1 and HI-2 to cover the decreasing intensity of visible Thomson scattered light during the CME's propagation. The opening angles also correspond to typical average CME widths, which have been found to range between 47° and 61° from solar minimum to maximum (Yashiro et al. 2004). Due to the different intensities, the exposure times vary between HI-1 and HI-2. Exposure times of 12-20 s with typically 150 exposures per image for HI-1, and 60-90 s exposure times with typically 100 exposures per image for HI-2, allow a nominal image cadence of 60 and 120 min for this instrument. Using the COR and HI, the volume in which CMEs can be viewed then reaches from 2 R⊙ to beyond 1 AU, as seen in Figure 3.
In-situ instrumentation
The in-situ instrumentation aboard each of the OSCAR spacecraft consists of two identical magnetometers and two different particle instruments, one measuring the solar wind particles, the other the shock-accelerated high-energy electrons, low-energy protons and alpha particles.
The overall mass of the in-situ instrumentation package is estimated to be 29.4 kg, and the power consumption will be 17.0 W. Solar Wind Particle Monitor (SWPM) - The SWPM will measure the 1D velocity distribution functions (VDFs) of solar wind protons, alpha particles and the more abundant charge states of certain heavier elements (C, O, Si, Fe) in the solar wind as well as the 3D VDF of solar wind electrons. Due to the large differences between the expected particle fluxes, the proton/alpha measurement and the heavy ion measurement will be done separately with two different sensors, SWPM-AP and SWPM-HI, both mounted on the spacecraft body looking towards the Sun. SWPM-AP will measure protons and alphas in the velocity range between 180 km/s and 2100 km/s. Thus the velocity range is extended compared to regular solar wind speeds in order to measure not only CIRs but even high CME bulk speeds. This design is directly taken from its heritage instrument ACE/SWEPAM, which has an energy-per-nucleon range between 260 eV/nuc and 36 keV/nuc and a relative energy-per-nucleon resolution of 5% (McComas et al. 1998). SWPM-HI will be able to measure the charge states of carbon, oxygen, silicon and iron with up to 100 keV/nuc in order to determine locally the stream interfaces in CIRs and transient CMEs. Finally, the third sensor, SWPM-E, will measure the solar wind electrons. As heritage instrument for SWPM-E we propose STEREO/SWEA (Sauvaud et al. 2008), which can determine 3D velocity distributions of electrons in the energy range between 1 eV and 3 keV. Like its heritage instrument, SWPM-E will be mounted on the spacecraft boom. Energetic Particle Monitor (EPM) - The EPM instrument consists of two identical sensor pairs which will both measure low-energy protons and alpha particles covering ion energies from 46 keV up to 4.8 MeV, and electron energies between 40 keV and 350 keV. One sensor pair, EPM-1, will be tilted (-45°) in longitude with respect to the Sun-spacecraft line to capture the high-energy electrons that are expected to propagate along the spiraling interplanetary magnetic field. The second sensor pair, EPM-2, will point 180° away from the first sensor. This design, implying a rough spatial resolution within the ecliptic, will enable us to (i) identify anisotropic electron events, which are the relevant ones in the context of CME shock-accelerated particles (Simnett et al. 2002) and (ii) distinguish between low-energy ions accelerated inside or beyond 1 AU. The heritage instrument is ACE/EPAM (Gold et al. 1998), but note that the sensor mounting would have to be adapted to the three-axis-stabilised OSCAR spacecraft as described above. Magnetometers (MAG) - The MAG instrument consists of two identical fluxgate magnetometers. Each of them will measure all three components of the local interplanetary magnetic field vector in the range ±200 nT with an absolute accuracy of 0.1 nT and an operational time resolution of 1 minute for space weather forecasting. Subsecond time-resolved data will be available for scientific studies of the magnetic field, such as wave and turbulence phenomena in the solar wind. The magnetometers will be synchronised and both mounted on the spacecraft boom at a distance of 1.0 m and 2.25 m from the spacecraft body, respectively. This will allow us to reconstruct the magnetic background of the spacecraft itself. The Solar Orbiter MAG (Carr et al. 2007) will serve as heritage instrument for the MAG instrument.
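The dual-magnetometer layout is what makes the spacecraft's own magnetic background removable: with two sensors at different distances along the boom, the spacecraft field (which falls off quickly with distance) and the ambient interplanetary field (essentially uniform over the boom length) can be separated. A minimal sketch of this idea is given below, assuming for illustration that the spacecraft field decays as a dipole (proportional to 1/r^3); the gradiometer algorithm actually used in flight would be more elaborate, and all numbers in the example are hypothetical.

```python
import numpy as np

# Boom-mounted sensor distances from the spacecraft body (m), as in the text.
R_INNER, R_OUTER = 1.0, 2.25

def separate_fields(b_inner, b_outer):
    """Estimate the ambient field and the spacecraft contribution per component.

    Illustrative model only: B_meas(r) = B_ambient + k / r**3,
    which gives two linear equations per field component.
    """
    w_inner, w_outer = 1.0 / R_INNER**3, 1.0 / R_OUTER**3
    k = (np.asarray(b_inner) - np.asarray(b_outer)) / (w_inner - w_outer)
    b_ambient = np.asarray(b_outer) - k * w_outer
    return b_ambient, k

# Hypothetical example: ambient field plus a spacecraft disturbance of 2 nT at 1 m.
b_sc_at_1m = np.array([2.0, -1.0, 0.5])    # nT, spacecraft field at 1 m (made up)
b_amb_true = np.array([3.0, -4.0, 1.0])    # nT, true ambient field (made up)
meas_inner = b_amb_true + b_sc_at_1m / R_INNER**3
meas_outer = b_amb_true + b_sc_at_1m / R_OUTER**3
print(separate_fields(meas_inner, meas_outer)[0])   # recovers ~[3, -4, 1] nT
```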
The combination of MAG and SWPM data will provide the necessary data for reliable CIR forecasts in real time.
Critical technology requirements
Technically, a large proportion of the risk lies in the successful implementation of the EUV Active Region Imager. To minimise it, the EUVARI telescope will be based on three already developed EUV imagers, which by the estimated time of launch will each be flight-proven. Since the data produced by this instrument are critical for mission success, redundancy in imaging at certain crucial wavelengths will be built into the design in case of damage during launch or flight. Another large proportion of the risk to our mission lies in the requirement to successfully implement major changes in the PIM instrument compared to its heritage instrument PHI: doubling the 16.8′ field of view of PHI, to ensure a full-disk field of view while maintaining the high spatial and time resolution, is the critical challenge for the successful realisation of this instrument. However, abandoning the two-telescope design of PHI should save weight and make room for PIM-specific changes, therefore keeping it compact and lightweight.
Spacecraft design
We now give a brief tour of the design of the OSCAR twin spacecraft. The system architecture and satellite design are shown in Figure 4. The OSCAR spacecraft are three-axis-stabilised spacecraft that actively point the imaging instruments towards the Sun. The Attitude Determination and Control System (ADCS) computes the attitude and, if necessary, utilises reaction wheels and lateral thrusters to alter the orientation. Solar panels are utilised to harvest energy, which is routed through the Power Control and Distribution Units (PCDUs) and stored in the batteries. The communication subsystem consists of two redundant X-band transceivers connected to a high-gain antenna and two low-gain antennas. Table 3 summarises the mass budget for each OSCAR spacecraft. The estimated mass for each spacecraft is 580.5 kg including, on top of subsystem margins, an additional margin of 20%. If one Soyuz launcher is used for both spacecraft (see Sect. 6.1), an unused mass of 28.2 kg for each spacecraft provides us with a comfortable margin.
Mass budget
For structure, thermal control, onboard computer (OBC) and data handling (DH) a margin of 10% is provisioned. A margin of 5% is assumed for the subsystems ADCS and EPS thanks to space heritage. The margins of the scientific payload and of telemetry, tracking and command (TT&C) result from the individual margins of the subsystem components, based on the standard ESA margin policy. The weight of the harness is estimated at 5% of the net spacecraft mass.
Power budget
The power system consists of three main modules: the primary module, the secondary module and the PCDUs. The primary module covers the main power harvesting in order to operate the satellite. Triple-junction solar cells with a GaInP2/GaAs/Ge composition (produced, e.g., by Spectrolab), which are designed for space missions, could be used. Each satellite would carry a solar panel area of 2.5 m² featuring an efficiency of 29.5%. When the solar panels are exposed to direct sunlight at a distance of 1 AU, the maximum power conversion achievable is calculated to be 905.1 W with the assumption of a surface temperature of 28°C. The radiation degradation of the solar cells after 7 years of operation time (end of nominal mission) is expected to be approximately 10%. The secondary module consists of the backup power stored in rechargeable lithium-ion batteries.
For a long lifetime, the batteries are discharged to 50% of the full capacity and charged to 90% of the full capacity. The total energy capacity of the battery pack is calculated to be 880 Wh with the assumption that the batteries operate at 20°C. The distribution of the power is handled by two low-power PCDUs from Thales Alenia Space, which manage the batteries and the maximum peak power tracking for power harvesting. Each PCDU is able to deliver up to 330 W. Table 4 summarises the power budget for each spacecraft of the proposed mission OSCAR. The margin for the OBC&DH, including the solid state recorder (SSR), is assumed to be 10%. For other subsystems which still require more modifications, a margin of 20% is assumed; this applies to the TT&C, EPS and propulsion systems. The margins of the scientific payload and ADCS result from the individual margins of the components they comprise, based on the standard ESA margin policy. The estimated power for each spacecraft is 855.4 W including, on top of subsystem margins, an additional margin of 20%, which provides us with an unused power of 49.8 W per spacecraft.
On board computer, data handling and telemetry
The data on board the satellite will be handled by an On Board Computer (OBC) of the type OSCAR (coincidentally) manufactured by EADS Astrium. The OBC utilises the LEON3 core and provides up to 40 MIPS at 48 MHz core frequency. With 256 MB of RAM and 512 MB of exchange memory the computer meets our requirements. Not only the telemetry data and command handling but also the execution of the ADCS algorithms and time synchronization can be performed on the OBC. The processing and analysis of acquired images is dedicated to a separate image processing unit. The images produced by the scientific instruments will generate a large amount of data (see Sect. 6.2), some of which will be required to be stored on board before being downlinked to Earth. A flight-proven EADS Astrium SSR based on flash technology could be utilised to ensure a storage capacity of 20 Tbits (~2.5 TB). The power consumption is estimated to be 60 W and its mass to be 20 kg, based on a realistic increase in SSR performance in the coming years. These specifications are included in the corresponding budgets (Tables 3 and 4) in the OBC&DH entry. The data will be downloaded to the ground station (or ground station network) periodically. Because of the large distance between the spacecraft and the Earth, the telemetry design is particularly critical for the feasibility of OSCAR. In order to provide a sufficient downlink budget, X-band communication is utilised. Two redundant transceivers with an output power of 200 W each, feeding a 1.7 m-diameter parabolic antenna, will ensure a downlink data rate of 1.4 Mbps if the ESA ESTRACK network is used, and 260 kbps if smaller 15 m ground station antennas are utilised. The total daily data budget would then be 2.35 GB and 218 MB, respectively. In both cases a signal margin of 3 dB is maintained in order to guarantee proper operation. We demonstrate in Section 6.2 how such a reasonable telemetry budget is able to meet the scientific requirements of the mission, and propose alternative budgets depending on ground station availability.
Thermal control subsystem
The role of the thermal control subsystem is to maintain all spacecraft and payload components within their required temperature limits during the mission. Table 5 shows the thermal requirements for each component of the spacecraft.
The PIM instrument inherits its properties from the future PHI instrument that will fly on Solar Orbiter. Its operational and survival temperature ranges are still not exactly known today. Similarly, the exact EUVARI instrument does not exist yet (see Sect. 4.3), and its operational temperature ranges can only be estimated. In addition, those two instruments would include their own thermal system. For these reasons, we chose not to consider them in the following preliminary analysis. A thermal analysis is generally needed to define an adequate radiator area to accommodate the maximum operational power during the hottest and coldest operational environment, without exceeding the allowed temperatures of 0°C and 30°C (see Table 5). As a first approximation it is possible to assume an isothermal and spherical spacecraft with a radius equal to the maximum dimension of the longest subsystem (~1 m), located at 1 AU from the Sun and 0.59 AU from Earth.
Table 5. Thermal requirements for the spacecraft components (operational range / survival range, °C):
- Antennas: -100 to 100 / -120 to 120
- Solar panels: -150 to 120 / -200 to 130
- MAG: -100 to 100 / -100 to 100
- SWPM: -25 to 50 / -30 to 60
- EPM: -25 to 50 / -30 to 60
- COR: 0 to 40 / -20 to 55
- HI: -20 to 30 / -60 to 60
The satellites will be heated continuously by the direct solar radiation during the whole mission. The temperature of each spacecraft depends on the balance between its absorbed, internally diffused and externally radiated thermal power. The internal power dissipation varies from 99.3 W (safe mode) to 858.2 W (nominal mode). By considering a spacecraft emittance of 0.8 and an absorptivity of the solar radiation of 0.6, one finds equilibrium temperatures of -3.85°C and 48.99°C, respectively. A radiator (with an emittance of 0.9) of 3.42 m² would be required to accommodate the internal dissipation power in normal operations. This unrealistically large radiator area motivates a more detailed thermal analysis that could follow two paths. Because the OSCAR spacecraft will conserve their orientation with respect to the Sun during the mission, heat shields could be designed on the Sun-facing part of the spacecraft to lower the internal temperature. In a perfect-shield case, a radiator of 1.98 m² would suffice to maintain the internal temperature in the operational range. Alternatively, an active internal cooling system could also be designed to accommodate the internal power dissipation. Finally, the spacecraft temperature in safe mode is very close to the survival and operational limits, hence a passive thermal control could be designed to maintain an acceptable temperature in this case. This basic thermal analysis shows that the design of OSCAR is a priori feasible.
Mission design
After the description of the instruments and spacecraft design of OSCAR, we now give insights into the mission design. We detail the possible orbits of the spacecraft and the operational phases in Section 6.1. We then propose operational modes for the scientific and forecast data (Sect. 6.2). We finally give an overview of a possible ground segment design (Sect. 6.3).
Planned orbit and operational phases
The OSCAR spacecraft will be inserted into a heliocentric orbit at a distance of 1 AU from the Sun, with one spacecraft leading the Earth and the other trailing behind it, with a separation angle of 68 ± 3° (see Fig. 2). This configuration will allow optimum observation of the solar surface to study CMEs and coronal loops, and the optimal acquisition of binocular high-resolution images as explained in Section 3.2.
Additionally, observation of the CME propagation along the way to the Earth will also be possible. Five operational phases have been identified for the entire mission: (i) the launch, (ii) a two-year spacecraft drift period, (iii) 5 years of nominal mission time, (iv) a possible mission extension and (v) potential deorbiting. The total launch mass, i.e. the sum of the dry mass (~1161 kg, see Table 3), the propellant mass (~575 kg) and the mass of the payload adapters (~270 kg), is approximately 2006 kg. A Soyuz rocket, which is capable of delivering 2200 kg to an Earth escape trajectory, has therefore been selected for the mission (the two OSCAR spacecraft also fit in the Soyuz rocket fairing, as shown in Fig. 5). After escaping Earth's gravity, the two spacecraft will each perform a 0.47 km/s delta-v manoeuvre in opposite directions, i.e. OSCAR 1 (leading Earth, see Fig. 2) will perform a retrograde burn and OSCAR 2 (trailing Earth) a prograde burn with respect to their heliocentric orbit. This marks the start of a 2-year drift phase for both spacecraft. At the end of this phase each spacecraft will have reached its final target position relative to Earth and will require another 0.47 km/s delta-v manoeuvre to stop the drift in order to maintain the final constellation. The distance between the spacecraft and Earth will be about 0.59 AU. The launch phase will take approximately 30 min for the spacecraft to reach the Earth escape trajectory. Once they reach this point, their propulsion system will be activated and the drift phase will begin. After about 8 months the angle between the two spacecraft, as seen from the Sun, will reach 22°, which is the minimum requirement for the spacecraft to start performing observations to achieve part of our mission objectives (Sect. 3.2). Additionally, at this stage we can start evaluating and, if necessary, optimising the data handling and propagation of data from the ground stations to the various data centres that are mentioned in Section 6.3. The scheduled science operation phase is 5 years. However, since we have considered generous margins for the propellant and the power budgets, an extension of the mission duration is optional. After the end of the mission both spacecraft will be transferred to a disposal orbit around the Sun with a semi-major axis of 0.99 AU.
6.2. Operational mode and ground segment
6.2.1. CME trigger data
Since our primary objective is to study the trigger mechanism(s) of CMEs in high detail, and we can only transmit a limited amount of data to ground stations due to the satellites' distance from Earth, onboard autonomy is clearly required. Instead of sending data from all instruments at full cadence and full resolution, our satellites will perform onboard CME trigger event detection using one of the EUV telescopes. We will use a dedicated image processing unit as well as customisable CME trigger event detection software. The following operational mode (summarised in Fig. 6) shall enable OSCAR to fulfil its primary objective in spite of the telemetry limitations. Both EUVARI and PIM telescopes shall continuously buffer images at full resolution and their fastest cadence (a). The buffering shall be synchronised using ground stations on both satellites, taking into account their distance to the Sun. One EUV telescope will record images in 17.1 nm, while the other can switch between the 9.4 nm and the 21.1 nm channel. The trigger detection shall be based on the detection of strong flares in the 9.4 nm channel.
Whenever a strong flare is detected (b.1), the satellite will change the 9.4 nm filter to 21.1 nm for the next hour (c). This ensures that we always have 17.1 nm images available and, most of the time, also 21.1 nm images right after an event is detected. After 1 h the satellite will switch back to the 9.4 nm channel to continue the online flare detection (d). The output of the event detection is an estimate of the class of the flare, as well as the location. This meta-data will be sent back to a ground station (b.2), where, based on the trigger detection meta-data from both satellites and possibly other sources, it is decided which data from both the EUVARI and PIM instruments to request from both satellites (e). It is also possible to request any of the other buffered data, given external trigger detection using third-party data. Furthermore, if simultaneous CMEs were to be triggered, the data could still be retrieved since the EUV telescopes are continuously imaging the full disk of the Sun. Other possibilities for online CME trigger event detection can be based on dimmings and EUV waves, instead of on strong flares. Both dimmings and EUV waves are strongly related to the onset of CMEs (Zhukov & Auchere 2004). An advantage over the flare detector operating on 9.4 nm images would be that both EUV telescopes can continuously record in the wavelengths that are best suited for coronal loop imaging (17.1 and 21.1 nm), also for the minutes leading up to the CME trigger event. Both EUV wave and dimming detectors as well as flare detectors are currently being developed at the Royal Observatory of Belgium as part of the FP7 project AFFECTS. These can be adapted for near-real time operation on satellites. Instead of requesting full-resolution data, cropped images will be downloaded for a time period spanning from 10 min before the event until 60 min after the event was detected. This ensures that the total telemetry, given on average 200 M1 or stronger flares per year (Tang & Le 2005), does not exceed 1235 MB per day for each satellite (see the budget including regular and science data in Table 6). To retrieve this data we make use of the Deep Space Antennas (DSA) of ESA's ESTRACK network, using one timeslot of 8 h each day per satellite. The DSA consists of three 35 m-diameter antennas and is designed to ensure constant availability for spacecraft distant from Earth. It provides the highest telemetry rate available on Earth for our mission. The onboard storage can be viewed as a buffer for the present discussion, containing events of potential interest from which only a portion will be downloaded. The large onboard storage (Sect. 5.3) enables storage of 4 days of the complete science data. Depending on the available timeslots of the DSA, more data could be downloaded, other operational modes could be proposed, and calls for time allocation could then be issued. The possibility of using the smaller 15 m-diameter antennas of the ESTRACK network (in total six antennas) could also be considered for those imagers as long as this does not affect the downlink time required for the forecasting aspect of the OSCAR mission (see next section). Nevertheless, the 15 m-diameter antennas would not support the large volume of stereoscopic observations by themselves and should only be used as complementary antennas.
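The onboard operational mode described above (steps (a) to (e) and Fig. 6) is essentially a small state machine built around the flare detector. A schematic sketch of that logic follows; the function and channel names are illustrative placeholders, not the actual flight software interface.

```python
CHANNELS = {"fixed": 17.1, "switchable_default": 9.4, "post_event": 21.1}  # nm
EVENT_WINDOW_S = 3600                     # stay on 21.1 nm for one hour after a flare
CROP_BEFORE_S, CROP_AFTER_S = 600, 3600   # request 10 min before to 60 min after

def onboard_mode_step(now, state, detect_strong_flare, set_filter, send_metadata):
    """One iteration of a hypothetical onboard trigger loop.

    detect_strong_flare(): returns flare class/location or None, from 9.4 nm images (b.1)
    set_filter(nm):        moves the filter wheel of the switchable telescope
    send_metadata(dict):   downlinks the event meta-data to the ground (b.2)
    """
    # Step (d): after one hour, return to the 9.4 nm channel for flare detection.
    if state.get("event_until") is not None and now >= state["event_until"]:
        set_filter(CHANNELS["switchable_default"])
        state["event_until"] = None

    # Steps (b.1) and (c): on a strong flare, switch to 21.1 nm and report the event.
    if state.get("event_until") is None:
        event = detect_strong_flare()
        if event is not None:
            set_filter(CHANNELS["post_event"])
            state["event_until"] = now + EVENT_WINDOW_S
            send_metadata({"event": event,            # ground decides what to request (e)
                           "crop_window_s": (CROP_BEFORE_S, CROP_AFTER_S)})
    return state
```

The 17.1 nm telescope keeps imaging throughout, so full-disk coronal loop images remain available even when several events overlap, as noted in the text.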
Real-time forecasting
For near-real time forecasting we rely on the availability of 15 m antennas that receive data from both satellites every 6 h. The total telemetry for near-real time forecasting is estimated to be 47 MB every 6 h per satellite. This includes telemetry for the coronagraph (36 MB), the HI instruments (9 MB) and the in-situ measurements provided by the particle monitors and the magnetometers (2 MB). This amount of data can be transferred in less than 1 h. The 15 m antennas of the ESTRACK network could in principle be used for such telemetry. While this demonstrates the feasibility of OSCAR's forecasting objectives, it could also be considered to develop dedicated infrastructures to ensure a more reliable forecasting system.
Ground segment
The operational modes also require a specific design for the mission ground segment. A schematic of our ground segment operation is given in Figure 7. Our design involves communication with ESA's ESTRACK network, wherein the DSA is used for the CME trigger study and the 15 m-diameter antennas for the forecasting data. Our Mission Operation Center (MOC) would provide an interface between the two antenna networks and the science operation and space weather centres. It would also interact with the two satellites for data download requests. We would of course aggregate a scientific community around our CME trigger study. Specific partner research institutes would be involved in the analysis and use of the data to achieve our first mission objective. The forecast data would be directly interfaced with, e.g., the SSA Space Weather Coordination Centre (SSCC) through our MOC. A constant link between the SSCC and the OSCAR SOC would enable a good use of our forecast data. The forecast itself will be provided either by the SOC or by the SSCC, depending on the available manpower. Subsequently, the SSCC would be in charge of releasing the forecast and alerts obtained from our data. Additionally, our forecast data may be combined with other spacecraft data to provide a better insight into the 3D structure and temporal evolution of CMEs and CIRs at 1 AU.
Cost analysis and descoping option
We give here rough cost estimates of the OSCAR mission in the eventuality of a launch in the 2022-2025 time frame. Thanks to the compact design of OSCAR (Sect. 5), one Soyuz launcher can be used for the two satellites. It has the advantages of a (relatively) low cost and high reliability. A near-equatorial launch location is compatible with the orbit design (Sect. 6.1) and costs 60 M€ from Kourou. Predictably, weight, power and performance are the main cost drivers. The largest cost in the mission is for the platform (255 M€), which represents 57% of the cost; orbit and attitude control and data management represent a large proportion of the platform budget. The payload cost is also considerable, estimated at 170 M€ (estimate of 1 M€/kg, see Wertz & Larson 2003). The development cost of the improved EUV imager is estimated at 5 M€ by itself. Adding the forecast (100 M€) and science (60 M€) operations costs, the total cost for the two-satellite mission is estimated around 650 M€. Should it be necessary to reduce the scope of the mission, particularly to reduce the overall cost, the following option has been investigated. Since both spacecraft have identical instrumentation, a possibility would be to launch only one of them.
To retain the objective of loop reconstruction, this single spacecraft would need to be relocated to an orbit at L5 and would rely on additional image data in the equivalent wavelengths and cadences from L1 or Earth, using already flying satellites. If this could be provided by other partners, the primary objective could be fulfilled, while investigating a different portion of the Sun's surface. However, the matching of suitable data would be a significant challenge, and data would likely need to be processed and interpolated to match in time with the mission data. In addition, the data rate would be expected to drop by 64% compared to the nominal mission plan due to the additional spacecraft distance. Although feasible, it clearly appears that this solution would seriously threaten the mission while saving only a sixth of its cost.
Conclusions
We reported a first study for an innovative space weather mission concept, OSCAR. We presented the scientific basis for a twin-spacecraft mission, leading and trailing the Earth with a separation angle of 68°. OSCAR is designed to answer fundamental questions behind the trigger of CMEs in the lower solar corona, as well as to set up a space weather forecasting system for geo-effective CMEs and CIRs. The advantage of OSCAR resides in the originality of its design, which makes it possible to tackle those two goals simultaneously at moderate cost. We furthermore detailed in this work the basic analyses for the feasibility of the OSCAR mission. We put a particular emphasis on showing that, thanks to significant heritage, such a mission requires fairly small instrument developments (the main challenge resides in producing a sufficiently light EUV imager of the lower corona) to lead to important improvements in our scientific understanding of space weather events. In addition, we sketched a full spacecraft design and proposed very simple orbital phases to achieve the required constant angular separation of 68° between the two spacecraft (see Fig. 2). In spite of the large distances involved, the telemetry needed for our mission is accessible with today's terrestrial infrastructures. It must be noted that even though the telemetry requirements may seem demanding, OSCAR would produce the necessary data for very valuable near-real time forecasts of the most dangerous space weather events. In conclusion, the design of the OSCAR mission includes for the very first time real-time predictive capabilities and provides a strong basis for the development of future space weather missions.
12,394.2
2014-09-01T00:00:00.000
[ "Physics" ]
Digital Tools for the Correct Use of the Slovenian Language in Mathematics Classes in Secondary Vocational School Electronic grammar and spelling-checking tools can be valuable for improving language skills in both spoken and written communication. These tools, such as LanguageTool and Editor in Word, offer grammar and spelling suggestions for correction, while applications such as InstaText or Grammarly can analyse word usage and sentence structure in a particular language, such as English. The aim of this article is to explore useful tools available to Slovenian students to improve their writing skills. It examines the tools that students use to improve their writing and the extent to which they use modern technology to do so. The present study is based on triangulation and a survey. The research was carried out at the Secondary Vocational and Technical School of Mechanical Engineering, where the author teaches mathematics. A total of 57 students responded to the survey. We found that students know about online portals for text improvement, but they don't use them. A few of them know about and use word editors. The reasons for this vary, including limited awareness of the usefulness of the tools, lack of motivation, or the understanding that these tools are aids rather than substitutes for developing writing skills. To address this issue, it is crucial to raise students' awareness of how electronic tools can help to improve grammar and writing skills. Encouraging their use during the learning process is essential. However, it is equally important to emphasise the importance of regular writing practice and learning from mistakes as essential components of skill development.
Introduction
Modern technology is also having an increasing impact on education. In the circumstances we experienced during the pandemic, both educators and students had to rely heavily on information and communication technologies (ICT) to facilitate teaching and learning. As a result, the education sector needs to adapt and adopt new pedagogical approaches and strategies, while educators themselves need to develop their digital literacy. To meet these evolving needs, teachers, students, and other educational staff need appropriate training and continuous professional development. This training should include subject matter expertise, pedagogical knowledge, practical skills and familiarity with the necessary infrastructure and tools. By equipping educators with the right skills, they can effectively use technology in their teaching practice and promote enhanced learning experiences (Ratheeswari, 2018). In the field of educational development, teachers are key actors in the introduction of information and communication technology (ICT) into the teaching process. The path to effective ICT integration depends on knowledge of technology. The modern teacher has a responsibility to go beyond the traditional boundaries of teaching and to use technology as a powerful tool to improve pedagogy. By fostering an understanding of the potential of technology, teachers create a new dimension of teaching in which innovation and tradition are reconciled (Bindu, 2016). It is essential that teachers are skilled in ICT, because only when a teacher has a certain knowledge of ICT can he or she pass on this knowledge to the students.
ICT has many positive benefits for learning. It can enhance educational opportunities and transform the processes of teaching and learning. In addition, teachers need to encourage students to be active learners and to engage in active knowledge construction. This involves open-ended learning situations rather than learning conditions that focus on the mere transmission of facts. ICT can also act as a tool for curriculum differentiation and is also a transformative tool for the classroom atmosphere if used effectively. Most importantly, teachers and learners must be able to use learning time effectively (Bindu, 2016). By using technology and incorporating different tools into the classroom, educators can create engaging and dynamic learning environments. These resources provide opportunities for modern teaching approaches, interactive instruction, and effective learning strategies, ultimately enhancing the educational experience for both teachers and students (Haidari & Yusof, 2020). Preparation for life in the information society should go beyond the traditional 'information technology' subject and be integrated into the curricula of various disciplines. Young people need guidance in acquiring the most up-to-date knowledge in this rapidly evolving field. In our interconnected world, the role of the educator has changed to that of a guide, a transition that is gaining momentum as technology advances. In this digital age, a teacher's role on educational platforms is not only to facilitate students' understanding of how to use information in their daily lives, but also to broaden their intellectual horizons (Tondeur, 2019). A school's responsibility is to equip students for effective participation in the information society, and this means incorporating multimedia technologies into the educational process. Using these tools, educational institutions empower students to navigate the complexities of the modern world (Kuchai et al., 2022). Numerous studies show that ICT skills have significantly improved resource efficiency, significantly reduced production costs and led to much higher demand and investment in all sectors of the economy (Habibi & Zabardast, 2020). Writing skills in any language serve to communicate and interact in everyday life. Contemporary communication underlines the paramount importance of fostering the ability to produce intelligible written expression, especially on different platforms on the Internet, but also in emails and other communication applications. Teachers are faced with the task of engaging learners in the art of writing, transcending traditional boundaries by making use of the rich repertoire of resources and tools that the digital age has provided (Espinoza-Celi & Pintado, 2020). It is not limited to the contours of a first or foreign language. It permeates all domains and crosses different subjects and activities. Becoming proficient in writing cuts across academic disciplines and provides learners with an essential skill that extends beyond the classroom.
Effective writing in a foreign language depends on three things: a deep understanding of the subject matter, a wide range of vocabulary and appropriate use of grammar (Calkins & Ehrenworth, 2016).In this sense, some authors mention that vocabulary is important for the mastery of any language (Espinoza-Celi & Pintado, 2020).Some authors recommend metacognitive strategies, which include elements such as selfplanning, self-monitoring, and self-regulation, and have the potential to significantly improve the texts produced by secondary school students.These integral activities within the metacognitive framework act as stimulators, fostering the refinement of learners' linguistic and cognitive skills in writing.Through these strategic efforts, students cultivate a heightened awareness of the intricate layers underlying their composition, resulting in the creation of texts characterised by quality and depth (Cer, 2019). Other authors recommended the use of different technological tools, exploiting their potential to promote robust student engagement and enhance understanding.Among these transformative tools, social networks, virtual learning environments and microblogging services stand out.Of these, Twitter emerges as a particularly pervasive choice due to its userfriendly interface and widespread popularity (Espinoza-Celi & Pintado, 2020).A dynamic micro-blogging platform such as Twitter has the potential to be seamlessly integrated into daily classroom activities.Its hallmark -constant communication -is a powerful factor that brings constant connectivity to the learning ecosystem.In doing so, Twitter unfolds a spectrum of possibilities: it provides an avenue for diverse work alternatives, serves as a repository of invaluable resources, and seamlessly embeds enriching content.Moreover, Twitter's inherent versatility further enhances its usefulness.Armed with the platform's capabilities, students can seamlessly curate and share an eclectic range of content.From images that vividly illustrate concepts, to audio snippets and video fragments that add dynamism, the microblogging canvas becomes an expansive space for multifaceted expression.Twitter serves as an avant-garde extension of the traditional classroom, blending with modern technology to create an enriched and interactive learning environment (Allam, Elyas, Bajnaid, & Rajab, 2017). In some of the articles, the authors argue that social networking also improves interaction between teachers and students outside the classroom, increases students' knowledge, motivation, and self-confidence, and helps to improve their vocabulary skills.The authors suggest using the Google+ platform for student writing lessons (Mohammad et. 
al, 2018).This research aims to investigate the extent to which secondary school students are familiar with and use digital tools to improve their seminar papers.The objectives of this study include several key aspects, namely: to provide an overview of the digital tools available for better writing skills, to investigate the use of these tools among secondary school students and to investigate the prevalence of language tools among students at our school.It also aimed to determine the usefulness of the tools in improving the writing of seminar papers and other assignments, and the extent to which students use the language tools available online to improve their writing and identify their preferred choices.By comprehensively investigating these objectives, we aim to shed light on the level of digital tool adoption among secondary school students.Furthermore, these findings will help to inform educational practices and strategies aimed at optimising students' use of language tools in classes. E-Tools for Better Writing There are several benefits of using ICT in the learning process.One of the benefits is the improvement of the teaching and learning experience.By using technology, teachers can create more interactive and engaging lessons that cater for different learning styles.This can help students retain information and improve their learning performance.In addition, ICT training can help teachers keep up to date with the latest teaching methods and tools, enabling them to provide a more effective and efficient learning experience for students.A key contribution of ICT in education is that it brings inclusion.Pupils with special needs are no longer disadvantaged as they have access to essential materials and special ICT tools can be used by them to use ICT for their own educational needs.Children are fascinated by technology because it encourages and motivates them to learn (Ratheeswari, 2018). The integration of electronic tools, applications and software into teaching practice represents a fundamental evolution in educational methodology.It heralds a period in which technological innovations will work in synergy with traditional pedagogical approaches to increase the effectiveness of the acquisition and improvement of writing skills.By ensuring that these resources are readily available in a variety of contexts and settings, educators can achieve richer and more flexible learning pathways (Amponsah & Stonier, 2020). E-Tools for Foreign Language Corpora are a technological tool in the field of language learning, mainly due to their extensive lexicographic resources (Frankenberg et al., 2019).The corpus is primarily about explanation, presented as a collection of speeches, dialogues, compositions and other linguistic artefacts that learners use to unravel and explain the complexities of language (Dash & Arulmozi, 2018). The London-Lund corpus, for example, is a vast repository of some 435,000 spoken words of British English.In essence, this repository encapsulates slices of authentic linguistic expression.Within its boundaries is a mosaic of 5000-word samples.This corpus includes telephone conversations, face-to-face discourse, dialogues, lectures, and radio commentaries (Stubbs, 2001).As such, corpora such as the London-Lund corpus allow learners to immerse themselves in the vibrant landscape of language as it is naturally used in real-world contexts (Shadiev & Yang, 2020). 
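At their simplest, the corpora described above are large text collections that learners and teachers query for word frequencies and for examples of usage in context. The following sketch is only an illustration of those two basic corpus operations, a frequency list and a concordance, written in Python; the sample text and function names are hypothetical and are not tied to the London-Lund corpus or to any particular corpus interface.

    from collections import Counter
    import re

    def word_frequencies(corpus_text: str, top_n: int = 10):
        """Return the most frequent word forms in a plain-text corpus sample."""
        tokens = re.findall(r"[a-zA-Z']+", corpus_text.lower())
        return Counter(tokens).most_common(top_n)

    def concordance(corpus_text: str, keyword: str, width: int = 30):
        """List each occurrence of a keyword with a little surrounding context."""
        hits, lowered, start = [], corpus_text.lower(), 0
        while (idx := lowered.find(keyword.lower(), start)) != -1:
            hits.append(corpus_text[max(0, idx - width): idx + len(keyword) + width])
            start = idx + len(keyword)
        return hits

    sample = "The lecture was recorded. The lecture notes summarise the lecture."
    print(word_frequencies(sample, top_n=3))   # e.g. [('the', 3), ('lecture', 3), ('was', 1)]
    print(concordance(sample, "lecture"))

Even this minimal kind of query lets a learner see how often a word occurs and in which patterns it appears, which is the everyday use made of reference corpora such as the London-Lund corpus mentioned above.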
In the literature, Facebook, Twitter, Instagram, and other social networking platforms are mentioned as useful for language acquisition. Schreiber describes extensive research into the linguistic practices of a Serbian university student on Facebook; the student manoeuvred between complex linguistic variations and seamlessly intertwined different genres of English and Serbian (Schreiber, 2015).

Yundayani suggests using the graphic design tool Canva, an easy-to-use visual platform with drag-and-drop functionality. The tool gives access to a vast array of resources, including a collection of over a million photos, graphics, and fonts, as well as photo filters, icons, and shapes. Students who have integrated Canva into their workflow report clear benefits for their writing performance: by allowing them to seamlessly incorporate images, colours, photographs, fonts, and graphics, Canva gives them the means to develop their writing ideas (Yundayani et al., 2019).

Computer programmes for automatic feedback are also a useful tool. This mechanism differs from traditional personal corrective feedback (Rassaei, 2019) and can act as a substitute for teacher feedback (Li et al., 2015). When learners enter language material into an automated feedback system, it responds immediately, helping to correct grammar problems at sentence level. This approach frees teachers to focus on more complex issues (such as content and discourse) and allows learners to self-correct their work without having to consult teachers.

An example of such a tool is ChatGPT, which is useful not only for correcting essays and reviewing papers, but also for class scheduling and reminders, personalised learning, student engagement, research assistance, tutoring and support. Such tools may change the future of education. Overall, ChatGPT has the potential to increase student participation and motivation in online courses and to improve student performance. However, these roles are stated by ChatGPT itself, and although some of them are possible now, others remain potential uses for the future as its database and analytical skills, such as writing, improve (Biswas, 2023).

Mompean and Fouz-González explore the use of Twitter to refine pronunciation skills in language learning and teaching. They conclude that the students' pronunciation skills improved, a testament to the effectiveness of this digital medium in promoting linguistic progress (Mompean & Fouz-González, 2016).
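The automated feedback mechanism described above can be illustrated with LanguageTool, the grammar checker mentioned in the abstract, which also exposes an HTTP interface. The snippet below is a minimal sketch, not production code: it assumes the publicly documented /v2/check endpoint accepting 'text' and 'language' form fields, it assumes the response contains a 'matches' list with messages and replacement suggestions, and the availability of particular languages (including Slovenian, code 'sl') depends on the deployment and should be verified against the current LanguageTool documentation.

    import requests

    def check_text(text: str, language: str = "en-US"):
        """Send text to a LanguageTool-style checking service and collect suggestions."""
        response = requests.post(
            "https://api.languagetool.org/v2/check",      # assumed public endpoint
            data={"text": text, "language": language},
            timeout=10,
        )
        response.raise_for_status()
        suggestions = []
        # the response is assumed to contain a 'matches' list of detected issues
        for match in response.json().get("matches", []):
            suggestions.append({
                "message": match.get("message"),
                "offset": match.get("offset"),
                "replacements": [r.get("value") for r in match.get("replacements", [])][:3],
            })
        return suggestions

    for issue in check_text("She go to school every days."):
        print(issue)

Used this way, the tool returns sentence-level grammar suggestions immediately, which is exactly the kind of feedback loop that lets students self-correct a seminar paper before handing it in.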
Haidari highlights the many benefits associated with integrating social media and wikis and stresses that they can have a positive impact on students' writing skills. However, the researchers advise that educators need to maintain a balanced perspective and consider pedagogical aspects when planning activities, using effective pedagogical strategies and teaching and learning dynamics. Their study offers a comprehensive investigation of the use of social media and wikis and their effects on students' writing skills. The findings show that the deliberate use of different technological platforms and tools in the field of social media and wikis not only improves learners' language skills and competences, but also extends its positive impact to a wider range of competences, including teamwork, critical thinking, collaborative engagement, and collaborative problem-solving skills (Haidari et al., 2020).

WeChat has emerged as a dominant and widely used social networking application within the Chinese-speaking community. For non-native Chinese speakers who wish to grasp the nuances of the language, WeChat is an invaluable tool. Xu and Peng found that the integration of WeChat resulted in a noticeable improvement in language proficiency. In addition to quantitative progress, the study also revealed a positive atmosphere among learners, demonstrating the platform's potential to promote language growth. Thus, through technological integration and pedagogical innovation, WeChat emerges as a tool that not only refines speaking skills, but also fosters an environment of enthusiasm and receptivity among learners (Xu & Peng, 2017).

In the past decade, and especially after the pandemic, many different ICT tools have been developed, from hypertext-driven Internet platforms to interactive learning objects, audiovisual aids, forums, chats, instant messaging, blogs, whiteboards, wikis, and even the ubiquitous iPod. These tools collectively offer a blend of synchronous and asynchronous communication modalities. More recently, a noteworthy shift has occurred towards mobile phones. Their ubiquity is staggering, with most individuals possessing at least one such device, and they have found their way into various domains, from the professional realm to education, where students use them despite institutional regulations against their use in classrooms. So-called m-learning is spreading because it represents an effective teaching tool for the teacher (Vinci, 2007).

E-Tools for Slovenian Language

The impact of technological advancements extends to writing and learning the Slovenian language, providing teachers with access to electronic text collections and portals. However, the utilization of these resources still relies on individual teachers and their technological proficiency, familiarity with existing tools, portals, and text repositories, linguistic awareness, and willingness to stay updated with linguistic technological developments. It is essential for educators to actively monitor and embrace advances in language technology so that they can effectively integrate these resources into their teaching practices.
Significant progress has been made in the field of linguistics in Slovenia over the last decade. Various linguistic tools and resources have been developed, including Fran, Jezikovna Slovenija, Clarin, CJVTvejice and Slovenščina.eu. These initiatives have put Slovene on a par with other prominent European languages, enabling modern approaches to teaching and the widespread use of interactive materials (Verdonik, 2015). Some language tools for Slovene include (Logar et al., 2020):

• Gigafida is a large and thoughtfully built reference corpus containing 1,134,693,933 words from 38,310 texts created between 1990 and 2018. The Gigafida 2.0 corpus is a fundamental data source for modern Slovene and is used for linguistic research, language description (dictionaries, grammars) and the development of language technologies and procedures. Unlike previous editions, version 2.0 is a corpus of standard Slovene, which means that it contains mainly texts written in the standard language.

• The Šolar corpus is a language acquisition corpus in which about 350 texts have been manually corrected with error flags, and a newer version of machine markup has been applied. As a result, the formatted annotations are more reliable and new annotations are available, such as dependency syntax and named entities. The corpus is available in the CLARIN.SI concordancers, separately for the students' source texts and for the teachers' corrected texts.

• CJVTvejice is a web-based tool for placing commas. The tool is easy to use: paste up to 3,000 characters of text into the box and press the red arrow. The tool then highlights missing commas in grey and redundant commas in blue. It is designed to help with comma placement and is not a substitute for proofreading. According to tests, the software currently works correctly 94 % of the time.

• Slovenščina.eu gathers the results of the project Developing Slovenian in the Digital Environment, i.e., open-source tools for the Slovenian language in the digital environment, some of which are suitable for classroom use (e.g., a Slovenian-English translator, summarising, question answering).

• Fran is a Slovene online language portal that brings together dictionaries, Slovene language resources and collections that have been or are being developed at the Fran Ramovš Institute for the Slovene Language of the Slovenian Academy of Sciences, as well as dictionaries that have been digitised as part of the work of the Institute.

Some useful e-materials are available on various websites, such as e-manuals with videos, animations, interactive exercises and other elements (e.g. interactive workbooks, textbooks and readers on the LiliBine portal; I Textbooks, etc.), e-environments that, in addition to graphics, animations and sound effects, use gamification with virtual environments, rewards and animated characters (e.g. UČIMse), or portals that exploit language technologies for didactic purposes (the Pedagogical Grammar Portal) (Verdonik, 2015). Teaching aids are thus more accessible and can also be integrated into the classroom; examples are electronic dictionaries (SSKJ, Slovene Vocabulary, Slovene Orthography), Slovene language corpora (Fida and Fida plus, Nova beseda) and online libraries (Urbančič et al., 2021).
Methodology The approach used in our research was triangulated.It should be stressed that the design of a specific method was an evolutionary process and did not refer to any established methodological position at the beginning of the research.Later, when the research was underway, particular attention was paid to experimenting with the triangulation method, especially with a view to comparing the results. Triangulation in this article is a form of cross-questioning, or the application of a textual analysis approach to address a research question, primarily with the aim of ensuring greater credibility of the data obtained.Combining qualitative and quantitative methods to extend the evidence, increase the credibility of these findings, and validate the findings of the text analysis method with the findings of the survey (Almalki, 2016, Kuckartz, 2014). Purpose of the Research At the Secondary School of Mechanical Engineering in Škofja Loka, where I teach mathematics, teachers try to introduce innovative approaches, different innovations and use different ICT to support learning and solving different tasks.That's why I've introduced an innovation in maths lessons, where students write a seminar paper on a variety of topics.When I checked the seminar papers, I noticed a lot of mistakes in Slovenian language.Although the students have Slovenian as a subject, the seminar papers had many shortcomings in this area, so I decided to research whether our students know and use tools that help them improve their writing and correct spelling when producing various written products.As I am not the only mathematics teacher and I do not teach all classes, I limited my research to the classes and students that I teach. The purpose of this research is to investigate whether current students can independently use tools to help them improve their writing and correct spelling. The second aim was to make secondary school students aware of the tools they can use to help them improve their writing and correct spelling. Research Questions 1. Are students aware of the tools available for the correction of written products?2. Do students make use of tools for the correction of written products? Research Design In our research, we opted for a comprehensive methodology that combines text analysis with a targeted survey utilizing closed-ended questions.This approach was strategically chosen for a comprehensive evaluation of the effectiveness of e-tools in promoting the correct use of the Slovene language among students when producing seminar papers and other written work. Triangulation involves the systematic checking and assessment of written content.Triangulation aims to determine whether the use of technology and digital tools for checking and correcting written products is increasing and whether students know and use these tools. The data and articles were obtained from the relevant literature, which was searched in Google Scholar for key hits on ICT, writing improvement, writing skills, electronic and digital revision tools for better writing. In addition, we have included a survey component in our research design to gain a holistic perspective.Specifically, we adopted closed-ended questions, a methodological choice that was tailored to elicit specific and measurable responses.This way of asking effectively narrows down the possible answers to a definitive choice or a binary answer of 'yes' or 'no', allowing for a clear and concise insight into students' interactions with e-tools to improve their writing. 
In addition, we have added open-ended questions, e.g.How do you find online portals from which you can access language advice, dictionaries, corpora, and other language resources (e.g., Fran, CJVTvejice, etc.) from one place? The aim of the survey is not only to find out about students' awareness and use of e-tools, but also how satisfied they are with them.By asking for direct responses, we can quantitatively analyse the prevalence of e-tool use and determine whether learners perceive these digital tools as valuable assets in improving their language and composition skills. The combination of triangulation and survey methodology allows us to conduct a comprehensive assessment, providing both qualitative and quantitative data to draw wellinformed conclusions.The results from the text analysis, complemented by the insights garnered from the survey, will help us ascertain the extent to which e-tools contribute to the correct utilization of the Slovenian language by students in their seminar papers. Research The main aim of this research was to investigate the level of familiarity of grammatical and written digital tools by secondary school students in Slovenia at the Secondary Vocational and Technical School of Mechanical Engineering.The aim was also to check whether students were familiar with different tools for improving the Slovenian language, spelling and grammar checking tools and translation tools. The survey was anonymous using the Slovenian online survey portal 1ka.A total of 57 students from the 1st to 4th year of a secondary engineering school participated (aged 15 -18 years), representing various fields of study.All respondents were male, as most secondary engineering schools are attended by boys.11 % of students have the highest score in Slovenian language subject and 2 % the lowest.The rest are somewhere in between: 16 % have a very good grade, 46 % a good grade and 26 % a fair grade. 
A closer look at the data reveals that a remarkable 67 % of students are aware of the range of online portals offering language advisors, dictionaries, corpora, and other language resources, as shown in Figure 1. However, the actual use of these resources is different, at a surprisingly modest 23 %, as shown in Figure 2. Among the subset of students who have engaged with these online portals, there are further nuances. Some 12 % praise the usefulness of these digital resources, underlining their effectiveness in improving language skills. Meanwhile, 10 % are satisfied with their experience, while 4 % are slightly dissatisfied. A comparable 4 % express complete dissatisfaction, although without specifying the problems or ambiguities they encountered in the dictionaries. This range of statistics encapsulates a mosaic of perceptions and interactions and sheds light on the complex interplay between students' awareness, use of, and satisfaction with these online language portals.

From the perspective of many educators, writing is often seen as one of the more difficult productive language skills to acquire and subsequently teach. Its intricate communicative process requires meticulous precision and a nuanced focus on accuracy. With the continuing development of technology and the increasing ubiquity of computing resources, the dual function of the computer as both a carrier of feedback and a channel for its transmission has become increasingly important in both practice and research. This growing importance is partly due to the rapid development of educational technologies. At the same time, the significant increase in the availability of distance learning courses and the emergence of online research supervision have acted as catalysts, further enhancing the role of the computer as a central medium for providing feedback and facilitating its differentiated delivery. Students have the potential to make significant improvements in their writing through computer revision, even in the absence of external feedback (Hyland & Hyland, 2006).

Figure 2. Do you use online portals from which you can access language advice, dictionaries, corpora, and other language resources (e.g., Fran, CJVTvejice, etc.) from one place?

As far as tools for improving the Slovene language in general are concerned, students use dictionaries (4 %), SSKJ (Dictionary of the Slovene Literary Language) (4 %), Fran (4 %) and textbooks and books (2 %). Some authors have also noted that electronic dictionaries embedded in digital learning tools play a key role in the unpacking of online texts. These digital lexicons turn their analogue counterparts into a fast and accessible reference. In foreign language learning, the electronic dictionary is an indispensable companion. Learners are often confronted with a wide range of unfamiliar words as they wade through complex passages of foreign language texts. An electronic dictionary can help with this, as it is accessible anywhere and at any time, making it easy for learners to decipher the meaning of obscure terms and thus grasp the essence of the subject they are dealing with (Chang et al., 2018).
In a study by Karras, a new training approach emphasising the use of online dictionaries had interesting results: participants showed a remarkable increase in vocabulary acquisition and use, confirming the quality of this approach. The click-and-click dictionary is one possible approach to improving vocabulary, and in the field of lexicography these two variants are proving to be very popular ways of achieving improvements in linguistic areas (Karras, 2016).

The spelling and grammar checker in Word was known and used by only 19 % of students, which is very low, as shown in Figure 3. Students who have used this tool are very satisfied with it (12 %), while 8 % of the respondents consider it useful. The provision of feedback, both from teachers and peers, can occasionally exhibit inconsistencies due to the inherent fallibility of human judgement. This variation in feedback presents an inherent challenge, making the identification of consistent writing problems an arduous task. As a result, students are left to grapple with the divergent messages they receive from their instructors, potentially sowing the seeds of confusion. That is why electronic correction tools, which are becoming more and more common, can be so helpful (Ranalli et al., 2017).

The online translator was known and used by 86 % of the pupils, with 17 % expressing high satisfaction, 60 % reporting satisfaction, and 6 % expressing dissatisfaction, primarily related to accuracy and comprehensiveness. This can be seen in Figure 4. Lim's research has explored the field of web portals in depth and revealed their indispensable role in guiding student interpreters through vocabulary acquisition and collection. The results showed that through the judicious integration of websites and digital resources, students were more successful in mastering vocabulary and were able to build up a rich repertoire of words and phrases more easily and quickly. In addition, the study showed that learners preferred websites in their searches. This study is an example of how technology and pedagogy can be combined and shows that web portals can also lead learners to richer vocabulary acquisition (Lim, 2014). Much research has been devoted to the development of computer programs that assist with the assessment of, and feedback on, writing skills. Some authors encourage undergraduate learners to use Grammark and Grammarly as free automated writing assessment tools to improve their writing skills (Perra & Calero, 2019).

The findings from the 17 articles demonstrate the strong impact of social media and wikis on improving learners' writing skills. This body of evidence underscores the compelling rationale for educators and students alike to actively integrate various social media and wiki platforms into teaching and learning, and the incorporation of these dynamic tools is strongly recommended to facilitate an enriched pedagogical experience (Haidari et al., 2020).

The results of this research provide an insight into the knowledge and use of digital language materials in the vocational secondary school for the classes I teach. The outcomes derived from the survey offer a deep-seated understanding of the degree of familiarity and utilization of digital language materials among our secondary school students.
Notably, the survey findings deftly underscore avenues for potential enhancement within our educational framework.The revelation that certain tools and resources remain underutilized prompts a compelling call to action. A comprehensive analysis of current research confirms an escalating trend among educators and students to use electronic tools, applications, and various software to facilitate the teaching, assessment, evaluation and correcting of writing skills.This paradigm shift towards digital pedagogy underscores the evolving educational landscape in which technological advances are used to optimise the teaching and learning of writing skills. The findings also highlight the growing potential of online portals and technologies to improve language skills.Embracing these digital platforms as facilitators of language learning and skill development could stimulate transformative changes in our pedagogical paradigms.In this digital age, fostering a culture that makes full use of these resources could be crucial in teaching writing to students who excel not only in writing term papers, but also in having a better command of the language in a variety of contexts. The researchers focused on empirical studies that provided research-based evidence of the technology's effectiveness.The technology also enables better learning outcomes for language learners in terms of outcomes, interaction, feedback, impact, motivation, and meta-linguistic knowledge.In all the studies, teachers were the main actors and facilitators (Williams & Beam, 2019).In the present study, however, students use the available electronic tools on their own initiative to revise their own written products. By implementing these strategies in a thoughtful way, we can achieve a situation where learners are not just passive recipients of knowledge, but active participants who use technological tools to enhance their literacy skills.This research points us towards an approach to education in which innovative digital resources are seamlessly integrated with traditional methods, enabling learners to complement and enhance their writing and language expression in a more holistic way.In addition, modern education is characterised by the need for ubiquitous access, and it is increasingly important that these electronic resources transcend geographical and time constraints.The ability to interact seamlessly with these tools, inside or outside the classroom, is a key factor in enabling a flexible and dynamic learning experience. Conclusion This study investigated students' knowledge and use of Slovenian tools in their writing. Triangulation was also used in this area.The results show that a large proportion of students are familiar with portals aimed at improving writing skills.So however, we find that a significantly smaller proportion of students actively use these tools.The survey also showed that, on average, students find these tools good and useful. 
Regarding the first research question, whether students are familiar with tools to improve their writing, it turns out that students are overwhelmingly familiar with electronic tools on the web.This is confirmed in many studies, such as Shadiev's study (Shadiev & Yang, 2020).This is consistent with other studies, such as Williams, who found that the integration of technology into the writing classroom led to a discernible improvement in students' compositional techniques and writing skills, while also strengthening their understanding and application of emerging literacies.Students engaged in a dynamic realm in which they conceived, created, and presented a range of multimodal and digital compositions.These creations, diverse in form and rich in content, served as conduits for the embodiment of their insights into literary works and the exploration of contemporary social justice issues.The use of technology has positive aspects such as motivating students to engage and participate in writing tasks and has increased social interaction and collaboration among peers (Williams & Beam, 2019). Although a significant number of students indicate that they are familiar with these tools, they do not use them as much as teachers do when engaging in the writing process.In relation to the second research question, whether students use tools to improve their writing, students overwhelmingly do not use online electronic tools on their own, nor do they use the text and grammar editor in Word.This is not consistent with the studies (Vinci, 2007, Shadiev & Yang, 2020, Ratheeswari, 2018).In the studies, however, students do use these tools in class, where they do so together with the teacher or at the teacher's request.In our study, however, students had to use these tools on their own initiative. The survey concludes with a key finding -tools designed to improve writing skills are rarely used by students in independent work.This key finding highlights an important pedagogical dimension.It underlines the need for teachers to take a proactive role in introducing these tools into the classroom environment.Guiding students in the art of using these tools purposefully and accurately should be the cornerstone of pedagogical efforts. Teachers need to embrace the development of digital literacy and systematically introduce technology as a tool to improve learning outcomes.In the area of writing, students should have access to online tools that allow them to excel not only in writing term papers, but also in the broader area of effective communication. The findings collected during the review supported the assertions that technology has contributed to enhancing language learners' performance across areas such as outcomes, interaction, feedback, impact, motivation, and meta-linguistic knowledge (Bindu, 2016, Shadiev & Yang, 2020). ICT should be used thoughtfully in the classroom, according to learning objectives, appropriate methods, and the student population.ICT should therefore be chosen so that students use these tools to help themselves and acquire the knowledge and skills they will need in later life. 
Limitations and Future Directions

While this study provides insights into learning writing skills with digital tools, it is important to acknowledge certain limitations that define the scope of its findings. One limitation is the relatively modest and focused sample, which consisted exclusively of pupils in the classes I teach. While this focus allowed for in-depth exploration within a controlled setting, it inevitably raises questions about generalizability to a broader population.

A notable avenue for future research is the consideration of alternative approaches to teaching. The results of this study are influenced by the unique context and methods employed, and they would undoubtedly have taken on different contours if the students had been guided through a subsequent phase of text revision. Such a revision process, with the deliberate integration of specific dictionaries, corpora, and other electronic tools, could provide a deeper understanding of students' grasp of language nuances and their ability to use these digital resources to refine their written expression. While the present study offers an insight into the intersection of written language skills and digital tools in a selected group of learners, it also poses a challenge for further research. In the future, we would like to conduct a study on a larger sample, in which more detailed instructions would be given to the students.

Figure 1. Do you know of any online portals from which you can access language advice, dictionaries, corpora, and other language resources (e.g., Fran, CJVTvejice, etc.) from one place?
Figure 3. Do you know and use the spelling and grammar checker tool in Word?
Figure 4. Do you use an online translator?
8,512
2023-12-05T00:00:00.000
[ "Mathematics", "Computer Science", "Education", "Linguistics" ]
Evaluation and Commissioning of Commercial Monte Carlo Dose Algorithm for Air Cavity The purpose of this study was to compare the Pencil Beam (PB) with Monte Carlo (MC) calculated dosimetric results using phantoms for air cavity region. Measurements in Tough water phantom with air gaps were used to verify the calculated dose. The plane-parallel ionization chamber was moved from 2 mm to 20 mm behind air gap. Calculations were performed for various air gaps (1.0, 2.0, 3.0 and 4.0 cm) and field sizes (4.2 × 4.2, 6.0 × 6.0 and 9.8 × 9.8 cm). The lateral missing tissue measurement was performed using the radiochromic RT-QA film. Dose difference between PB and chamber measurement near an air gap was greater for smaller field size, larger air gap thickness, and shallower depth behind air gap. As the distance from the phantom edge became shorter, the dose differences of the PB calculation and film measurement became larger. MC calculations were found within 3% agreement to the measured dose distributions. Our results demonstrate an excellent agreement between ionization chamber and radiochromic RT-QA film measurements and MC calculations. Introduction The effect of heterogeneous corrections is an important issue that has increasingly drawn the attention of the medical physics community for last several years.In the report of Task Group No. 65 of the Radiation Therapy Committee of the American Association of Physicists in Medicine, inhomogeneity correction algorithms were categorized according to the level of anatomy sampled for scatter calculation and the inclusion or exclusion of electron transport [1].The two photon dose algorithms available with the iPlan RT Dose (Brainlab, Munich, Germany) are the Pencil Beam (PB) and the Monte Carlo (MC) algorithms.Hurkmans et al. reported the limitations of dose calculations in the case of head & neck tumor using the PB algorithm [2].MC algorithm was able to predict the dose distributions with a higher accuracy [3].Many researchers have investigated the effect of air cavities on the dose distribution and dose reduction near air cavity, depending on geometry, beam energy, and field size using various MC codes in water equivalent phantoms [4][5][6][7]. Fragoso et al. performed an experimental verification of the iPlan v. 4.1 MC algorithm, using water-, lung-and bone-equivalent materials to investigate the differences between measured and calculated dose distributions [8].However, reports of PB and MC calculations for air cavity using commercial treatment planning system are lacking.The purpose of this study was the evaluation and commissioning of iPlan RT MC Dose algorithm for air cavity region.We compared the calculated dosimetric results between PB and MC algorithms using phantoms.In addition, we also compared the results of dose differences between PB and MC algorithms for maxillary cancer patient. 
Materials and Methods

Figure 1 shows the schematic of the experimental setup for dose measurements. Tough Water phantom slabs of 3 and 20 cm thickness were used above and below the air gap, respectively. The source-to-surface distance (SSD) was set at 950 mm. A tube voltage of 120 kV was used in the computed tomography scans to produce images of 512 × 512 pixels with a slice thickness of 1.25 mm. A Markus parallel-plate ionization chamber (PTW, Freiburg) was used to measure depth dose distributions, and dose profiles were measured using Gafchromic RTQA film. The plane-parallel ionization chamber was moved in 1 mm steps from 2 mm to 10 mm and in 2 mm steps from 10 mm to 20 mm behind the air gap. Central-axis depth doses were measured with two different approaches. First, the field size was kept constant at 4.2 × 4.2 cm² and the thickness of the air gap was varied from 1.0 cm to 4.0 cm in increments of 1.0 cm. Then, the thickness of the air gap was kept fixed at 3.0 cm for field sizes of 4.2 × 4.2, 6.0 × 6.0 and 9.8 × 9.8 cm².

The radiochromic RTQA (ISP Corp, Wayne NJ, USA) film was inserted into Tough Water phantoms using a constant field size of 9.8 × 9.8 cm². An Epson Expression 1680 desktop flat-bed document scanner was used. The calibration curve was created in the following fashion: one sheet of film was cut into 3.0 × 3.0 cm² pieces and six pieces of film were irradiated to establish each calibration curve. The pieces of film were exposed to 50, 100, 150, 200, 250 and 300 monitor units, respectively.

The plan was calculated with the iPlan RT Dose ver. 4.1.2 treatment planning system. Monitor units (MU) were determined from the prescribed dose to the isocenter, based on the PB algorithm. The plan was recalculated using the MC algorithm while keeping the same planning parameters for beam arrangement, leaf positions, isocenter position and monitor units, using the full MLC geometry simulation 'Accuracy Optimized Model' with a spatial resolution of 2 mm and a variance of 1%. The 6 MV photon beam from a Novalis shaped beam radiosurgery unit (Brainlab, Munich, Germany) was used. The charge for 200 MU was measured three times for each position of the ionization chamber and the average of these measured values was used.

We selected the treatment plan of a patient who had been treated with intensity-modulated radiation therapy (IMRT) for squamous cell carcinoma of the maxillary sinus. The gross tumor volume (GTV) was defined as any visualized gross disease. The clinical target volume (CTV) was defined as the GTV plus a 1 mm margin. The planning target volume (PTV) was defined as the CTV plus a 2 mm margin to account for tumor motion and setup uncertainty. GTV, CTV and PTV sizes were 1.5, 2.7 and 4.4 cc, respectively. A dose of 50 Gy in 10 fractions was prescribed to 95% of the PTV (D95) with the PB algorithm in this case. We recalculated the planned dose using the MC algorithm while keeping the same number of MU per beam. A dose-volume histogram (DVH) analysis was performed for the GTV, CTV and PTV. The brainstem, optic chiasm, eyes, and optic nerves were contoured as organs at risk (OARs). The following dose indices were used to evaluate plan quality: a) PTV D95, the minimum relative dose that covers 95% of the volume of the PTV; b) PTV V95, the relative volume of the PTV that receives at least 95% of the prescribed dose; c) GTV D99 and CTV D99, the minimum relative dose that covers 99% of the volume of the GTV and CTV, respectively.
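To make the dose indices defined above concrete, the following sketch shows how D95, V95 and D99 can be computed from a structure's voxel doses. The voxel values here are randomly generated placeholders, not patient data; in routine work these indices are read directly from the treatment planning system's DVH.

    import numpy as np

    def dose_index_D(doses_gy: np.ndarray, volume_percent: float) -> float:
        """D_x: minimum dose received by the hottest x% of the structure volume."""
        return float(np.percentile(doses_gy, 100.0 - volume_percent))

    def dose_index_V(doses_gy: np.ndarray, threshold_gy: float) -> float:
        """V_x: percentage of the structure volume receiving at least threshold_gy."""
        return float(100.0 * np.mean(doses_gy >= threshold_gy))

    rng = np.random.default_rng(0)
    ptv_doses = rng.normal(loc=50.0, scale=2.0, size=5000)   # hypothetical PTV voxel doses (Gy)
    prescribed = 50.0

    d95 = dose_index_D(ptv_doses, 95.0)                # PTV D95
    v95 = dose_index_V(ptv_doses, 0.95 * prescribed)   # PTV V95
    d99 = dose_index_D(ptv_doses, 99.0)                # D99, as used for the GTV and CTV
    print(f"D95 = {d95:.1f} Gy, V95 = {v95:.1f} %, D99 = {d99:.1f} Gy")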
Results

Figure 2 shows the dose distributions in a water phantom for a 4.2 × 4.2 cm² field size with an air gap. A significant dose difference can be observed between the PB and MC calculations in the re-build-up region. Figure 3 shows the calculated and measured data for various field sizes with a fixed air gap of 3 cm. A direct relationship can be observed between calculated dose and field size with a fixed air gap. The PB calculated results did not match the measured data in the re-build-up regions. At 2 mm depth, the results calculated by the PB algorithm were 16.1% higher, whereas the MC calculated results were only 0.8% higher than the measured dose for the 4.2 × 4.2 cm² field size.

Figure 4 shows the calculated and measured data for various air gaps with a fixed field size of 4.2 × 4.2 cm². An inverse relationship can be observed between calculated dose and air gap thickness with a fixed field size. The PB calculated results did not match the measured data in the re-build-up regions. At 2 mm depth, the results calculated by the PB algorithm were 20.4% higher, whereas the MC calculated results were only 3.8% higher than measured for the 3.0 cm air gap.

Figure 5 shows the dose distributions calculated using the PB and MC algorithms, and the calculated and measured dose profiles in a water phantom for the 9.8 × 9.8 cm² field size with lateral missing tissue. A significant dose difference can be observed between the PB and MC calculations near the air gap. The PB results were 7.5% higher and the MC results 1.3% higher than the measured dose at 10 mm from the phantom edge. As the distance from the phantom edge became shorter, the dose differences of the PB calculations became larger. The MC calculations and film measurements were in good agreement.

Figure 6 shows the comparison of dose distributions and DVHs for the PB and MC calculated plans. For the PB calculated plan, the dose distribution was more homogeneous than for the MC calculated plan. Table 1 compares the PB and MC calculated plans with regard to the PTV, CTV and GTV. D95 of the PTV, D99 of the GTV and D99 of the CTV using the MC algorithm were on average 26.4%, 30.7% and 25.8% lower than those with the PB algorithm, respectively. For the OARs, the dose difference between PB and MC calculations is small.

Discussion

The results of the present study indicate the calculated dose differences between the PB and MC algorithms. For both algorithms, the calculated dose increases with increasing field size. For the MC algorithm, this effect is more significant in the re-build-up region, showing a non-linear curve. For the PB algorithm, this effect is comparatively less prominent in the re-build-up region, showing a linear curve. The PB calculated dose shows a minor linear decrease in the re-build-up region with increasing air gap thickness. However, the MC calculated dose decreases more significantly in the re-build-up region with increasing air gap size, showing a non-linear curve. PB algorithms are able to account for the change of primary transmission in heterogeneous media with relatively simple models, but cannot account for the loss of electronic equilibrium near tissue-air interfaces. MC algorithms model the actual physical processes leading to dose deposition, including the secondary electron distribution [9]. The reduction in dose at points located beyond the air gap is due to a reduction in scattered radiation produced in the material placed before the air gap. The loss in scatter contribution at the point of measurement is due to the lateral spread of the scattered radiation within the air gap.
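For reference, the percentage differences quoted in the Results above are deviations of a calculated dose from the measured dose at the same point, i.e. of the form 100·(Dcalc − Dmeas)/Dmeas. A minimal sketch with hypothetical dose values (not the measured data of this study):

    def percent_difference(calculated_dose: float, measured_dose: float) -> float:
        """Percentage deviation of a calculated dose from the measured dose."""
        return 100.0 * (calculated_dose - measured_dose) / measured_dose

    # hypothetical doses (Gy) at a point in the re-build-up region behind an air gap
    measured, pencil_beam, monte_carlo = 1.00, 1.20, 1.04
    print(percent_difference(pencil_beam, measured))   # 20.0, PB-like overestimate
    print(percent_difference(monte_carlo, measured))   # 4.0, MC-like overestimate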
A re-build-up region occurs as electrons are once again generated in water-equivalent material. Allen et al. reported that the dose reduction near an air cavity was greater for smaller field sizes, higher energies, larger air cavity sizes, and shallower depths in water where the air cavity was situated [10]. Klein et al. showed that behind a 2.0 cm wide air channel, for a 4 MV X-ray beam with a 4 × 4 cm² field, there was an 11% underdose at the distal interface, while a 2.0 cm cubic cavity yielded only a 3% loss [4]. Petoukhova et al. reported excellent agreement of the iPlan RT Dose MC calculation with experimental data for phantoms with air cavities [11]. Behrens reported that the build-down effect was much smaller than the build-up effect and therefore not as important [12]. The dose reduction near an air cavity would have a negative clinical impact on a treatment if the region adjacent to the cavity is part of the target. Wang et al. employed the MC algorithm to assess the degree to which tissue inhomogeneities in the head & neck affect static-field conformal treatment plans. They concluded that the pencil-beam calculation corrected for primary attenuation by the equivalent path length is a sufficiently accurate method for head & neck treatment planning using 6 MV photons [13]. Yoon et al. showed that when the beams pass through the oral cavity in an anthropomorphic head & neck phantom, the average dose difference becomes significant, revealing about a 10% dose difference relative to the prescribed dose at the isocenter [14]. Waldron et al. reported significant local recurrence rates of malignant disease in two separate retrospective trials involving 29 ethmoid sinus and 110 maxillary antrum cancer cases treated with curative intent. In these studies they expressed concern about the risk to local disease control due to potential underdosing of the target because of the physical uncertainties of the dose distribution achieved when irradiating large air cavities [15,16]. A limitation of this study was the use of single-beam irradiations. However, in realistic therapy, a combination of multiple sub-fields within fields from multiple beam directions is used. Thus, the magnitude of interface dose reductions will likely be smaller than for the limited number of radiation fields used in this study.

Conclusion

In conclusion, comparison of the depth doses and dose profiles between the measurements and the MC calculations near an air gap demonstrates excellent agreement for different air gaps and field sizes. We recommend that the MC algorithm be employed for accurate dose calculations in the presence of air cavities.

Figure 1. Schematic of the phantoms used in the chamber and film experiments. (a) The black box represents the plane-parallel ionization chamber; (b) the black line represents the RT-QA film.
Figure 2. Dose distributions calculated using (a) PB and (b) MC algorithms in a Tough Water phantom for a 4.2 × 4.2 cm² field size with an air gap. Compared to PB, MC calculations provide substantially lower doses in the air gap and re-build-up region.
Figure 3. Depth dose in a Tough Water phantom for 4.2 × 4.2 cm², 6.0 × 6.0 cm², and 9.8 × 9.8 cm² field sizes with a 3 cm air gap. Solid lines show the PB calculations, dashed lines represent the MC calculations, and measured values are shown by dots.
Figure 4.
Depth dose in a Tough Water phantom for 1-4 cm air cavities with a 4.2 × 4.2 cm² field size. Solid lines show the PB calculations, dashed lines represent the MC calculations, and measured values are shown by dots.
Figure 5. Dose distributions using (a) PB and (b) MC algorithms, and (c) calculated and measured dose profiles for the lateral missing tissue phantom. Compared to PB, MC calculations provide substantially lower doses in close proximity to the air.
Figure 6. Planned dose distributions calculated by (a) PB and (b) MC algorithms for a patient with maxillary sinus cancer; (c) comparison of dose-volume histograms (DVHs) between PB (solid line) and MC (dashed line) calculated plans for the same patient. The PTV is shown as a translucent pink region.
3,192
2014-01-27T00:00:00.000
[ "Physics", "Engineering", "Medicine" ]
GLACIER VARIATIONS IN THE EUROPEAN ALPS AT THE END OF THE LAST GLACIATION

ABSTRACT. The Last Glacial Maximum in the Alps lasted from approximately 30 to 19 ka. Glaciers reached out onto the forelands on both sides of the main Alpine chains, forming piedmont lobes in the north and filling the Italian amphitheatres to the south. Pullback of glaciers from their maximum extent was underway by 24 ka. Glaciers oscillated at stillstand and minor re-advance positions for several thousand years forming Last Glacial Maximum (LGM) stadial moraines. North and south of the Alps, the various stadials cannot yet be unequivocally matched. Glaciers had receded back within the mountain front by 19-18 ka. During the early Lateglacial phase of ice decay remnants of the once huge valley glaciers that fed the piedmont lobes downwasted and were likely calving into the extensive lakes that formed in the lower valley reaches. The first Alpine-wide glacier re-advance took place during the Gschnitz stadial, 17-16 ka, which was likely a response to Europewide cooling during Heinrich event 1. By the Bølling/Allerød interstadial much of the Alps were ice-free. Glaciers advanced repeatedly to an extent several kilometers from the cirque headwalls, during the Egesen stadial in response to the Younger Dryas cold period. Egesen stadial moraines, at some sites several sets of moraines, were constructed in valleys all across the Alps. 10Be exposure dates for Egesen stadial moraines are in the range 13.5 to 12 ka. Moraines located at an intermediate position between the Little Ice Age moraines and the Egesen moraines formed at the margins of glaciers that advanced during the closing phase of the Egesen stadial or during the earliest Holocene at 10.5 ka.
Introduction

The Alps and the Jura Mountains played a key role in the birth of the Ice Age theory (Krüger, 2008, and references therein). In the Alps, as well as in Scandinavia and the UK, the heart of the question was the origin of the erratics. How could huge blocks of granite be transported away from their original bedrock outcrops in the Alps and be deposited on the limestone bedrock of the Jura? The elevation where many granitic erratics are presently located is up to almost 1400 m a.s.l., while the floor of the valley which lies between the Alps and the Jura lies at about 400 m a.s.l. Perraudin, a hunter in Oberwallis, observed that glaciers carried huge amounts of debris and left it at their margins as moraines. He realized that moraines further downvalley must record earlier, greater extents of the same glacier. He discussed his ideas with Venetz, who then convinced both de Charpentier and Agassiz. The latter became an enthusiast and is now one of the best-remembered proponents of the Ice Age theory (Krüger, 2008 and references therein). Penck and Brückner (1909) provided a particularly comprehensive overview of the state of research in the early 20th century on the glaciations of the Alps. They introduced the concept of four glaciations: Günz, Mindel, Riss, and Würm. While an extended version of the four-fold system is maintained in the forelands of the Eastern Alps (Fiebig et al., 2011 and references therein), in the Swiss sector to the west it has been abandoned. At least 15 glaciations are suggested to have occurred since the beginning of the Quaternary at 2.58 Ma (Schlüchter, 1988, 2004). Extreme changes in base level, related to the Alpine Rhine draining either predominantly to the east to the Danube or to the west into the Rhine graben, resulted in a more complex aggradation and incision configuration than on the
eastern Alpine outwash plains (Kuhlemann and Rahn, 2013).

For nearly two centuries, the past extents of glaciers in the Alps have been estimated based on mapping of moraines and other ice-marginal landforms, in conjunction with detailed study of associated sediments. Changes in equilibrium line altitudes (ELA) of glaciers are an excellent proxy of changes in climatic conditions at the time, allowing estimation of increases or decreases in summer temperature or precipitation sums (Kerschner, 2009 and references therein). In the Alps, paleo-ELAs are based on glacier surface reconstruction and an ablation area ratio of 0.67 (Gross et al., 1977; Maisch et al., 2000; Kerschner, 2009). In this paper we summarize the present state of knowledge on the end of the last glaciation and the Alpine Lateglacial stadials, including recent mapping and dating results. The timing of moraine construction has been constrained at several sites with 10Be surface exposure dating and radiocarbon dating of organic material in associated sediments. All previously published 10Be exposure ages discussed here are recalculated based on the NE North America 10Be production rate (Balco et al., 2009) with the Lm (Lal/magnetic) scaling system. For detailed locations of dated moraines and boulders the original references should be consulted. Radiocarbon dates are calibrated with OxCal 4.2 using the INTCAL13 data set (Reimer et al., 2013) and are listed as calibrated age ranges (ka cal BP). Measured radiocarbon dates are found in the given references. For discussion, all dates are presented as ka; the 10Be ages and calibrated radiocarbon dates result in directly comparable time scales.

Last Glacial Maximum stadials and termination

The maximum extent on both the northern and southern sides of the Alps was reached during MIS 2, in concert with the global Last Glacial Maximum between 26 and 19 ka (Clark et al., 2009; Shakun and Carlson, 2010; Hughes et al., 2013). A steep drop in sea level reflecting increasing global ice volumes at around 30 ka, with a more gradual lowering until 20 ka, was followed by a stepwise rise during the Lateglacial and early to middle Holocene (Lambeck et al., 2014 and references therein). The maximum ice extent on the Alpine forelands during the last glacial cycle is constrained by mapping of the outermost 'fresh' moraines or Jungmoränen (Penck and Brückner, 1909) and their associated outwash deposits (so-called Niederterrassen). The moraines formed during the last glaciation are distinguished from older, morphologically less distinct outboard moraines attributed to penultimate glacier expansions during MIS 6 or 8 (Graf, 2009; Keller and Krayss, 2010; Preusser et al., 2011; Fiebig et al., 2011; Gianotti et al., 2015). Records from sites on both the northern and southern sides of the Alps suggest advance of glaciers beyond the Alpine front at around 30 ka (Keller and Krayss, 2005; Monegato et al., 2007; Scapozza et al., 2015).

Figure 1 (caption fragment). After Ehlers and Gibbard (2004). At specific locations the present understanding of the LGM ice margin or ice surface height may differ in detail from this depiction. Piedmont lobes and Italian amphitheatres are labelled.
Figure 2. Index map of the European Alps showing locations discussed in the text. Areas above 700 m a.s.l. are shaded grey (modified from Ivy-Ochs et al., 2009). LG = Lake Geneva, LZ = Lake Zurich, LC = Lake Constance, H stands for Hohronen.
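As a rough illustration of how a paleo-ELA follows from a reconstructed glacier surface, the sketch below applies the area-ratio approach mentioned above to a hypothetical area-altitude distribution. It assumes that the 0.67 ratio denotes the accumulation-area fraction, as in the common accumulation-area-ratio (AAR) method, and the elevation bands and areas are invented for illustration only.

    import numpy as np

    def ela_from_hypsometry(band_altitudes_m, band_areas_km2, accumulation_fraction=0.67):
        """
        Estimate the equilibrium line altitude (ELA) from a reconstructed glacier's
        area-altitude distribution: the ELA is taken as the elevation above which
        'accumulation_fraction' of the total glacier area lies.
        """
        alts = np.asarray(band_altitudes_m, dtype=float)
        areas = np.asarray(band_areas_km2, dtype=float)
        order = np.argsort(alts)[::-1]             # elevation bands from highest to lowest
        cum_area = np.cumsum(areas[order])         # area accumulated downward from the top
        target = accumulation_fraction * areas.sum()
        idx = np.searchsorted(cum_area, target)    # band at which 67% of the area is reached
        return float(alts[order][min(idx, len(alts) - 1)])

    # hypothetical 100 m elevation bands of a reconstructed Lateglacial valley glacier
    bands = np.arange(1600, 3001, 100)             # band mid-points (m a.s.l.)
    areas = np.array([0.3, 0.5, 0.8, 1.1, 1.5, 1.8, 2.0, 1.9, 1.6, 1.2, 0.9, 0.6, 0.4, 0.2, 0.1])
    print(ela_from_hypsometry(bands, areas))       # 2100.0 m a.s.l. for this made-up glacier

Shifts of this ELA value between stadials, compared with the modern or Little Ice Age ELA of the same glacier, are what underpin the estimates of changes in summer temperature or precipitation cited above.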
During the Last Glacial Maximum in the European Alps (late Würm) local ice caps and extensive ice fields in the high Alps fed huge outlet glaciers that occupied the main valleys, for example, the Rhone, Reuss, Rhine, Inn, and Salzach in the northern sector and the Tagliamento, Piave, Sarca, Ticino, Dora Baltea, and Dora Riparia along the southern side (Figs. 1 and 2).Systems of transection glaciers with transfluences over many of the Alpine passes existed during the LGM (Florineth and Schlüchter, 1998;Kelly et al., 2004a;Bini et al., 2009), for example at Grimsel Pass (Fig. 3) where Rhone glacier ice flowed northward into the Aare glacier catchment.Geological evidence, such as bedrock ice-flow direction mapping, trimline mapping and locations of high elevation erratics, points to thicker ice to the south of the main (present-day) water divide (Florineth and Schlüchter, 2000;Hippe et al., 2014;Wirsig et al., submitted).The LGM Rhone glacier comprised a northern Solothurn lobe and a southern Geneva lobe (Figs. 1 and 2).The Solothurn lobe reached its furthest extent in the region of Wangen a.d.Aare as marked by broad, moraine ridges and till mantling Molasse bedrock highs. 10Be exposure dates for boulders of hornblende granite brought from the southern valleys of the Valais located along the right lateral position indicate initial glacier withdrawal no later than 24±2 ka (Ivy-Ochs et al., 2004).To the southwest, the Rhone glacier filled the Lake Geneva basin during the LGM and extended southward where it encountered the Isère, Arve and Arc glaciers whose end moraines make up the systems to the east of Lyon (Coutterand and Buoncristiani, 2006;Coutterand et al., 2009).Based on seismic stratigraphy and core sediment studies in the Lake Geneva basin submerged moraines marking LGM stadial Rhone glacier positions have been recognized (Girardclos et al., 2005;Fiore et al., 2011).The Jura Mountains hosted a local ice cap during the LGM (Buoncristiani and Campy, 2011) that downwasted with timing similar to the Rhone glacier (Ndiaye et al., 2014).The Reuss glacier comprised long narrow finger like lobes confined by the intervening Molasse ridges in the Swiss forelands in the region between the Rhone Solothurn lobe and the Linth-Rhine lobe.Reber et al. (2014) obtained a minimum 10 Be age of 22±1 ka for withdrawal of the Reuss glacier from the maximum end moraines.Keller and Krayss (2005) estimate Rhine glacier reaching of the maximum extent at Schaffhausen at no later than 24 ka cal BP. Glaciers on the northern Alpine forelands retreated step-wise with intervening stillstands or minor re-advances discussed as the LGM stadials (Hantke, 1978).Eight moraine chains attributed to three LGM stadials have been discussed for the Rhine glacier (Keller and Krayss, 2005).The several well-preserved lateral moraines along the left side of Lake Zurich mark the ice margin fluctuations during the Zurich stadial (Fig. 
4). Between the outermost Killwangen stadial position and the Zurich position, the Linth-Rhine glacier halted at the Schlieren position (Keller and Krayss, 2005). A piece of wood in delta sediments deposited in the lake formed between the Schlieren and Zurich stadial moraines was radiocarbon dated to 24.3 to 23.4 ka cal BP. Schlüchter and Röthlisberger (1995) place this as a maximum age for the Zurich stadial re-advance. Sediments reported in the early part of the last century, but subsequently removed during quarrying and construction of the road dam across Lake Zurich, suggested a brief stillstand at Hurden. This may record the terminus of a floating ice tongue pinned on the bedrock bench that separates the upper and lower Lake Zurich basins (Hantke, 1978-1983; Graf, 2009). The oldest radiocarbon date from wood in Lake Zurich cores suggests a minimum age of 18.4-17.1 ka cal BP for an ice-free lake basin (Lister, 1988). In the Austrian sector, the age of a piece of wood from the Rödschitz site shows retreat of the Traun glacier (Fig. 1) to inside the mountain front by 19.8-17.6 ka cal BP (van Husen, 1997).

Data from several of the Italian amphitheatres suggest that LGM glaciers were at their most advanced position of the last glacial cycle between 27 and 21 ka cal BP (Monegato et al., 2007; Ravazzi et al., 2012 and references therein). Based on lithostratigraphy and pollen study underpinned by 14C dating, a two-phase LGM maximum extent has been recognized at the Tagliamento glacier amphitheatre (Fig. 1). Monegato et al. (2007) found that the first advance, the Santa Margherita advance, took place between 26.5 and 23 ka cal BP. The second advance, the Canodusso advance, during which the glacier nearly reached its previous extent, occurred between 24 and 21 ka cal BP. The Remanzacco recessional moraines were constructed between 21 and 19 ka cal BP. Starnberger et al. (2011 and references therein) discuss three end moraine systems of the LGM Salzach glacier in southern Germany and adjacent Austria. The authors describe an outermost Nunreuter phase, a recessional Radegunder phase, and an innermost minor re-advance, the Lanzinger phase. Luminescence dating suggests that the second phase, which occurred between 21 and 20 ka, may correlate to the Canodusso phase of the Tagliamento amphitheatre (Starnberger et al., 2011).

The Dora Baltea glacier, whose headwaters lie in the Mont Blanc massif, filled Val d'Aosta and terminated in the Ivrea amphitheatre. The largest moraine, the Serra d'Ivrea, has been attributed to pre-last glacial cycle glaciations (Gianotti et al., 2008, 2015). At Ivrea, Gianotti et al. (2008, 2015) report four LGM stadials, named the Pavone, Bienca, Prà San Pietro and Germano stadials. The 10Be dates from the LGM moraines of the Ivrea amphitheatre indicate that pullback from the outermost Pavone moraines was underway by 24±2 ka (Gianotti et al., 2015). Similar 10Be ages were obtained for the timing of construction of the LGM moraine in the Gesso River valley in the Italian Maritime Alps (Federici et al.,
2011; submitted). Based on a compilation of radiocarbon dates, Scapozza et al. (2015) estimate that the Ticino and Adda glaciers reached their LGM furthest extent between 28.5 and 22.9 ka cal BP. At Lake Iseo, ice-free conditions after downwasting of the Oglio glacier ensued no later than 18.6 to 17.9 ka cal BP (Ravazzi et al., 2012; Baroni et al., 2014). Ravazzi et al. (2014) suggest that LGM Sarca glacier collapse in the Lake Garda region was underway by 17.7 to 17.2 ka cal BP. Dating of the shutdown of sediment delivery to the distal outwash plain in the Friuli region at around 20.4 to 19.5 ka delineates the moment of withdrawal of the Tagliamento glacier from the LGM moraine complex (Fontana et al., 2014b). Along the southern forelands a similar timing for the cessation of delivery of glaciofluvial sediments to megafans and the onset of incision into fan deposits is noted (Carton et al., 2009; Fontana et al., 2014a).

The Alpine Lateglacial

The Alpine Lateglacial began when glaciers had receded back behind the mountain front, with collapse of the piedmont lobes in the north and withdrawal from the amphitheatres in the south. The Alpine Lateglacial, 19-11.6 ka, comprised the Oldest Dryas cold period, Bølling warm period, Older Dryas cold period, Allerød warm period, and Younger Dryas cold period (e.g. Ammann et al., 1994). During this time glaciers advanced repeatedly, as recorded by moraines in many Alpine valleys. Penck and Brückner (1909) proposed a three-fold division of the Lateglacial: the Bühl, Gschnitz, and Daun stadials. This system was extended and modified, and the sequence from oldest to youngest of Bühl, Steinach, Gschnitz, Clavadel/Senders, Daun, and Egesen became an accepted paradigm (Kerschner, 2009 and references therein). Moraines are assigned to a stadial according to their relative position in a valley sequence, their morphological character and their ELA depression relative to the Little Ice Age (LIA) ELA (Maisch et al., 2000 and references therein) (Table 1). The system is based on the idea that moraines in comparable morphostratigraphic positions, with similar morphologic characteristics and similar ELA depressions, located within a homogeneous climatic region, were deposited during the same glacier advance period at around the same time.
Stagnant, downwasting remnants of the LGM glaciers filled the main valleys during the earliest Lateglacial.This period, which formerly comprised the Bühl and Steinach Lateglacial stadials (Penck and Brückner, 1909;Mayr and Hueberger, 1968), is now called the phase of early Lateglacial ice decay (Reitner, 2007).Based on detailed geomorphological and sedimentological study at the Bühl type locality in the Hopfgarten region, Reitner (2007) concluded that the ice-marginal positions at that site do not record climate-driven glacier readvances.Ice-marginal landforms, especially kame terraces (van Husen, 2000) that built up along the sides of the decaying ice masses, indicate that the glaciers were no longer responsive to climate signals from the accumulation areas (Reitner, 2007).Luminescene dating constrains this period to 17±2 ka (Klasen et al., 2007).In smaller catchments, for example in the Alpstein area of eastern Switzerland, early Lateglacial climate-related re-advances are reported (Keller and Krayss, 2005).Re-calculation of the 10 Be exposure dates from the Ponte Murata moraine in the Italian Maritime Alps and ELA determinations suggest a re-advance at around 18.5 ka that may be attributable to the Bühl stadial (Federici et al., submitted).The Magland-Tour Noire stage of the Arve glacier with an ELA depression of 800-850 and 10 Be dated to 18 ka may have formed during this phase of the early Lateglacial (Coutterand and Nicoud, 2005). 10Be ages on glacially striated bedrock located about 30 km upstream from the end moraines at Ivrea of 17-16 ka provide minimum ages for an ice-free lower Val d'Aosta (Gianotti et al., 2008(Gianotti et al., , 2015)). The main valleys, for example the Rhone, Rhine, and Inn, are markedly overdeepened, with bedrock surfaces well below sea level at some points (Hinderer, 2001;Preusser et al., 2010;Reitner et al., 2010;Durst Stücki and Schlunegger, 2013).Lack of dating of the valley fill makes estimation of the elevation of the valley floors at the time of LGM ice decay difficult.Thickness of post-LGM valley fill varies significantly both between and within valleys (Müller, 1999;Hinderer, 2001;Preusser et al., 2010).Müller (1999) suggests that in the region of the Rhine glacier diffluence at Sargans most pre-existing valley fill was eroded during the LGM; little of the pre-Last glacial cycle sediment remains at depth in the Rhine valley (Hinderer, 2001).The deepest sediments are a thin layer of till followed by hundreds of meters of lake sediment with dropstones suggesting the Rhine glacier was calving into a lake in the earliest Lateglacial, although the lake sediment remains undated.This lake, with a suggested level of 420 m a.s.l.(few to tens of meters higher than the present levels of Lake Zurich or Lake Constance), likely extended at least all the way up to Chur.A Lateglacial lake encompassing the present Lake Constance, Walensee and Lake Zurich has been suggested (Graf and Müller, 1999). In the delta and alluvial fan gravels that overlie the lake sediments in the upper Walensee reach, at a core depth of 39 m below surface, a piece of wood was dated at 14.9 to 13.9 ka cal BP (Müller, 1999). It is likely that parts of the oversteepened and suddenly ice-free valley flanks collapsed in huge rock avalanches (van Husen, 2000).For example at Almtal (Fig. 
2), the oldest dated landslide in the Alps (van Husen et al., 2007).Nevertheless, 36 Cl, 10 Be and 14 C dating has shown that none of the huge Alpine landslides (Flims, Koefels) recognizable at the surface today was ever in contact with a glacier (e.g.von Poschinger and Haas, 1997).Early Lateglacial rock slope failures may be represented by the cavernous niches along the valley walls, such as those present in the Rhone valley (Pedrazzini et al., 2015).Their deposits may lie below the hundreds of meters of post-glacial valley fill. The Gschnitz stadial represents the first real glacier re-advance as shown by sites where Lateglacial till overlies early Lateglacial mass movement deposits or outwash (van Husen, 1997).Preserved moraine evidence shows that Gschnitz stadial glaciers occupied large areas in the principal tributary valleys and in the upper reaches of the main valleys.The type locality of the Gschnitz stadial moraine at Trins (Fig. 2) in the Gschnitz valley (Penck and Brückner, 1909), was 10 Be exposure dated to 17-16 ka (Ivy-Ochs et al., 2006).ELA depression during the Gschnitz stadial was 600-700 m in comparison to the LIA ELA.The Clavadel/Senders and Daun stadials lie between the Gschnitz and Egesen stadials (Kerschner, 2009 and references therein). Key records for the early Alpine Lateglacial come from lakes Längsee and Jerzersee in southern Austria (Schmidt et al., 2009(Schmidt et al., , 2012)).In cores from Längsee, based on diatom and paleobotanical evidence constrained by radiocarbon dates from bulk gyttja, a warm period termed the Längsee oscillation (18.5 to 18.1 ka cal BP) was identified (Huber et al., 2010).This was followed by a cold period that lasted until the beginning of the Bølling/Allerød interstadial.The Längsee cold period between 17.5 and 14.6 ka cal BP may have consisted of two cold intervals separated by an intervening slightly warmer period.It is suggested that the earlier of the Längsee cold intervals (17.6 to 16.9 ka cal BP) is linked with the Gschnitz stadial (Schmidt et al., 2009(Schmidt et al., , 2012;;Huber et al., 2010). Recognition of an Oldest Dryas pollen signal and the transition from Oldest Dryas to Bølling has been reported from numerous sites in the Alps (Burga, 1998). 10Be exposure dating of glacially striated bedrock shows that no later than the beginning of Bølling the LGM ice surface had lowered to the extent that ice ceased to move across the main transfluence passes in the high Alps (Ivy-Ochs et al., 2006, Kelly et al., 2006;Dielforder and Hetzel, 2014;Hippe et al., 2014;Wirsig et al., submitted).Sedimentological and paleobotanical studies of lake and bog sediments show that not only the main valleys but also the tributary valleys were ice-free at the beginning of the Bølling or just before.This is confirmed by radiocarbon dates that calibrate to between 15.9 and 14.3 ka cal BP from sites all across the Alps (Maisch, 1987;van Husen, 1997;Keller and Krayss, 2010;Ivy-Ochs et al., 2006;Kelly et al., 2006;Baroni et al., 2014;Heiri et al., 2014). 
Daun stadial moraines often lie just downvalley from Egesen stadial moraines.They differ from the latter in having relatively broad, smoothed ridges that are poor in boulders.Consequently, they are conspicuously difficult to exposure date.Palynological data as well as 14 C dates on bulk sediment from the Albula region led to the conclusion that Daun moraines relate to an advance prior to the beginning of the Bølling warm period (Maisch, 1987).Van Husen (2000) correlates Daun advances to the Older Dryas, that lies between the Bølling and Allerød warm periods.In that case, the Daun stadial may correlate to either the Aegelsee (Older Dryas) or the Gerzensee (Intra-Allerød) cold pulse, which were recognized by del 18 O and paleobotanical evidence from lakes in northwest Swiss forelands (Lotter et al., 1992(Lotter et al., , 2012;;Ammann et al., 2013). Egesen stadial moraines (Mayr and Heuberger, 1968) are relatively abundant and easy to locate in many Alpine valleys.They are recognizable as steep-walled, blocky moraine sets located several kilometers downvalley from the LIA moraines.ELA depression with respect to the LIA ELA is between 250-350 m.Somewhat higher ELA depressions are reported from sites on the northern fringes of the Alps (Kerschner et al., 2000) as well as from the Maritime Alps (Federici et al., 2009, submitted).The Egesen stadial glacier advances are linked to the Younger Dryas cold period (Ivy-Ochs et al., 2009 and references therein).The period of extreme climate fluctuations led to construction of two to three groups of moraines at some glaciers (Maisch, 1987).The first two are termed Egesen maximum and Bocktentälli.At Julier Pass (Fig. 2), the Egesen Lagrev glacier constructed a moraine complex of sharp-crested lateral moraines enclosing a former tongue region of hummocky, ill-defined ridge sets at the terminus around 13.5-12 ka (Ivy-Ochs et al., 2009).Reconstruction of the surface of the Lagrev glacier (Fig. 5) implementing an ArcGIS automated tool (Pellitero et al., 2015) indicates an ELA depression of 220 m in comparison to the LIA ELA.The Great Aletsch glacier built single-walled, long, continuous lateral moraines during the Egesen stadial.The end position is poorly defined as the terminus was located within the Rhone valley. 10Be exposure dates from boulders on the left-lateral moraine yield ages of 12±1 ka (Kelly et al., 2004b).In contrast, the nearby and much smaller Belalp glacier fluctuated markedly building up at least four nested Egesen stadial moraines between 13-11.5 ka (Schindelwig et al., 2011).Moraines of glacier stabilization phases correlative to the Egesen stadial are reported from the southwestern Alps (Federici et al., 2009;Cossart et al., 2010;Darnault et al., 2011). 
In some valleys a moraine can be found between the Egesen moraines and the LIA moraines. These moraines record a glacier advance to just a few hundred meters further downvalley than the LIA extent, with an ELA depression of about 120 m (Kartell stadial) (Ivy-Ochs et al., 2009). The Kartell moraine stabilized at around 11.9 ka, suggesting it is related to the final phases of the Egesen stadial (Egesen III, Maisch, 1987). The left lateral moraine of the Tsidjiore Nouve glacier, located just outside the LIA lateral moraine complex, stabilized at 11.4 ka (Schimmelpfennig et al., 2012). These glacier advances, smaller in extent than during the Egesen, may relate to the Preboreal Oscillation (PBO), when cool and humid conditions struck Europe (11,300-11,150 cal BP) (Björck et al., 1997). In other valleys, the intermediate moraines have been dated to about 10.5 ka, for example at Belalp near the Great Aletsch glacier (Schindelwig et al., 2011) and at Stein glacier (Schimmelpfennig et al., 2014) (locations shown in Fig. 2). In the latter case the terminal position is difficult to identify, hampering glacier reconstruction and ELA calculation. This cold pulse may correspond to CE-1, an early Holocene cold phase identified in paleobotanical records (Haas et al., 1998; Ivy-Ochs et al., 2009; Solomina et al., 2015).

During cold periods associated with the Lateglacial stadials, rock glaciers developed in regions of the Alps with suitable topoclimatological conditions, especially in areas where the cirque floor or rockwall niche was below the regional ELA (Zasadni, 2007) (Kerschner, 1978; Kellerer-Pirklbauer et al., 2012). Relict rock glaciers several hundred meters below the present lower limit of discontinuous permafrost likely formed during the closing phase of the Egesen stadial (Kerschner, 1978; Maisch, 1987). Relict rock glaciers at even lower elevations may have formed already during the early Lateglacial, becoming relict as temperatures were no longer conducive at those elevations. 10Be dating of rock glaciers of the former category at Julier Pass and in the Larstig valley near Ötztal (Fig. 2) (Ivy-Ochs et al., 2009) shows that conditions remained cold yet became drier in the footprint of Egesen stadial glaciers towards the end of the Younger Dryas (Cossart et al., 2010, 2012; Böhlert et al., 2011; Darnault et al., 2012).
Discussion and conclusions In the overdeepened valleys, LGM glaciers likely attained ice thicknesses of 1500-2000 m, for example on the order of 2000 m in the lower portion of the LGM Rhine glacier upstream of Lake Constance (Benz, 2003) and 2500 m in the Hohe Tauern region (Reitner, 2013).On the northern forelands, piedmont lobes ranged in thickness to 400-700 m (Bini et al., 2009) depending on the elevation assumed for the glacier bed.The equilibrium line during the LGM maximum extent of the Rhine glacier has been estimated at 1000 m a.s.l.(Keller and Krayss, 2005) suggesting an ELA depression of more than 1500 m with respect to the LIA ELA of the catchment area (cf.Maisch et al., 2000).Benz (2003) calculated an LGM ELA of 950 m a.s.l.based on an ArcGIS reconstruction of the LGM Rhine glacier.This suggests a mean annual air temperature of -5.5 degrees compared to 8 degrees (between 1931-1960) and summer temperature of 3.2 degrees instead of 17.6 for Zurich (Benz, 2003). Can the ice margin fluctuations of the LGM recorded at sites on either side of the Alps be correlated north to south?Brief advances to positions just beyond ('Maximalstand' of van Husen, 1997) the main LGM maximum positions ('Hochstand' of van Husen, 1997) during the LGM ice build-up phase have been discussed (van Husen, 1997(van Husen, , 2000;;van Husen and Reitner, 2011) for the Austrian sector and for the Swiss sector (Keller and Krayss, 2005;Graf, 2009).Lack of dating control for this outermost stage hinders evaluation of its potential correlation to the well-dated Santa Margherita phase at Tagliamento (Monegato et al., 2007).It seems reasonable that the exposure ages pinpointing initiation of withdrawal from the maximum positions at 24±2 ka the Rhone glacier (Ivy-Ochs et al., 2004) and at 22±1 ka at the Reuss glacier (Reber et al., 2014) correspond to the end of the period of occupation of the Tagliamento Santa Margherita phase (26.5-23 ka, Monegato et al., 2007;Ravazzi et al., 2012Ravazzi et al., , 2014;;Gianotti et al., 2015).Starnberger et al. (2011) correlate the second LGM advance phase of the Salzach glacier to the Candusso re-advance at Tagliamento (24-21 ka).Speculation suggests that the Tagliamento Remanzzaca phase, which ended at 19 ka, would then overlap in time with the Zurich re-advance stadial of the Linth-Rhine glacier, which ended no later than 18 ka (Lister, 1988).The two major advances of the LGM maximum and the subsequent inner recessional moraine complexes found at the Tagliamento amphitheatre at this moment can be only loosely connected to the advances in the north.The inner LGM stadials of the northern side are spread somewhat farther behind the outermost maximum moraines (up to ca.50 km) than on the southern side of the Alps where outer, older arcs of moraines of the amphitheatres may have hindered expansion.Huge differences in catchment extent and hypsometry, ice volume, and lateral extent on the forelands must be taken into account. 
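The temperature contrasts quoted above can be related to ELA depressions to first order through an atmospheric lapse rate, provided precipitation is assumed unchanged. The short sketch below is only a back-of-the-envelope illustration of that reasoning; it is not the method used by Benz (2003) or Kerschner (2009), and the lapse-rate value is an assumption.

# First-order link between an ELA depression and a temperature depression,
# assuming unchanged precipitation. Illustrative only; the published
# reconstructions use full glacier-climate relationships.

LAPSE_RATE_K_PER_M = 0.0065   # assumed free-air lapse rate

def temperature_depression(delta_ela_m, lapse_rate=LAPSE_RATE_K_PER_M):
    """Approximate cooling (K) implied by an ELA lowering (m), precipitation held fixed."""
    return delta_ela_m * lapse_rate

# ELA depressions roughly as given in the text for three stadials.
for stadial, d_ela in [("LGM (Rhine lobe)", 1500), ("Gschnitz", 650), ("Egesen", 300)]:
    print(f"{stadial}: delta-ELA = {d_ela} m -> delta-T ~ "
          f"{temperature_depression(d_ela):.1f} K if precipitation were unchanged")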
LGM glaciers vanished rapidly from the forelands.A small increase in ELA, perhaps related to the slight warming of the Längsee oscillation around 19-18.5 ka (Schmidt et al., 2009), forced the long-flat foreland tongues to collapse catastrophically (Reitner, 2013).Lakes formed in the narrow, overdeepened valleys at some sites glacially dammed on the upstream ends.Glacier downwasting may have been enhanced by calving into the lakes (van Husen, 2000(van Husen, , 2004;;van Husen and Reitner, 2011).This is true for both the northern as well as the southern valleys.Scapozza et al. (2015) note that during the early Lateglacial Cugnasco stadial the Ticino glacier was calving into the paleo-lake Verbano. Valley glaciers re-advanced during the Gschnitz stadial.Systems of interconnected dendritic glaciers likely still existed (Ivy-Ochs et al., 2006).By that time, 17-16 ka, more than 80% of the LGM ice volume was gone (van Husen, 1997).The Gschnitz stadial corresponds in time to the early phase of the Längsee cold period noted in southern Austrian lake sediments (Schmidt et al., 2009(Schmidt et al., , 2012)), as well as the Ragogna cold oscillation recognized at sites in the Tagliamento amphitheatre (Monegato et al., 2007).Summer temperature during the Gschnitz stadial was 8-10 degrees colder than today, with precipitation reduced by 25-50% (Kerschner, 2009;Schmidt et al., 2009).By the beginning of the Bølling interstadial much of the Alps were free of ice. Glaciers re-advanced all across the Alps in response to the Younger Dryas cold period building up Egesen stadial moraines.In Figure 6, a longitudinal profile through Haslital (Fig. 2) for the reconstructed LGM, Gschnitz, and Egesen stadial glaciers allows comparison of the ice surface elevations.During the Egesen, temperatures were 3.5 or up to 5 degrees colder than today, depending on the precipitation values assumed, which may have been just slightly or markedly lower than today (up to 30%), respectively (Kerschner et al., 2000;Kerschner and Ivy-Ochs, 2008;Kerschner, 2009).Chironimidbased July temperature reconstructions from sites across the Alps point to a temperature depression of 3.5 to 4 degrees for the Younger Dryas, with the first half about 1 degree colder at some sites (Heiri et al., 2014 and references therein).As Egesen glaciers downwasted the continued cold conditions but decreasing precipitation sums led to the development of rock glaciers in the former tongue regions.Moraines that are upvalley and morphologically distinct from Egesen stadial moraines may record positions of glacier stabilization during the Preboreal oscillation (Kartell stadial; Kerschner, 2009).A final glacier advance around 10.5 ka was recorded at Belalp (Schindelwig et al., 2011) and at Stein glacier (Schimmelpfennig et al., 2014).This represents the last advance before glaciers in the Alps attained a size as small or smaller than today and remained so until the late Holocene (Ivy-Ochs et al., 2009;Solomina et al., 2015). 
There are three important climatic cornerstones of termination I in the Alps. The first is the moment of deglaciation, corresponding to massive downwasting on both sides of the Alps. This may be set at about 20 ka but no later than 19 ka. The second cornerstone is the Gschnitz stadial, 17-16 ka. It is the first true re-advance of glaciers after ice decay and marks an Alpine-wide climatic deterioration tracked in both paleobotanical as well as glacial morphologic patterns. Our knowledge of the Gschnitz stadial must be underpinned by detailed mapping and more dates. Only then can the patterns of glacier extent and ELA depression be revealed. This goal has been impaired by the lack of sites suitable for dating with 10Be, and lack of material for 14C dating directly related to the moraines. The third cornerstone is the Egesen stadial. As the moraines constructed during this stadial are typically blocky, this stadial has been dated with 10Be at sites ranging from the Maritime Alps in the southwest to the Tauern region in the Eastern Alps (Bichler et al., submitted). Nevertheless, many questions remain about the timing and structure of deglaciation. Details on events during the phase of early Lateglacial ice decay are known from only a few regions. Finally, the late Pleistocene/Holocene transition displays an interesting pattern: cold oscillations represented by glacier advances to positions between the Egesen and LIA extents, as well as rock glacier activity, appear to have lasted until about 10.5 ka. Although a coherent picture is emerging, there are still time ranges that require further study to bring them into focus.

Figure 1. The Alps during the Last Glacial Maximum (late Würm). Ice extent taken from Ehlers and Gibbard (2004). At specific locations the present understanding of the LGM ice margin or ice surface height may differ in detail from this depiction. Piedmont lobes and Italian amphitheatres are labelled.

Figure 3. Excerpt from the LGM map of Bini et al. (2009) showing ice extent during the LGM for the Haslital area of central Switzerland (located just slightly to the north of Grimsel Pass, which is indicated in Fig. 2).

Figure 4. Photograph of LGM stadial moraines on the left-lateral side of the Linth-Rhine glacier in the Lake Zurich region. View is to the SSE towards the Hohronen (labelled H in Fig. 2). The uppermost meadow is the Rossberg (1010 m a.s.l.), mapped as pre-LGM glacial sediments ('Riss') (Hantke, 1978). The moraine indicated by the upper red arrow is at 950 m a.s.l. and closely approximates the maximum LGM ice surface height (Bini et al., 2009). The lowermost arrow points to a Zurich stadial moraine (780 m a.s.l.); just below it lies an inner Zurich stadial moraine (740 m a.s.l.). The hills in the right foreground belong to recessional stages of the Zurich stadial.

Figure 5. Reconstruction of the Lagrev glacier at Julier Pass during the Egesen stadial (location shown in Fig. 2). Glacier reconstruction with an ArcGIS toolbox (Pellitero et al., 2015) based on the equilibrium profile model of Benn and Hulton (2010). The determined equilibrium line altitude of the glacier was 2550 m a.s.l., giving a ΔELA of -220 m in comparison to the LIA ELA. Note that the contour lines of the glacier surface have not yet been adjusted for concave shape above the equilibrium line and convex shape below it.

Figure 6. Longitudinal section through the Haslital from Grimsel Pass (location in Figs. 2 and 3) showing reconstructed ice surfaces during the LGM, Gschnitz, and Egesen stadials (modified from Wirsig et al., submitted). Ice-surface elevation is based on field mapping of ice flow direction indicators (striae, crescentic gouges), 10Be surface exposure dating, and ice flow modelling with the indicated basal shear stress values (Wirsig et al., submitted). Note the change in the trace of the profile from north-south to east-west at Grimsel Pass.
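The ice-surface reconstructions in Figures 5 and 6 rely on tools that, at their core, assume a perfectly plastic glacier whose surface slope is set by a basal shear stress. The fragment below sketches that textbook relation for a flat bed only, as a rough illustration of how a chosen basal shear stress translates into an ice-surface profile; it is not the Pellitero et al. (2015) or Benn and Hulton (2010) implementation, which handle a varying bed and valley shape factors.

import math

# Perfect-plasticity ice thickness over a flat bed:
#   h(x) = sqrt(2 * tau_b * x / (rho * g)),
# with x the distance upstream from the terminus. Illustrative sketch only.

RHO_ICE = 900.0   # kg m^-3
G = 9.81          # m s^-2

def ice_thickness(x_m, tau_b_pa=100e3):
    """Ice thickness (m) at distance x (m) upstream of the terminus for basal shear stress tau_b (Pa)."""
    return math.sqrt(2.0 * tau_b_pa * x_m / (RHO_ICE * G))

# Thickness 10, 20 and 40 km upstream of the terminus for an assumed tau_b of 100 kPa.
for x_km in (10, 20, 40):
    print(f"{x_km:>3d} km upstream: ~{ice_thickness(x_km * 1000):.0f} m of ice")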
8,733.2
2015-04-30T00:00:00.000
[ "Environmental Science", "Geology" ]
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. Introduction Undersampling of k-space acquisition allows for accelerated MR exams. Partial Fourier methods and parallel imaging exploit redundancy in k-space such as Hermitian symmetry [1] or differences in spatial sensitivity maps of multiple receive coils [2][3][4] to restore missing k-space profiles. In contrast, reconstruction techniques exploiting redundancy in the image domain depend on the information content in the image data. By exploiting transform properties of correlated image data, undersampling artifacts are removed by filtering in a transform domain. For example, in k-t methods [5][6][7], undersampling artifacts are removed by adaptive filtering of the data in the spatiotemporal frequency domain. Nonlinear transforms for MR image reconstruction have also been used for nonlinear GRAPPA [26] where the nonlinearity in the bias between ground truth and noisy GRAPPA coefficients is modeled with a polynomial kernel and transformation into a higher dimensional feature space. For dynamic imaging, CS reconstruction in a feature space with linear and quadratic terms motivated by a second degree polynomial kernel allowed for higher undersampling factors for ASL perfusion data sets [27]. Further work included kernels with radial basis functions [28] and self-learned nonlinear dictionaries [29] for enhanced sparsity in time domain. In the present work, suppression of incoherent undersampling artifacts by linear projection of nonlinearly transformed image block arrays is proposed. In each iteration, the current image estimate is subdivided into overlapping blocks. Each block is grouped with matching blocks from the image based on a preceding clustering analysis. The block array is transformed according to nonlinear Gaussian weights assigned to each block where the mapping is implicitly calculated based on kernel PCA with a Gaussian kernel. 
Denoising in the nonlinear domain is achieved by projection onto the most significant principal components followed by a backmapping into the image domain. MR image reconstruction is performed by iteratively interleaved gradient updates for consistency with the acquired k-space data and denoising in the kernel feature space. The efficacy of the reconstruction is evaluated on two-dimensional cine data of the heart. Theory Image reconstruction by denoising of matching image blocks. CS image reconstruction relies on iterative image denoising while ensuring consistency with the acquired k-space data. Early implementations employed algorithms with explicit or implicit assumptions on the underlying image such as being piece-wise constant for total variation based denoising or being smooth with a small set of discontinuities in Wavelet based image reconstruction algorithms [8]. Advanced techniques employ overcomplete dictionaries [19,20] or data-dependent transforms based on image patches to preserve image details and reduce smoothing artifacts by exploiting redundancy in substructures of image blocks [30]. To denoise a reference image block x, image blocks are sorted according to a similarity measure, e.g. based on the Euclidean distance (Fig 1a). By choosing an upper cut-off criterion, all similar image blocks are stacked and transformed in stack direction using a sparse transform, for example using the FT or a singular value decomposition. Each image block contributes equally to the transform and the upper cut-off only allows for a limited number of blocks to be used. If there are only a few image blocks with high similarity, the transform domain sparsity is deteriorated. Adding more blocks with lower similarity leads to denoising artifacts and smoothing. These limitations can be mitigated using nonlinear transforms where data-dependent transforms can be composed of all available image blocks. Employing kernel principal component analysis (PCA) with a Gaussian kernel, for example, the contribution of each image block to the transform depends on the mutual distances in a nonlinear way. Image blocks with high similarity relative to x contribute more to the transform and blocks with lower similarity contribute less, but there is no need for an upper cut-off (Fig 1b). An introduction to kernel PCA is given in the next paragraph. Kernel PCA. Kernel PCA [31] is a nonlinear extension to PCA where a linear analysis is performed in a high-dimensional nonlinear feature space. It comprises of a three step process including (1) nonlinear data mapping into feature space where data is linearly separable (Fig 2a), (2) a conventional linear PCA to project data onto the first n eigenvectors, and (3) back-mapping of data points from feature space to input space by numerical inversion of the implicit transformation (Fig 2b). In this work, multi-dimensional image blocks are stacked to vectors composing the kernel PCA input space χ. The nonlinear mapping F:χ!F from input space χ to the high-dimensional feature space F is not calculated explicitly. Instead, kernel PCA reformulates standard PCA in feature space to operate on scalar products of function values F(x) H F(y). 
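To make the block grouping discussed above concrete, and the contrast between a hard similarity cut-off and Gaussian weighting, the sketch below extracts overlapping blocks from a 2D array, ranks them by Euclidean distance to a reference block, and compares the blocks retained by a hard cut-off with the smoothly decaying weights a Gaussian kernel would assign. It is a simplified illustration of the idea, not the implementation used in this work; the block size, cut-off and kernel-width choices are arbitrary.

import numpy as np

def extract_blocks(image, size=5, stride=2):
    """Stack overlapping size-by-size blocks of a 2D image into row vectors."""
    blocks = []
    for r in range(0, image.shape[0] - size + 1, stride):
        for c in range(0, image.shape[1] - size + 1, stride):
            blocks.append(image[r:r + size, c:c + size].ravel())
    return np.array(blocks)

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
blocks = extract_blocks(image)

reference = blocks[0]
dist = np.linalg.norm(blocks - reference, axis=1)      # Euclidean distances to the reference block
order = np.argsort(dist)

# Hard cut-off: keep only the 24 most similar blocks, all weighted equally.
hard_set = order[:24]

# Gaussian weighting: every block contributes, with weight decaying smoothly
# with distance (sigma set to the median distance of the closest blocks).
sigma = np.median(dist[order[1:26]])
weights = np.exp(-0.5 * (dist / sigma) ** 2)

print("blocks kept by hard cut-off:", len(hard_set))
print("effective number of blocks under Gaussian weights:",
      weights.sum() ** 2 / (weights ** 2).sum())        # inverse participation ratio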
The dot products are evaluated directly in input space by means of Mercer's theorem, which states that any positive semi-definite, symmetric, and continuous kernel function k: χ × χ → R can be written as an inner product k(x, y) = F(x)^H F(y). Kernel PCA can then be performed by solving the kernel eigenvalue problem Mλα = K̃α, where M is the number of data vectors (image blocks x_i) in the input space and λ is the eigenvalue belonging to the eigenvector α. K̃ is given by centering [32] the kernel matrix k_ij = k(x_i, x_j) defined by all entries in the input space χ = {x_i}. The projection of a centered feature space vector F̃(x) onto the n-th principal component is given by β_n = Σ_{i=1}^{M} α_{n,i} k̃(x, x_i) [33], where α_{n,i} is the i-th component of the n-th eigenvector α_n. The projection onto the subspace spanned by the first q principal components can be written as P_q F(x) = F̄ + Σ_{i=1}^{M} γ_i F̃(x_i), where F̄ is the mean of the mapped data and γ_i = Σ_{n=1}^{q} β_n α_{n,i}.

For large numbers of data vectors in the input space and high-dimensional kernel mappings, the nonlinear mapping F typically has no analytical inverse function [34]. Approximate solutions can be found by mapping an estimate z in the input space to the feature space and updating it by optimizing a cost function for the best fit to the projected test value P_q F(x). In this study, an iterative pre-image algorithm [33] is used which minimizes the Euclidean distance in feature space ||F(z) − P_q F(x)||^2 with a fixed-point iteration scheme. For any kernel of the form k(x, y) = k(||x − y||^2), as is the case for Gaussian kernels, the iteration steps can be written as

z_{t+1} = [Σ_{i=1}^{M} γ̃_i k(z_t, x_i) x_i] / [Σ_{i=1}^{M} γ̃_i k(z_t, x_i)],   (1)

where the γ̃_i are the expansion coefficients of P_q F(x) in terms of the mapped input space vectors.

Fig 2 caption (fragment): ... can be employed to separate image data from artifacts (red circles). b) Denoising is performed by projecting the test vector x onto the first q principal components by P_q. Backmapping of the projected data is done by finding a so-called pre-image z in image space which minimizes the Euclidean distance between Φ(z) and P_q Φ(x). doi:10.1371/journal.pone.0153736.g002

Linear versus nonlinear regime with Gaussian kernels. Throughout this paper, a Gaussian kernel k(x_i, x_j) = exp(−0.5 ||x_i − x_j||_2^2 / σ^2) is used. To account for the dependency of the Euclidean distance ||x_i − x_j||_2 on the square root of the number of pixels per vector, the kernel function can be written as k(x_i, x_j) = exp(−0.5 ||x_i − x_j||_2^2 / (P σ_p^2)), where P is the number of pixels per vector and σ_p is the average distance per pixel [33]. The kernel width σ controls the degree of nonlinearity of the mapping and indicates how well the test data match the input space data χ = {x_i} [35]. In the linear limit of a very large σ, the kernel matrix k_ij = k(x_i, x_j) comprises only ones and all data contribute equally to the transform, just as for linear transforms. In the nonlinear limit of overfitting, where σ ≪ ||x_i − x_j||_2 for all x_i, x_j ∈ χ, the kernel matrix approaches the identity matrix with the canonical basis vectors as eigenvectors. In this case, the test vector x is mapped to the vector in the input space {x_i} with minimal Euclidean distance. Accordingly, the kernel width should match the scale of the structure which is to be denoised [36].

Kernel PCA input space and back-mapping to an image. In the present work, the temporal mean of the dynamic data set is removed prior to kernel PCA computations to increase the similarity between image blocks. The current image estimate is subdivided into overlapping image blocks which are stacked to vectors x_i. To reduce computational complexity and allow for tailored parameters, the blocks are grouped into N clusters using a similarity cluster analysis [37].
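A compact numerical sketch of the kernel PCA projection and Gaussian pre-image iteration outlined above is given below. It follows the standard kernel PCA recipe (centred kernel matrix, eigendecomposition, projection onto the leading components, fixed-point pre-image) rather than the authors' exact code; the input-space vectors, kernel width and number of retained components are placeholders.

import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix k(a_i, b_j) = exp(-0.5 ||a_i - b_j||^2 / sigma^2)."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / sigma**2)

def kpca_denoise(x, X, sigma, q, n_iter=30):
    """Project x onto the first q kernel principal components of the input
    space X (rows are image-block vectors) and return an approximate pre-image."""
    M = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    J = np.ones((M, M)) / M
    Kc = K - J @ K - K @ J + J @ K @ J               # centred kernel matrix
    lam, alpha = np.linalg.eigh(Kc)                  # ascending eigenvalues
    lam, alpha = lam[::-1][:q], alpha[:, ::-1][:, :q]
    alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))  # normalise feature-space eigenvectors

    kx = gaussian_kernel(x[None, :], X, sigma).ravel()
    kx_c = kx - kx.mean() - K.mean(0) + K.mean()     # centre the test kernel vector
    beta = alpha.T @ kx_c                            # projections onto the q components
    gamma = alpha @ beta + 1.0 / M                   # expansion coefficients incl. mean term

    z = x.copy()                                     # fixed-point pre-image iteration
    for _ in range(n_iter):
        w = gamma * gaussian_kernel(z[None, :], X, sigma).ravel()
        if w.sum() <= 0:
            break
        z = (w[:, None] * X).sum(0) / w.sum()
    return z

# Toy usage: denoise one 25-pixel block against 120 similar blocks.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 25))
x_noisy = X[0] + 0.3 * rng.standard_normal(25)
x_hat = kpca_denoise(x_noisy, X,
                     sigma=np.median(np.linalg.norm(X - X[0], axis=1)), q=10)
print("residual:", np.linalg.norm(x_hat - X[0]))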
A kernel PCA input space is generated for each input cluster using a maximum number of M randomly selected input space vectors from the cluster. The dimension of the input space is given by the number of voxels per block (Fig 3). An M x M kernel matrix is then populated with the selected image blocks using a Gaussian kernel function and a PCA of the kernel matrix is performed. Each image block from the subdivision is finally projected onto the first few principal components in the features space spanned by the M image blocks from the input space and subsequently mapped back to the input space using the fixed-point iteration scheme of Eq (1). The filtered image blocks are multiplied with a normalized 3D Gaussian shape function and then added together according to the voxel locations to form the next estimate of the dynamic data set. Projected MR reconstruction for nonlinear transform domains. MR reconstruction inverts a linear encoding equation where d are the acquired k-space data, E the encoding matrix including Fourier sampling and weighting with receive coil sensitivities, and m the image to be reconstructed. Image reconstruction is motivated by iterative thresholding algorithms [38,39] and consists of iteratively interleaved steps of gradient updates from the acquired k-space data and nonlinear denoising. The basic update rule is given by where P kPCA represents the kernel PCA denoising for each image block and m k the current image estimate. Algorithm 1: Pseudo-code implementation of iterative reconstruction with block-based kernel PCA denoising for dynamic data sets. Methods Data acquisition. Six fully sampled two-dimensional cine data set in short axis view of the heart were acquired on a 3.0 T scanner (Ingenia, Philips Healthcare, The Netherlands) with a 28-channel coil array. The scan parameters of the balanced SSFP sequence included a field-ofview of 270x270 mm 2 , 8 mm slice thickness, TR/TE of 3.8/1.84 ms, voxel size of 1.4x1.4 mm 2 , 45°flip angle, 192x190 acquisition matrix, and 23-28 heart phases. Noise samples were acquired prior to the scans to calculate the noise covariance matrix. All data were acquired in healthy subjects after written consent was obtained according to institutional and ethics guidelines. The study protocol was approved by the ethics committee of the canton of Zurich. Data preparation. All k-space data sets were pre-whitened with the noise covariance matrix [41] and normalized to a mean signal strength of 1 in the region-of-interest (ROI) around the heart. k-space data were compressed to 12 virtual coils [42] and normalized coil sensitivity maps were obtained with ESPIRiT [43]. Retrospective undersampling was performed in phase encoding direction using Cartesian pseudo-random undersampling [8]. Undersampling factors were 5, 6.5 and 8. Data reconstruction. The cine 2D data sets were reconstructed with k-t SPARSE-SENSE [40], k-t ℓ 1 -SPIRiT [44], block matching with Fourier filtering similar to LOST [23] and the proposed algorithm. The k-t SPARSE-SENSE algorithm minimizes with d and E as defined in Eq (2), and F t being the temporal FT. The regularization parameter λ was chosen to minimize the RMSE in the ROI of an exemplary data set. 
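The interleaving of data-consistency updates and block-wise denoising described in the Theory section above can be written as a short loop. The sketch below uses a single-coil Cartesian FFT as the encoding operator, a projection-type consistency step that leaves acquired samples unchanged, and a placeholder denoiser, so it only illustrates the structure of the iteration. It is not the authors' Algorithm 1; the operator, number of iterations and stopping rule are assumptions.

import numpy as np

def fft2c(x):   # centred 2D FFT as a toy single-coil encoding operator
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def reconstruct(d, mask, denoise, n_iter=50):
    """Iterative reconstruction: data-consistency step followed by denoising.

    d       : acquired k-space (zeros where not sampled)
    mask    : boolean sampling mask (True where acquired)
    denoise : callable image -> image (e.g. block-wise kernel PCA filtering)
    """
    m = ifft2c(d)                          # zero-filled starting estimate
    for _ in range(n_iter):
        k = fft2c(m)
        k[mask] = d[mask]                  # keep acquired data unchanged
        m = denoise(ifft2c(k))             # artifact-removal / thresholding step
    return m

# Toy usage with a do-nothing denoiser (a kernel PCA block filter would go here).
rng = np.random.default_rng(2)
truth = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.25         # roughly 4-fold undersampling
d = fft2c(truth) * mask
m = reconstruct(d, mask, denoise=lambda img: img, n_iter=10)
print("RMSE:", np.sqrt(np.mean(np.abs(m - truth) ** 2)))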
In the k-t ℓ 1 -SPIRiT reconstruction of the cine 2D data, the term was minimized with the Fourier sampling matrix E S , the multi-coil image m S , the coil-wise temporal FT F t , and G being the image domain representation of the 7x7 k x -k y SPIRiT [42] interpolation kernel derived from a 30x20 temporally averaged center of k-space. Eqs (4) and (5) were minimized using an iterative soft-thresholding [39] and a projection onto convex sets algorithm [45], respectively, both leaving the acquired data unchanged. The final image m was composed by Roemer combination of the multi-coil images m S . Thresholds for the soft-thresholding operations were chosen based on minimum RMSE in the region-of-interest of an exemplary data set. Reconstruction with block matching and Fourier filtering involved the same parameters and clustering algorithm as for the proposed algorithm. The kernel PCA filtering was replaced with soft thresholding in a Fourier domain similar to LOST. The number of blocks per cluster was restricted to a maximum of 24 to reduce smoothing artifacts. For the proposed algorithm, each iteration consisted of a data consistency and a kernel PCA denoising step comprising of 600 clusters (N) of image blocks, a maximum number of 120 blocks (M) to generate the kernel matrix per cluster and a block size of 5 pixels in each spatial dimension while using all time frames along the time dimension. The kernel width of the proposed kernel PCA filtering approach was fixed to the median of the mutual distances between the 25 closest image blocks within each cluster to achieve linear filtering between the most similar blocks and increasingly less contribution for blocks with lower similarity. The number of retained principal components was determined per image block cluster based on a two-component model for the cumulated energy in the principal components [36], with a maximum number of 20 retained principal components. Computation time for the clustering refinement was between 1s and 5s, for kernel PCA artifact removal 5s-15s per iteration on current computer hardware with 8 cores. Results Results for one exemplary cine data set comparing k-t SPARSE-SENSE, k-t ℓ 1 -SPIRiT with a temporal FT sparsifier, block matching with Fourier filtering, and the proposed kernel PCA reconstruction relative to the fully sampled reference are shown in Fig 4 for 5-fold undersampling. Image quality is compared for systolic and diastolic still frames as well as using temporal profile plots. The proposed algorithm shows less smoothing artifacts especially in the time dimension. RMSE values relative to the reference were determined in the 3D ROI as indicated. Discussion In this work, an algorithm for image reconstruction from undersampled MR data exploiting block-matching and nonlinear kernel PCA has been proposed and implemented. Images were reconstructed iteratively by interleaved gradient updates using the acquired k-space data and shrinkage of nonlinearly transformed image block arrays. Undersampling artifacts in twodimensional cardiac cine MR data were reduced and results compared favorably relative to those obtained with other CS-based reconstruction methods. Compared to linear transforms of image block arrays, the contribution of each image block to the transform with the proposed kernel PCA approach is given by a nonlinear function. The transform is implicitly calculated by kernel PCA and the artifact removal is performed by projection onto the main principal components in the nonlinear transform domain. 
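Several of the reference reconstructions above rely on soft thresholding of transform coefficients (a temporal Fourier sparsifier for k-t SPARSE-SENSE, a spatial Fourier domain for the block-matching comparison), with thresholds tuned to minimise RMSE in a region of interest. The snippet below shows that operator for a temporal Fourier sparsifier applied to an x-y-t array, together with the ROI-restricted RMSE measure; it is a generic sketch, not the k-t SPARSE-SENSE or LOST code, and the threshold value is arbitrary.

import numpy as np

def soft_threshold(c, thresh):
    """Complex soft thresholding: shrink magnitudes by `thresh`, keep the phase."""
    mag = np.abs(c)
    return c * np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)

def denoise_temporal_ft(xyt, thresh):
    """Sparsify an x-y-t array along time, soft-threshold, transform back."""
    coeffs = np.fft.fft(xyt, axis=-1, norm="ortho")
    return np.fft.ifft(soft_threshold(coeffs, thresh), axis=-1, norm="ortho")

def roi_rmse(estimate, reference, roi):
    """Root-mean-squared error restricted to a boolean region of interest."""
    diff = np.abs(estimate - reference)[roi]
    return np.sqrt(np.mean(diff ** 2))

# Toy usage: 24 "heart phases" of a 32x32 image with additive noise.
rng = np.random.default_rng(3)
clean = np.tile(rng.standard_normal((32, 32, 1)), (1, 1, 24))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
roi = np.zeros(clean.shape, dtype=bool)
roi[8:24, 8:24, :] = True
print("RMSE before:", roi_rmse(noisy, clean, roi))
print("RMSE after :", roi_rmse(denoise_temporal_ft(noisy, 0.1), clean, roi))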
Gaussian kernels employ the Euclidean distance as dissimilarity measure. Better results may be achieved using dissimilarity measures which are more suitable for MR images, if the kernel function fulfils Mercer's condition. The kernel width determines how each image block in the kernel PCA input space contributes to the transform. By choosing the median of the mutual distances of the most similar image blocks as kernel width σ, image blocks with high similarity are filtered linearly, while image blocks which are above the cut-off distance for linear filtering contribute less. The maximum number of retained principal components is calculated per cluster and based on a twocomponent model. PCA in feature space is based on a ℓ 2 penalty function which is sensitive to outliers. Especially for high reduction factors, large and correlated undersampling artifacts can already differentiate the first few principal components from the desired ones. Pre-filtering of data and employing statistically robust linear feature selection in feature space [46] could further improve artifact removal and simplify the selection of kernel width and number of principal components. Iterative thresholding algorithms have computationally economic iteration steps but require more iterations until convergence than gradient descent ℓ 1 minimization [47]. An adaption of approximate message passing [47] can reduce the maximum number of iterations and reduce reconstruction times. Further improvements in reconstruction speed can be achieved by correlating multiple image blocks at once and employing iterative kernel PCA schemes such as the kernel Hebbian algorithm [48], which also scales linearly with the sample size. The use of many computer nodes in parallel, as for example available on graphics cards, could further reduce reconstruction times. The convergence rate of the reconstruction could be increased by modifying the gradient updates with prior knowledge or gradient directions from previous iteration steps [47,49]. Conclusion Image reconstruction from undersampled data exploiting nonlinear transform domains and kernel methods is feasible and outperforms conventional k-t SPARSE-SENSE, block matching with Fourier filtering and k-t ℓ 1 -SPIRiT reconstruction. The method holds considerable potential to allow for higher acceleration factors relative to CS for a range of MR applications including cardiovascular imaging.
4,250.4
2016-04-26T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
What is the best or most relevant global minimum for nanoclusters? Predicting, comparing and recycling cluster structures with WASP@N †‡

To address the question posed in the title, we have created, and now report details of, an open-access database of cluster structures with a web-assisted interface and toolkit as part of the WASP@N project. The database establishes a map of connectivities within each structure, the information about which is coded and kept as individual labels, called hashkeys, for the nanoclusters. These hashkeys are the basis for structure comparison within the database, and for establishing a map of connectivities between similar structures (topologies). The database is successfully used as a key element in a data-mining study of (MX)12 clusters of three binary compounds (LiI, SrO and GaAs) of which the database has no prior knowledge. The structures are assessed on the energy landscapes determined by the corresponding bulk interatomic potentials. Global optimisation, using a Lamarckian genetic algorithm, is used to search for low lying minima on the same energy landscape to confirm that the data-mined structures form a representative sample of the landscapes, with only very few structures missing from the close energy neighbourhood of the respective global minima.

Introduction

The application of structure prediction in the field of clusters and nanoparticles has resulted in literally millions of structures being discovered for different compounds, systems with different magnetic ordering, systems containing different dopants, or simply systems of different sizes. [1-4] Crucially, each system can be described as an energy landscape and the initial target or targets are the location of the global minimum (GM) or the locations of low energy local minima (LM). [5] Today when one wants to study a new compound of interest within certain sets of parameters, including stoichiometry, size, environment, etc., a key question springs to mind: is it worth running new simulations that employ one or several contemporary global structure optimisation algorithms? We argue: not necessarily! Thoughtful exploitation of the available data that can be found in the literature presents a viable alternative that turns out to be the most efficient way to discover new structures, materials, and their physics and chemistry. [6-14] Similar considerations, apart from size, can be applied to crystal structures, including molecular, metallic, ionic, covalent, or hybrid organic and inorganic frameworks. [15-18]

Another problem encountered by practically every practitioner of global optimisation for structure prediction is how to ascertain that the newly discovered configuration of a particular compound is not known from competitors' studies, for example, or exists out there under the guise of a different compound of similar stoichiometry, or is not published but is known as a lower ranked local energy minimum (i.e. data that has a rank that is beyond a chosen set threshold for publication). The use of slightly different energy functions, unintentional effects of tolerances both in energy definition and local optimisation, or possibly an intentional bias to match measurable properties (for example, infrared data) will all muddle the waters further. The choice of the best (or most suitable for the investigator's purposes) cost (or fitness) function is uncertain, and could be quite different in different studies even on the same system.
To address these challenges, we have developed a database complemented by a toolkit that includes structure comparison as a key element. Aggregating structures and their properties into one place also enables the sophisticated exploration of structural motifs and particular properties and the discovery of structure-property relationships. Databases are not a new concept in materials modelling, [19-29] even in the field of nanoclusters. [30,31] Crucially, our searchable database generates a map of connections relating different structures. In this article, we describe both the database and the algorithms that generate these mappings, followed by simple showcase examples.

Web-assisted structure prediction at the nanoscale (WASP@N)

In the development of the database, our Hive of knowledge, we aimed to arm the scientific community and general public, from professional researchers to school pupils, with a new intelligent tool to search, discover and disseminate structures and properties of new nanoclusters. To allow access and interaction with the Hive, we built a web interface, which we refer to as the WASP toolkit. The mapping between structures and various properties is an essential element, or feature, of the Hive database, which is generated by algorithms that form part of a separate piece of code that we refer to as the Bee software. The Bee software runs on dedicated computing facilities. The WASP interface links the user, the Hive and the Bee software; see Fig. 1. With open access to the Hive, a number of security measures have been employed in order to protect the integrity of the data and the computing facilities from malicious attacks (to complete the analogy, we refer to unwanted visitors to the Hive as hornets).

Datasets within the Hive are organised as follows: (a) published atomic structures, the atomic coordinates of which were originally used to generate a figure (e.g. ball and stick models) or were explicitly given in a table as part of a published paper (or electronic supplementary information) that has a DOI; and (b) atomic structures generated using the Bee software. For the former, the atomic structures are labelled using the DOI of the published article they were taken from, and are uploaded as one or more concatenated xyz file(s) using an extended format that contains both the metadata saved on the comment line and the atomic structure, which includes atomic labels; Cartesian coordinates; and one additional scalar and one vector record per atom (for example, charges, spin, dipole on atom).

Searchable metadata are vital for the use of a database. Values for metadata that can be provided include the definition of energy and software, total charge, energy ranking, total spin, etc. For example, the comment line: "Name=drum; Symmetry=D3h; Definition={FHI-aims, PBE0/PBE, tight}; Energy=210Hartree; Size=6; Atoms=12; Charge=0; Spin=0; Dipole=(0,0,0)" for the cluster (ZnO)6 indicates that the user refers to the local minimum configuration as a "drum", the atomic coordinates of which have D3h point group symmetry after geometry relaxation using the FHI-aims software with the generalised gradient approximation in density functional theory in the form of the PBE exchange and correlation density functional and the tight basis set, an energy of 210 Ha with the same basis set and the hybrid PBE0 exchange and correlation density functional, a total charge and spin of zero, and no resultant dipole.
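The comment-line metadata format just described lends itself to straightforward parsing. The fragment below is a minimal, hypothetical parser for such a 'key=value;'-style comment line; the Hive/Bee software will have its own implementation, and the field names shown are simply those used in the example above.

def parse_comment_line(line):
    """Parse a 'Key=Value; Key=Value; ...' xyz comment line into a dict."""
    metadata = {}
    for field in line.split(";"):
        field = field.strip()
        if not field or "=" not in field:
            continue
        key, value = field.split("=", 1)
        metadata[key.strip()] = value.strip()
    return metadata

comment = ("Name=drum; Symmetry=D3h; Definition={FHI-aims, PBE0/PBE, tight}; "
           "Energy=210Hartree; Size=6; Atoms=12; Charge=0; Spin=0; Dipole=(0,0,0)")
meta = parse_comment_line(comment)
print(meta["Name"], meta["Symmetry"], meta["Atoms"])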
If not specified upon upload to the Hive, some of these will be calculated along with, for example, stoichiometry, topology, total mass, centre of mass, and principal moments of inertia. Non-searchable metadata, such as thumbnail ball and stick images, are generated on the fly. The dataset for each DOI string will also contain timestamp metadata (when it was uploaded or last modified) and publication metadata (authors and journal name, volume and page numbers). Generated datasets are given a DOI string by the Bee software that is based on the chosen energy definition, and their atomic configurations result from structural relaxations of all the published datasets. The essential search and comparison features of WASP enable the user to investigate structural motifs and physical properties. The comparison of clusters can be quite expensive and, therefore, comparison-based pre-searches are performed by the Bee software upon the upload of new datasets, both published and generated. A description of the algorithms employed in these comparisons is provided in the next section. The results of the pre-searches are saved as links between related structures, thus establishing their relationships. These links, or new metadata generated by the Bee software, form a map linking different structures in the database. The map can be readily exploited by the user through the WASP interface to ascertain the uniqueness of newly found configurations of clusters of a certain compound and size, or to compare clusters of different compounds. Moreover, as we will demonstrate below, this map can also help to reduce the effort needed to explore the energy landscapes of a compound that has yet to be investigated. The computational work and the interaction of the three complementary codes (WASP, Bee, and the Hive) are supported by appropriate hardware solutions, as illustrated in Fig. 1, and related operating system and server software (including a task scheduler, etc.). In the near future, we plan to expand the solution shown in Fig. 1 to include the exploitation of third-party computing platforms. Uniqueness and similarity Being able to quickly recognise similar structures, or to measure their similarity, has always been a challenge in materials modelling. 32 Consider comparing the atomic structures of two nanoclusters that are essentially the same but have either small random perturbations (noise) resulting from the applied optimisation tolerances or slight differences because of the different, but similar, density functionals employed. In the comparison procedure, the first task is to correctly align these two configurations: the translation and rotation of each cluster is fixed by positioning the centre of mass at the origin and aligning the principal axes of rotation with the chosen Cartesian axes. Hopefully, upon alignment, a one-to-one match is found for each atom in one configuration with the equivalent atom in the other. If not, then there is a combinatorial problem to solve: which combination of atom pairs minimises the sum of the distances between all pairs (a sum of zero implies a perfect match, with each atom in one configuration positioned exactly on top of the equivalent atom in the other configuration). Minimising this measure of likeness for two dissimilar nanoclusters may also require optimising the relative rotation and translation of the two nanoclusters.
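The alignment step just described (centre of mass at the origin, principal axes of rotation along the chosen Cartesian axes) can be written compactly; the snippet below is our own minimal illustration with unit atomic masses rather than the Bee code, and for symmetric clusters with degenerate moments of inertia several equivalent orientations would still need to be tried.

```python
import numpy as np

def align_to_principal_axes(coords: np.ndarray) -> np.ndarray:
    """coords: (N, 3) Cartesian positions; returns the aligned copy (unit masses)."""
    x = coords - coords.mean(axis=0)            # centre of mass at the origin
    r2 = (x ** 2).sum(axis=1)
    inertia = np.eye(3) * r2.sum() - x.T @ x    # inertia tensor for unit masses
    _, axes = np.linalg.eigh(inertia)           # columns = principal axes
    if np.linalg.det(axes) < 0:                 # keep a proper rotation (no reflection)
        axes[:, -1] *= -1
    return x @ axes                             # coordinates in the principal frame
```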
The efficiency of stochastic search algorithms (particle swarm, basin hopping, and genetic or evolutionary algorithms) that are employed to locate local minima (LM) on the energy landscape can be improved if there is a computationally cheap method that provides a measure of how similar two structures are. For example, this could be used to check whether a newly found/generated configuration is unique, whether the starting points are sufficiently spread apart for different random walkers on the energy landscape, or whether the candidate structures in the current population are sufficiently diverse for the evolutionary algorithm (otherwise inbreeding results in the population not evolving, or improving, any further). One may also want to distinguish between enantiomorphic clusters, i.e. two clusters that are mirror images of each other. One half of such a pair can easily be lost if the comparison of nanoclusters is simply based on their relative energy of formation (since both enantiomorphic clusters have identical energies). There are several approaches in the literature designed to measure the similarity between structures, 33-45 which can be classified into two groups: direct one-to-one comparison, or an indirect approach that requires the generation of labels, also known as fingerprints or hashkeys, which are then compared. One-to-one comparison algorithms are typically based around a cost function that measures the degree of similarity between two structures. As introduced above, the cost function will depend on the successful superimposition of the two structures, i.e. the translation and rotation of one cluster with respect to the other. Where Dirac delta functions are used to describe the position of an atom, the cost function will also depend on the matching of atomic pairs between the structures. This in itself can pose a formidable task (see for example ref. 33, which employs the Hungarian algorithm). [34][35][36] This problem is reduced for compounds or alloys if pairs are restricted to like species. Alternatively, where a Gaussian, or a similar function, is centred on each atom, the cost function is typically based on the degree of overlap of atom-centred Gaussians between the two clusters. For compounds and alloys, the overlap of Gaussians can be determined for each species type; there is no explicit need to match pairs of atoms. Goedecker employed a similar scheme, but based on atomic orbitals (see ref. 37). Both types of cost function can also be employed to find out whether, or how well, a smaller cluster matches a fragment of a larger cluster. In this article, we only compare pairs of clusters that have the same composition, and use only the species type and atomic coordinates as the input. One of the most straightforward and widely used metrics for the comparison of molecular structures is the root-mean-square deviation (RMSD) of the coordinates of equivalent atoms. 38,39 Following a similar idea, the metrics suggested by Sadeghi et al. 37 use configurational fingerprints based on eigenvalues of matrices of interatomic distances. The structural fingerprints are then compared by measuring the distances between them, as small fingerprint-based distances correspond to small RMSD distances. The H-FORMS (a hierarchical algorithm for molecular similarity) 46 approach estimates a rigid transformation that aligns structures and computes rotation-invariant descriptors, which are then used to match atoms.
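A minimal version of such a one-to-one cost function, assuming the two clusters have already been aligned, is sketched below: atoms of like species are matched with the Hungarian algorithm and the RMSD of the matched pairs is returned. This is our own illustration, not the implementation of any of the cited references.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def matched_rmsd(species_a, coords_a, species_b, coords_b):
    """species_*: length-N element lists; coords_*: (N, 3) numpy arrays, pre-aligned."""
    assert sorted(species_a) == sorted(species_b), "compositions must match"
    sq_sum, n = 0.0, len(species_a)
    for element in set(species_a):
        ia = [i for i, s in enumerate(species_a) if s == element]
        ib = [i for i, s in enumerate(species_b) if s == element]
        dist = cdist(coords_a[ia], coords_b[ib])        # like species only
        rows, cols = linear_sum_assignment(dist ** 2)   # optimal atom pairing
        sq_sum += (dist[rows, cols] ** 2).sum()
    return np.sqrt(sq_sum / n)
```

A value of (near) zero signals a duplicate structure; in a full comparison the relative rotation and translation would also be optimised before taking this minimum.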
Similarly, R. Hundt et al. implemented an algorithm in the analysis program KPLOT 40 based on the mapping of atomic patterns constructed using three-atom frame matches. An alternative approach to the problem of structure comparison exploits the properties of the nanoclusters, 41 such as radial distribution functions, vibrational frequencies 42 or principal moments of inertia. Whichever method is used, when a structure needs to be efficiently compared with vast data for thousands or millions of configurations, the chosen approach needs to be both robust and computationally affordable. The second class of comparison methods, based on comparing unique labels that are generated for every configurationally unique structure, may address this big-data challenge. Within our database, we implemented the approach first adopted in the KLMC software 47 to address the challenge of maintaining the diversity of structures during a genetic algorithm search. The approach relies on the NAUTY software package (No AUTomorphisms, Yes?) written by McKay and Piperno, 48 which can generate canonical labels for graphs and compute automorphisms between them. NAUTY labels graphs canonically by providing a string consisting of three 8-digit hexadecimal numbers depending on the graph, i.e. a set of vertices and edges, and, in general, every unique graph will have a unique NAUTY string, also known as a hashkey, or fingerprint. By exploiting this feature of uniqueness, we have incorporated NAUTY into the Bee software in the following way: each cluster is converted to a coloured graph by treating the atoms as vertices and the bonds between them as edges. The number of colours of vertices (atoms) is determined by the number of species in the structure. Thus, (MgO)n clusters will have two different colours (species), whereas clusters of a single element, e.g. titanium, will have only one. It is important to note that (KF)n clusters will also have two different colours; therefore graphs of (MgO)n and (KF)n clusters of the same size can be compared explicitly. The edges of the clusters' graphs are generated from the calculated interatomic distances between the atoms (vertices) of a cluster and can be thought of as "bonds" between atoms. The radial cut-off by which the "bonds" are determined depends on the species and is slightly longer than the expected actual bond length. A flowchart of the implemented hashkey generation is given in Fig. 2, where the (MgO)5 GM cluster is used as an example. Here, the (MgO)5 GM cluster (shown as a ball and stick model in Fig. 2a) is transformed into a coloured graph (shown in Fig. 2b). This graph is then processed using the NAUTY software package, which in turn generates a unique hashkey for the cluster. An example of a hashkey is shown in Fig. 2d. Given that the comparison of hashkeys is orders of magnitude faster than comparing atomic structures explicitly, each cluster within the Hive database is labelled with a hashkey. As described above, the hashkeys enable a rapid check of the database for duplicate structures by both the WASP and Bee software and are used in the generation of maps connecting similar structures (the network of links between clusters entered into the database is updated as soon as the atomic coordinates of generated and published LM nanoclusters are uploaded to the Hive), a feature that is not currently implemented in other structural databases. This feature has proven to be essential when the WASP interface is used to find out whether a newly discovered cluster is already within the Hive.
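The coloured-graph construction of Fig. 2 can be mimicked in a few lines. In the sketch below the species-dependent cut-offs are values we simply assume for illustration, and networkx's Weisfeiler-Lehman graph hash is used only as a convenient stand-in for the NAUTY canonical label employed by the Bee software; unlike a NAUTY canonical form, a WL hash is not strictly guaranteed to separate every pair of non-isomorphic graphs.

```python
import itertools
import numpy as np
import networkx as nx

# Assumed cut-offs, slightly longer than typical bond lengths (Angstrom).
CUTOFFS = {frozenset(("Mg", "O")): 2.4,
           frozenset(("Mg",)): 3.2,
           frozenset(("O",)): 3.2}

def cluster_hash(species, coords):
    graph = nx.Graph()
    for i, element in enumerate(species):
        graph.add_node(i, colour=element)                    # vertex colour = species
    for i, j in itertools.combinations(range(len(species)), 2):
        cutoff = CUTOFFS.get(frozenset((species[i], species[j])), 0.0)
        separation = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
        if separation <= cutoff:
            graph.add_edge(i, j)                             # edge = "bond"
    return nx.weisfeiler_lehman_graph_hash(graph, node_attr="colour")
```

Two clusters that share the resulting label share a connectivity (structural motif), even if they are different compounds of the same size.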
To demonstrate one of the utilities our database provides, we have used the generated hashkeys to identify unique structural motifs for a particular stoichiometry (1:1) and size (24 atoms). We then data-mined from this set, rather than from a set of LM configurations of one or all compounds in the Hive. Data normalisation Published LM cluster structures, which can be uploaded to the database, are, by definition, dependent upon the theory and the accuracy of the level of theory employed in the calculation of energy as a measure of stability. Moreover, the measure of fitness may also be based on the deviation from some geometric, physical or chemical observable(s). When LM on a potential energy landscape are targeted, energy calculations at different levels of theory (quantum mechanical (all-electron or pseudopotential), semi-empirical, Hartree-Fock, DFT, tight-binding, semi-classical, or atomistic simulations) yield values that may scatter across a few orders of magnitude. Even if a similar method is chosen, e.g. DFT with identical basis sets and, possibly, effective core potentials, employing different exchange and correlation density functionals could still lead to substantially different values. The situation is just as problematic if semi-classical simulations are employed, as there are often many different sets of parameterised interatomic potentials for the same material or compound. One trick commonly used across the field of materials chemistry is to switch from total to binding or cohesion energies, which can be expected to behave better, and do in practice. 49 The scatter in the calculated binding energy values obtained using different approaches is usually, however, still greater than the energy separating low-ranking energy minima on the same energy landscape (i.e. for a given definition of energy). In practice, the WASP interface lets users upload their data without any restrictions on how the data were obtained, but encourages the users to provide details of the adopted computational approach as metadata. To support the comparison of individual structures obtained using different energy definitions, we introduced an internal standard attained by a data normalisation routine. In particular, when data are uploaded to the Hive database, they are automatically refined by the Bee software using the all-electron, full-potential electronic structure code FHI-aims 50 with the PBEsol functional 51-53 and the light basis set (which is variationally equivalent to split-valence double-zeta Gaussian plus polarisation basis sets but can obtain energies that are much closer to the basis set limit). Further computational parameters are provided in the ESI.† After normalisation, the newly obtained structure is automatically uploaded to the Hive database with a two-way link between the original and normalised configurations, along with similarity links to the whole dataset in the database. Hence, the user can search for structures that refine to the same LM on our normalised energy landscape (particularly useful for the investigation of nanoclusters of the same compound) or structures of any compound with the same connectivity (structural motif), as explained in the previous section.
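As a worked illustration of the switch from total to binding (or cohesion) energies mentioned above, referencing a cluster's total energy to isolated-species reference energies computed at the same level of theory removes most of the method-dependent offset; the numbers below are placeholders rather than Hive data.

```python
def binding_energy_per_atom(e_cluster, species, e_reference):
    """All energies in the same units and obtained with the same energy definition."""
    e_isolated = sum(e_reference[s] for s in species)
    return (e_cluster - e_isolated) / len(species)

# Hypothetical values (eV) for a made-up (MgO)2 cluster:
print(binding_energy_per_atom(-18.4, ["Mg", "O", "Mg", "O"], {"Mg": -0.1, "O": -1.5}))
```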
Data mining Starting from a known set of atomic configurations with the target stoichiometry and total number of atoms, the Data Mining (DM) module of the KLMC software package 54 rescales each configuration to obtain an estimate of the expected nearest-neighbour interatomic distances for the target compound and then, using third-party software, relaxes the rescaled atomic structures to LM. In the results shown below, we employ GULP 55 as the third-party software, i.e. a semi-classical level of theory is used for the calculation of energies (and atomic forces). After the rescaling and refinement procedure, KLMC is also employed to analyse the resulting configurations in terms of their energy ranking, uniqueness and geometrical properties. Global optimisation A Lamarckian genetic algorithm (GA) approach implemented in the KLMC software package 47 was also used to locate LM on the energy landscape defined by the same set of interatomic potentials (semi-classical level of theory) as those used in the data-mining investigation. We note that the ability of the KLMC GA 47 to locate LM and GM efficiently has been proven for various types of system, and thus it is chosen here as a method for providing reliable data that we can use to assess the results obtained using the data-mining approach. The population of each GA run was set to 200 candidate structures, with the initial random structures generated within a 15 Å × 15 Å × 15 Å cubic simulation box. Default values, as given in ref. 47, were used for the remaining simulation parameters. Isomorphic structures, or structural motifs As an illustration of how the connectivity maps are employed, we consider the case of a GM nanocluster reported in ref. 56 for (MgO)7 that has the symmetry point group C3v; see Fig. 3a. The topological analysis tool finds that this structure has "7Mg3-7O3" topology, i.e. seven Mg and seven O atoms, each with a coordination number of three. When selected using the WASP interface for the Hive, beneath the rotatable ball and stick model of this structure are two lists: one showing the standardised entry for this configuration (as described earlier), and another showing all the "isomorphic structures" found in the Hive based on matching hashkeys (as also described above). A snapshot of the second list is shown in Fig. 4. In our chosen example, the (MgO)7 GM structure currently has eleven isomorphic structures: eleven atomic configurations within the Hive have the same hashkey as our chosen example. The inclusion of a DOI in the entry for a candidate structure in this list indicates that it is a published LM. The remaining five are, therefore, standardised LM (using FHI-aims). As more entries are submitted to the Hive, we would expect many more matches to be found. The six published LM show that this structural motif is also reported 54,56,57 to be the GM for (KF)7, (CaO)7, (SrO)7, (BaO)7 and (CdSe)7. There is also another (MgO)7 configuration, which has a different DOI 54 to that of the original chosen structure. Given that there are six different compounds with the same structural motif, we would expect six standardised LM. The two published LM entries for (MgO)7, the same compound, relax to the same standardised LM. To find all the nanoclusters within the Hive that relax to the same standardised LM, the user only needs to click on the thumbnail of the standardised nanocluster. In our example, the missing standardised LM results from the standardised configuration for (CdSe)7 relaxing to a different LM. It therefore has a different hashkey, as it is a different structure (in fact, it has C1 point symmetry).
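Returning to the data-mining (DM) procedure described at the start of this section, its rescaling step can be sketched as below; this is our own illustration rather than the KLMC DM module, and the target nearest-neighbour distance would in practice be estimated for the target compound before the structure is relaxed with a third-party code such as GULP.

```python
import numpy as np

def rescale_to_target(coords, species, target_nn, cation="Sr", anion="O"):
    """Uniformly scale a configuration so that its shortest cation-anion separation
    equals target_nn (the expected nearest-neighbour distance of the target compound)."""
    coords = np.asarray(coords, dtype=float)
    cations = [i for i, s in enumerate(species) if s == cation]
    anions = [i for i, s in enumerate(species) if s == anion]
    d_min = min(np.linalg.norm(coords[i] - coords[j]) for i in cations for j in anions)
    return coords * (target_nn / d_min)
```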
Efficient structure prediction The Hive contains the LM atomic structures for numerous binary compounds with 1:1 stoichiometry and a total charge of 0. We now concentrate on one particular size: clusters composed of 12 cations and 12 anions. To investigate a compound that is missing from the Hive database, one could data-mine structures already in the Hive for a similar compound. The success of this approach would rely on the chosen set of initial configurations; the more extensive this set, the greater the probability of finding the target LM. To maximise this probability one could data-mine all the compounds; however, this would generate many copies of each LM. Using the hashkey, which provides a unique identifier for each structural motif, we were able to reduce this initial set to just over 100 unique structural motifs (which we will refer to as the DM-set). If the database contained entries for alkali halides, (XY)12, and alkaline earth oxides, (ZO)12, for X = Li to Cs, Y = F to I, and Z = Mg to Ba, then potentially there would be a maximum reduction of 96%. The determination of this reduced set (calculation and comparison of hashkeys) is orders of magnitude faster to perform than the additional structural relaxations (using standard algorithms within an electronic structure code) that would have been necessary if we could not determine equivalent structures. Moreover, data-mining requires the evaluation of far fewer candidate structures than is typically performed in a stochastic approach. It is expected that the number of datasets within the Hive will grow, and that important unique structural motifs may have been missed given that our search was performed soon after we created this database. Stochastic approaches may also miss important LM, and the number of unique motifs is likely to increase much more slowly than the number of entries for clusters of any particular size, charge and stoichiometry. Using our DM-set of unique LM, we now investigate three different compounds that were not included in the initial dataset taken from the Hive, namely (LiI)12, (SrO)12 and (GaAs)12. As the main focus of this article is the methodology, as opposed to the physical and electronic properties of the predicted nanoclusters, we have chosen to present new IP-LM structures, i.e. the atomic configurations and ranks of local minima on the energy landscape are defined using interatomic potentials (IP), the parameters of which are given in Tables 1 and 2. For each compound we also perform a search for low-energy IP-LM using an evolutionary algorithm; details of both methods are described in the previous section. We note that the potential parameters for LiI were taken from ref. 58. Table 1 Parameters for the Buckingham potential, A exp(-r/ρ) - C/r^6, applied between ions X and Y. Columns: X-Y, A (eV), ρ (Å), C (Å^6 eV). Table 2 Parameters for the shell model for ions X, where Q and Y are the point charges of the core and shell, which are connected by a spring with constants k2 and k4. The Coulomb contribution to the energy between the point charges of an individual ion X is replaced with the energy associated with the spring, (1/2)k2 x^2 + (1/4)k4 x^4, where x is the distance between the core and shell. Note that the strontium cation is treated as a rigid ion and therefore only has one parameter.
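To make the functional forms referred to in Tables 1 and 2 concrete, the fragment below evaluates the Buckingham pair energy and the anharmonic core-shell spring energy; the parameter values are illustrative placeholders, not the fitted values from the tables, and the fragment is a sketch for intuition rather than the GULP implementation.

```python
import numpy as np

def buckingham(r, A, rho, C):
    """Short-range pair energy A*exp(-r/rho) - C/r**6 between ions X and Y."""
    return A * np.exp(-r / rho) - C / r**6

def shell_spring(x, k2, k4):
    """Core-shell spring energy (1/2)*k2*x**2 + (1/4)*k4*x**4 of a polarisable ion."""
    return 0.5 * k2 * x**2 + 0.25 * k4 * x**4

# Illustrative numbers only: a cation-anion pair at 2.2 Angstrom and a shell
# displaced by 0.05 Angstrom from its core.
print(buckingham(2.2, A=1000.0, rho=0.3, C=10.0))
print(shell_spring(0.05, k2=20.0, k4=10000.0))
```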
The small spring constant for the lithium cation caused problems during the global optimisation runs; during the relaxations of new candidate structures (particularly the random structures used in the initial population), the initial electric fields were sometimes strong enough that during structural relaxation the shell was stripped away from the cation. It is known that the polarisability of an ion depends upon the electric field, which is much stronger for our clusters than that experienced within the bulk. Thus, in our simulations, we doubled the value of the spring constant for lithium cations, which reflects the apparent reduction in their coordination number compared to the bulk. The results from data-mining our DM-set of unique LM are shown in Fig. 5-7. For strontium oxide, lithium iodide and gallium arsenide, 47, 50 and 41 LM structures were generated, respectively, i.e. not all the structural motifs of one compound were locally stable for another. Moreover, a different global minimum was found for each compound. Labelled DM01 in Fig. 5, the D3d barrel was found to be the IP-GM for (SrO)12, whereas for (LiI)12 and (GaAs)12 it was ranked fourth and second, respectively. The 2 × 2 × 6 D2d configuration of alternating atoms, labelled DM01 in Fig. 6, was found to be the IP-GM for (LiI)12. One can imagine that this cuboid configuration could be cut from the NaCl rock salt structure, and thus it is not surprising that this structural motif was not generated for (GaAs)12. The Th sodalite cage, so named as it is a basic building block of the sodalite bulk structure (given the abbreviation SOD by the zeolite community), was found to be the IP-GM for (GaAs)12. This configuration was ranked fifth and thirty-eighth for (SrO)12 and (LiI)12, respectively. Comparing the ball and stick models for different compounds but for the same structural motif, one noticeable difference between the LM for lithium iodide and those of the other two compounds is the sharper (more acute) bond angles, which directly result from the greater polarisability of the iodide anion. Essentially, the iodide anions sit further out from the cluster's centre of mass than the lithium cations. To check the current success of data-mining the Hive for these three compounds, we also conducted global optimisation on each of the three IP energy landscapes for low-lying LM. We present the results as three density-of-LM graphs; see Fig. 8. In the panel inset for each compound it is very clear that the data-mined LM represent only a sample of all the possible LM. In terms of ranking, fortunately, the missing LM tend to be mid-range rather than at the more stable end (which, typically, is where there is most interest). Looking more closely at the top-ranked LM, we identified which IP-LM structures are missing; these are shown in Fig. 9. For strontium oxide clusters, the first six missing LM were ranked 6, 7, 8, 9, 13 and 16. The first three of these are basic rock-salt cuts that could have been included in our data-mined set if we had included the structures from ref. 54 (we did not, as that paper includes data-mined structures for alkaline earth oxides, one of which is one of the compounds we chose to investigate). The GA08 cuboid configuration was in fact found as the IP-GM for (LiI)12. Generating this LM during the data-mining process was fortuitous, given that this structural motif was not included in the DM-set of unique LM. GA09 and GA13 are composed of an n = 6 drum (typically the IP-GM for (XY)6) and 2 × 2 × m cuboids.
More interesting is the GA16 configuration, which we have seen previously; it has an unusual distorted planar four-coordinated oxygen anion site. For lithium iodide clusters, the first six missing LM structures were ranked 3, 4, 5, 7, 8 and 9. Unlike our DM-set, these configurations, which we will refer to as HC, have at least one highly coordinated (greater than 4) anion site and are not one of the possible cuboid cuts from the NaCl rock salt phase. Given the stability of this type of structure, quite a few of the better-ranked structures were missed. As already seen, any unstable LM in the DM-set can lead to new structural LM, and thus we did not miss all of the HC structures; the enantiomer of GA03 was found (labelled as DM03 in Fig. 6 and ranked equal third). For gallium arsenide clusters, data-mining the DM-set was much more successful in that only four additional IP-LM structures were found in the top thirty; the first four missing LM structures were ranked 7, 14, 21 and 29. Of these, GA07 is the result of merging the IP-GM for n = 6 (a drum) and n = 9 (a bubble) across a hexagonal face; GA14 is very similar to the GA16 LM that was missed for (SrO)12; GA21 has the same structural motif as DM18, but with all the anions switched for cations, and vice versa, cf. DM23 and DM24 and also GA06 and GA07 for strontium oxide. We note that the DM and GA runs found different chiral versions of DM23 and DM24. Finally, we should reiterate that the structures reported above for LiI, SrO and GaAs were obtained on the interatomic potential landscape. These potentials were originally parameterised for bulk compounds, where atoms are typically in higher coordinated environments, and therefore such parameterisations are very limited in scope. For example, arsenide anions are highly polarisable, and more realistic structures should be expected to have more buckled shapes, as seen above in the LiI configurations. The latter proved to be easier to optimise due to the relatively low charges on Li and I. Notwithstanding this, the structures obtained here will be uploaded to the Hive and refined using our chosen ab initio approach, which will both give the findings more credence for future applications and allow the parameters of the interatomic potentials to be refined. The latter is an important element of machine-learning techniques that have been particularly successful in studies of metallic clusters. 59,60 Fig. 9 Ball and stick models of (XY)12 IP-LM configurations obtained by the genetic algorithm that were missing from the IP-LM found using the data-mining approach. The colour scheme is shown in the lower right-hand panel and is the same as that employed in previous figures. The numbers in the GA** labels indicate the rank found for the nanocluster, where 01 indicates the IP-GM, whereas in the previous labels, DM**, they indicate the rank before the missing IP-LM were found using the GA. Conclusions We have presented, for the first time, details of our database of published atomic configurations of nanoclusters. We have described the algorithms employed within this database to establish whether two entries are equivalent LM for a particular compound and whether configurations of different compounds are equivalent when judged using connectivity arguments, and have shown how to exploit these data in order to predict structures for three new compounds.
The database provides initial model structures, which were traditionally obtained from experiments, configurations that can be employed in structure prediction using a data-mining approach, and a way of checking whether a candidate structure is indeed new. Data-mining the set of configurations for (XY)12 structures that have a unique hashkey proved relatively successful in that the top two LM configurations for each of the three compounds were found. However, global optimisation techniques are still required for compounds that are chemically distinct enough that their low-energy LM structures do not match configurations already in the database when judged using our connectivity arguments. This will of course change with time, as more data is entered into the database. Lessons learnt in the creation of the Hive and the associated WASP interface as a toolkit will be of direct use for further work on nucleation and crystallisation processes, 61 crucially the nucleation and growth of small particles on or in solid supports and liquid environments. The LM atomic configurations in the database are also readily usable as secondary building units (SBU) for constructing crystal structures. 6,8,10,[62][63][64][65][66][67][68] Here, using low-energy SBUs that do not resemble cuts from the main phases of the chosen compounds will produce more interesting results. Conflicts of interest There are no conflicts to declare.
8,066.8
2018-10-25T00:00:00.000
[ "Computer Science" ]
THE SCIENTIFIC LITERACY ENABLES POLICYMAKERS TO LEGISLATE ON ARTIFICIAL INTELLIGENCE: This research emphasises the significance of scientific literacy for policymakers about the future trajectory of artificial intelligence. The ethical concerns surrounding the development of artificial intelligence are of utmost importance due to its potential social effect. Integrating AI systems into many sectors of society, such as healthcare and banking, necessitates adherence to ethical principles. Strict ethical frameworks must be implemented alongside the development of AI to safeguard against biases, privacy infringement, and ethical shortcomings. Researchers, developers, and policymakers must exercise constant vigilance to address concerns about transparency, accountability, and justice in AI systems. The ethical ramifications of artificial intelligence (AI) transcend technology, including significant ethical considerations for both people and society. Active engagement in ethical deliberations among stakeholders involved in AI development is of utmost importance to guarantee AI's responsible and sustainable deployment. This is a pivotal element in realising the whole potential of AI for the betterment of society. Politicians must comprehensively understand the scientific ideas behind AI to enact legislation in this field effectively. Introduction The field of Artificial Intelligence (AI) has a diverse and extensive history, with its roots dating back to the 1950s, when influential figures such as Alan Turing (Muggleton, 2014) and John McCarthy (Andresen, 2002) established the fundamental principles that underpin its advancement. Artificial intelligence aims to develop intelligent devices that imitate human cognitive abilities, resulting in progress in several domains such as automation, healthcare, transportation, and others. The complex interaction of science, ethics, and politics becomes more evident as AI technologies advance. A comprehensive grasp of the historical backdrop and theoretical foundations of artificial intelligence (AI) is essential for policymakers to traverse the intricate terrain of regulating this revolutionary technology (Dwivedi et al., 2021). Scientific literacy is paramount in providing legislators with the requisite information and competencies to enact legislation about artificial intelligence (AI) that effectively reconciles innovation with ethical deliberations and social ramifications. The influence of legislation on the ethical and practical aspects of artificial intelligence (AI) use is of utmost importance (Wirtz et al., 2020). Governments may tackle problems such as bias in AI algorithms, privacy concerns, and responsibility for AI decision-making by implementing legislation and regulations. Such legislation establishes a structure for reducing possible hazards linked to AI technology and guarantees its responsible development and use (Chiappetta, 2023). In addition, legislation can stimulate innovation by offering corporations clarity and assurance when they allocate resources towards AI research and development. The establishment of data protection and privacy requirements in AI applications by the General Data Protection Regulation (GDPR) in the European Union has significantly impacted worldwide conversations around AI governance and accountability (Kronivets et al., 2024). Legislation plays a crucial role in establishing rules that facilitate the appropriate and secure use of artificial intelligence (AI) technology across many sectors of society. According to Duncan et al.
(2020) and Weingart et al. (2021), the importance of scientific literacy cannot be overstated when it comes to influencing policy-making choices. In artificial intelligence (AI), policymakers must comprehensively understand AI's fundamental scientific concepts, prospective advantages, drawbacks, and ethical implications to formulate efficacious legislation (Stahl et al., 2021). According to Kotsis (2024), possessing scientific literacy enables politicians to effectively interact with specialists, evaluate the credibility of scientific assertions, and make educated choices that align with society's needs and ideals. Governments may foster innovation and mitigate risks associated with AI technology by making policy choices grounded in robust scientific data (Meneceur, 2023). In order to be effective, regulations must strike a balance between promoting developments in artificial intelligence (AI) and tackling the accompanying issues, such as data privacy, prejudice, and job displacement. Scientific literacy provides policymakers with the essential resources to effectively traverse the intricate realm of artificial intelligence (AI) and implement policies that foster responsible research and use of AI. Adopting a comprehensive strategy may guarantee that AI technologies positively impact society while minimising adverse consequences (Vinuesa et al., 2020). Understanding Artificial Intelligence The idea and use of artificial intelligence (AI) span various definitions and scopes. Artificial intelligence (AI) encompasses replicating human cognitive processes via computers, including learning, reasoning, and problem-solving (Fogel, 2022). The domain of artificial intelligence encompasses a spectrum that spans from rudimentary rule-based systems to intricate neural networks that emulate the human brain's operations. The field encompasses several areas of study, including machine learning (Soori et al., 2022), natural language processing (Nagarhalli et al., 2021), computer vision (Ayub Khan et al., 2021), and robotics (Vrontis et al., 2022). Comprehending AI's extensive scope and profound nature is essential for legislators seeking to enact legislation efficiently in this swiftly developing domain. By comprehending the capabilities and constraints of artificial intelligence (AI) technology, policymakers may implement well-informed regulations that foster innovation while mitigating possible hazards. In order to properly traverse the complexity of AI legislation, it is essential to possess a strong basis in scientific literacy. Furthermore, possessing a comprehensive understanding of scientific concepts may assist policymakers in effectively managing the promotion of AI progress while simultaneously addressing ethical and social issues (Wróbel, 2022).
Artificial intelligence has grown pervasive in society, with a wide range of applications from recommendation systems to autonomous vehicles. Artificial intelligence (AI) plays a crucial role in healthcare by assisting in diagnoses, medication exploration, and tailored treatment strategies, improving patient outcomes (Sankarnarayanan et al., 2023). In addition, artificial intelligence algorithms are used within the banking sector to examine market patterns, enhance trading techniques, and identify fraudulent behaviour, ensuring that financial institutions function efficiently and securely. Artificial intelligence (AI) in social media platforms facilitates the implementation of targeted advertising, user customisation, and content moderation, enhancing the digital environment's dynamic character. Nevertheless, ethical considerations emerge about data privacy, algorithmic prejudices, and the influence of AI on the labour market. To effectively address these intricacies, policymakers must implement rules that strike a harmonious equilibrium between innovation and the welfare of society (Dirgová Luptáková et al., 2024). By comprehending AI's many and complex uses, legislators can formulate well-informed regulations that foster technical progress while upholding ethical considerations. Role of Politicians in AI Legislation The regulation of artificial intelligence (AI) is of paramount importance to politicians, given its extensive societal implications (Tinnirello, 2022). First and foremost, they need to comprehend the technical facets of AI to formulate efficacious policies that tackle possible hazards while promoting innovation (Taeihagh, 2021). A certain degree of scientific literacy is required to comprehend the complexities of artificial intelligence systems and their ramifications. Additionally, it is essential for lawmakers to actively include professionals, researchers, and stakeholders in order to bring a wide range of viewpoints and ideas into the legal frameworks required for artificial intelligence (AI). The involvement of technologists and ethicists is essential in formulating legislation that effectively reconciles innovation with ethical deliberations. Finally, politicians must ensure regulatory transparency to preserve public confidence and accountability in the use of AI. According to Feijóo et al. (2020), legislators can effectively negotiate the complex terrain of AI governance and establish a legal framework that fosters AI technology's secure and ethical development by meeting these obligations. In order to effectively regulate AI, politicians need to possess a comprehensive comprehension of AI principles and actively participate in well-informed decision-making processes.
The challenges legislators encounter in AI policy are diverse and intricate. A significant obstacle is the fast advancement of artificial intelligence technologies, which often outpaces the progress of legal frameworks (Zekos, 2022). This gives rise to a situation in which policymakers have difficulty staying abreast of the most recent innovations and their possible ramifications for society. Moreover, the multifaceted character of artificial intelligence necessitates that lawmakers possess a comprehensive comprehension of technological, ethical, and legal dimensions, presenting a formidable obstacle for those without a scientific foundation. Furthermore, the worldwide scope of AI requires international collaboration and uniform legislation to properly tackle difficulties that transcend national borders (Ala-Pietilä & Smuha, 2021). In order to surmount these challenges, policymakers must actively include specialists from other domains, provide resources for ongoing education, and cooperate internationally to establish flexible and resilient legal structures that guarantee the responsible use of AI technology for the betterment of society. The societal implications of AI regulation are a complex matter that needs meticulous examination (Aizenberg & van den Hoven, 2020). Given governments' global regulation of AI technologies, it is imperative to comprehensively analyse their possible societal impacts (Wischmeyer & Rademacher, 2020). Legislation determines AI development, deployment, and use in many fields. Appropriate rules are needed to address ethical problems, including algorithmic prejudice, privacy infringement, and employment displacement (Molina et al., 2024). Furthermore, given the rapid progress in AI, it is essential for legislation to possess the flexibility to adapt to evolving technical environments while maintaining social principles. Policymakers may effectively address the challenges of AI integration and mitigate adverse social effects by implementing well-considered and proactive legislation. The ethical and social implications of AI adoption in future research and policymaking agendas will be strongly influenced by the effect of AI laws on society. Importance of Scientific Literacy in Policy Making Scientific literacy is the ability to comprehend, analyse, and assess scientific information in order to make well-informed choices on scientific matters (Valladares, 2021). The complex nature of this term encompasses a comprehensive understanding of fundamental scientific concepts and the ability to engage in analytical thinking, logical reasoning, and effective communication about scientific matters (Janoušková et al., 2023). Scientific literacy encompasses many vital elements, including comprehension of the scientific process, differentiation between correlation and causation (Osborne, 2023), identification of bias in scientific research, and assessment of the credibility of scientific sources (Osborne & Pimentel, 2023). Moreover, scientific literacy encompasses using scientific knowledge to address practical issues effectively and the aptitude to participate in decision-making grounded on empirical evidence (Ben-Horin et al., 2023). Within artificial intelligence legislation, politicians with a considerable degree of scientific literacy can formulate laws firmly rooted in scientific data. This enables them to cultivate advancements in AI technology while simultaneously minimising societal hazards. According to Tasquier et al.
(2022), acquiring scientific literacy provides policymakers with the requisite information and analytical abilities to traverse the intricate landscape of developing technologies such as artificial intelligence (AI). Comprehending the fundamental concepts behind artificial intelligence (AI) enables policymakers to formulate rules grounded in empirical facts, thus fostering innovation while mitigating adverse consequences. Policymakers may enhance the effectiveness of decision-making processes by possessing scientific literacy, which enables them to participate in well-informed conversations with experts and stakeholders (Kuziemski & Misuraca, 2020). Moreover, scientific literacy cultivates a more profound understanding of AI's ethical and social consequences, enabling legislators to foresee and tackle the broader ramifications of their legislative measures. In order to ensure that the governance of artificial intelligence (AI) is in line with the collective welfare of society, it is imperative to prioritise the development of scientific literacy among policymakers. This entails balancing technical progress and ethical concerns and effectively managing risks (Michal et al., 2021). An exemplary case study that offers valuable insights into the direct influence of scientific literacy on policymaking in artificial intelligence (AI) is the General Data Protection Regulation (GDPR) of the European Parliament. This regulation was significantly shaped by scientific reports examining AI's consequences for data privacy and security. Policymakers may develop comprehensive legislation to protect personal data in the digital era by possessing a scientifically literate comprehension of artificial intelligence (AI). Furthermore, research conducted by the National Academies of Sciences (2016) revealed that lawmakers with a considerable degree of scientific literacy had a greater propensity to suggest and endorse artificial intelligence (AI) regulations that effectively reconciled advancements with ethical deliberations. These case studies highlight the crucial significance of scientific literacy in influencing the development of AI policies that yield societal benefits. Scientific literacy is a fundamental framework for legislators to implement well-informed and progressive laws within the dynamic realm of artificial intelligence.
Enhancing Scientific Literacy for Effective AI Legislation Implementing strategies to enhance scientific literacy among politicians is crucial in facilitating their ability to make well-informed judgments about intricate matters (Kotsis, 2024). The subject at hand is artificial intelligence (AI). One potential strategy is the establishment of collaborative alliances among scientific organisations, policymakers, and educational entities to enhance the dissemination of precise and easily understandable scientific knowledge. According to Starke and Lünich (2020), implementing customised training programs and seminars focused on essential scientific principles of artificial intelligence (AI) can potentially augment lawmakers' comprehension and capacity to participate actively in legislative procedures. Promoting the inclusion of scientific advisers within political teams may provide significant contributions in terms of scientific expertise and assistance. Moreover, facilitating transparent communication between scientists and policymakers helps cultivate a climate of policymaking based on empirical data, guaranteeing that choices concerning AI are firmly rooted in scientific expertise rather than conjecture or false information (Dwivedi et al., 2024). The enhancement of scientific literacy among politicians may be achieved by implementing these tactics, resulting in better governance in AI legislation. Establishing appropriate regulations for developing technologies, such as artificial intelligence (AI), necessitates collaboration between scientists and politicians (Luan et al., 2020). Scientists possess the requisite skills and data to provide policymakers with the information to make well-informed choices, considering the scientific and social ramifications of AI progress. These partnerships may serve as a connection between technical expertise and the execution of policies, guaranteeing that laws are based on factual data and scientific agreement (Bhandari, 2023). Through collaboration, scientists and politicians can effectively tackle the intricate ethical, legal, and social dilemmas posed by AI, resulting in decision-making processes that are more informed and more equitable. These collaborative efforts contribute to the improvement of legislative quality and the advancement of responsible innovation in the field of artificial intelligence, ultimately yielding societal benefits. Therefore, it is crucial to cultivate these connections in order to develop AI policies that are both efficient and morally sound (Udvaros & Forman, 2023). Training programs focused on artificial intelligence (AI) and scientific ideas are crucial in equipping legislators with the necessary skills to enact legislation within the dynamic and ever-changing technological environment. These programs can assist legislators in comprehending the consequences and prospective social repercussions of artificial intelligence (AI), empowering them to make well-informed judgments about regulatory frameworks and policies. According to Ulnicane and Aden (2023), legislators may enhance their ability to tackle the intricacies of AI law by being offered educational resources on scientific concepts, including machine learning algorithms, data protection, and ethical issues in AI research. Moreover, training programs have the potential to cultivate cooperation between policymakers and field specialists, resulting in informed and equitable decision-making. Integrating artificial intelligence (AI) and scientific literacy into political
education gives politicians the tools to effectively traverse the complex interplay between technology, society, and government. This integration ultimately enhances the effectiveness and adaptability of policy-making processes (Straub, 2023). An all-encompassing training program customised to the individual requirements of politicians may enable them to successfully tackle difficulties and capitalise on opportunities associated with artificial intelligence. Nations that have implemented stringent legislation on artificial intelligence (AI), such as the European Union (EU), Canada, and Singapore, have implemented extensive policies aimed at tackling the ethical and legal complexities associated with AI (Huang & Peissl, 2023). These policies guarantee openness, accountability, and equity in developing and implementing AI systems. An illustration of this may be seen in the General Data Protection Regulation (GDPR) of the European Union, which sets out requirements for data protection and privacy in AI applications. Similarly, Canada's Directive on Automated Decision-Making underscores the need for AI algorithms that can be explained. The Model AI Governance Framework in Singapore delineates rules that govern the proper use of artificial intelligence (AI), emphasising the significance of human supervision and responsibility in AI systems (Umer & Adnan, 2024). By implementing this legislation, these nations are establishing a universal benchmark for the governance of artificial intelligence (AI), which has the potential to provide valuable guidance to policymakers throughout the globe about the ethical and legal considerations associated with AI technology. The development of efficient policy addressing artificial intelligence (AI) necessitates the presence of scientific literacy among politicians (Stolpe & Hallström, 2024). According to Hudson et al. (2023), politicians who comprehensively comprehend scientific concepts are more adept at effectively addressing the intricate challenges associated with AI technology. Proficient policymakers with scientific knowledge may accurately perceive the intricacies of AI policies, effectively managing the trade-off between innovation, ethical concerns, and possible hazards. According to Buhmann and Fieseler (2023), such individuals possess the capacity to critically evaluate scientific data, enabling them to make well-informed judgments that contribute to the betterment of society. Studies suggest that legislators with a strong understanding of scientific principles are more inclined to adopt evidence-based ways of crafting policies, resulting in more thorough and progressive laws on artificial intelligence. By promoting the incorporation of scientific literacy into political leadership, policymakers have the potential to cultivate a more favourable atmosphere for the formulation and execution of AI policies that place equal emphasis on public welfare and technical progress. An essential element is the need for interdisciplinary cooperation among professionals in technology, ethics, law, and policy to formulate comprehensive and progressive rules. Furthermore, comprehending the social ramifications of AI applications is essential to developing legislation that is both effective and advantageous to the public interest. By examining AI legislative efforts, policymakers may pinpoint optimal strategies and potential drawbacks to guide forthcoming regulatory frameworks that foster AI technology's responsible development and use.
Future Prospects and Recommendations With the rapid advancement of AI, the future of AI law has become a central concern. One hypothesis posits the need to implement extensive regulatory measures to effectively tackle AI technology's ethical and social ramifications, guaranteeing that its use is consistent with human values and rights. Such law needs to strike a delicate equilibrium between promoting innovation and mitigating possible hazards, including but not limited to prejudice, discrimination, and privacy issues. Furthermore, legislators will encounter difficulty staying abreast of the fast advancement of AI systems, necessitating flexible frameworks that can promptly adjust to emerging technologies. Developing solid regulations that support ethical AI deployment and nurture responsible innovation will require collaboration across governments, industry experts, and stakeholders. The future of artificial intelligence (AI) laws will profoundly impact both the technical domain and the societal structure. Several proposals might be made to improve scientific literacy among policymakers and facilitate the development of more effective laws for artificial intelligence (AI). In order to make well-informed judgments, policymakers need to engage in ongoing education and training about pertinent scientific ideas and breakthroughs in artificial intelligence (AI) technology. This may include various educational activities such as workshops, seminars, and engagement with domain experts. Furthermore, policymakers must emphasise fostering multidisciplinary cooperation among scientists, engineers, ethicists, and policymakers to formulate all-encompassing AI regulations that consider both technological and ethical ramifications. Finally, policymakers must give precedence to openness in the decision-making procedures for AI laws, guaranteeing that the general public is adequately informed and actively engaged in the deliberations (Brauner et al., 2023). Policymakers may effectively traverse the intricate terrain of AI law by incorporating these principles, establishing a robust foundation of scientific literacy. A more profound comprehension of AI technology's fundamental principles and possible consequences will empower legislators to make well-informed choices that balance innovation and ethical issues (Coulthart et al., 2024). Enhanced scientific literacy among legislators facilitates a deeper understanding of the intricate nature of artificial intelligence (AI) processes, enabling the development of more comprehensive legislative frameworks to tackle bias, privacy, and transparency (O'Shaughnessy et al., 2023). Moreover, a deeper understanding of artificial intelligence (AI) principles will enhance the exchange of information between policymakers and industry professionals, promoting cooperation and facilitating well-informed decision-making. According to König et al. (2023), integrating scientific literacy into the legislative process enables policymakers to take proactive measures in addressing the developing issues presented by AI technologies. This approach ensures that legislation is both adaptable and thorough.
Conclusion In summary, the level of scientific literacy plays a crucial role in influencing the capacity of legislators to enact legislation that is both effective and relevant to matters concerning artificial intelligence (AI). A comprehensive examination of the available scholarly works demonstrates that a solid grounding in scientific principles empowers policymakers to comprehend the intricacies of artificial intelligence (AI) technologies, assess prospective hazards, and arrive at well-informed perspectives on regulatory frameworks. Politicians with advanced scientific literacy are more adept at interacting with experts, evaluating the consequences of AI policies, and communicating with the public. The results above highlight the need to augment scientific literacy among government officials to effectively tackle the problems presented by artificial intelligence within a swiftly changing technological environment. Incorporating scientific education into policymaking may result in superior decision-making that is well-informed and grounded in facts, hence fostering responsible development and use of AI. In summary, our study underscores the significant importance of scientific literacy in influencing the development of efficient solutions for AI governance. The profound consequences of scientific literacy in artificial intelligence (AI) have significant ramifications for future governance. As legislators confront the complexities associated with regulating artificial intelligence (AI) technologies, it becomes essential to comprehend thoroughly the scientific concepts behind these advancements. Policymakers with scientific literacy can formulate more efficient and well-informed policies, balancing technical progress, ethical concerns, and social consequences. Governments may effectively traverse complex AI ecosystems by basing policy choices on scientific understanding and promoting innovation while mitigating possible hazards. Furthermore, adopting a scientifically literate approach to policymaking can foster multidisciplinary cooperation among scientists, engineers, ethicists, and policymakers. This collaboration may contribute to creating comprehensive and forward-looking regulatory frameworks for developing and implementing artificial intelligence. Incorporating scientific literacy into policy-making processes is crucial to maintaining the relevance and adaptability of laws within the dynamic realm of artificial intelligence. Further study on scientific literacy and its influence on policymaking for artificial intelligence should concentrate on the efficacy of educational programs in improving legislators' knowledge of AI. Investigating these programs' exact content and distribution methods might give significant information about optimising their effect. Furthermore, investigating the function of multidisciplinary cooperation in increasing scientific literacy among legislators might provide a comprehensive strategy for resolving the complexities of AI law. In addition, longitudinal studies that follow the impact of scientific literacy on AI policy choices would help us better understand its long-term consequences. Future studies on these topics may give practical advice for establishing educational efforts that equip legislators with the information they need to negotiate the complex environment of AI policy.
To summarise, it seems clear that increasing scientific literacy among legislators is critical to legislating properly for artificial intelligence. The complexity of AI technology needs a comprehensive understanding of its ramifications for society, the economy, and ethics. By providing policymakers with the essential information and abilities, they can make informed choices that determine the future of AI governance. Furthermore, establishing multidisciplinary cooperation among scientists, politicians, and the general public is critical for drafting comprehensive legislation that considers all stakeholders' viewpoints. While problems persist, such as the quick rate of technology breakthroughs and varying degrees of scientific literacy, it is critical to prioritise constant learning and communication to traverse these complexities effectively. Governments must continue improving their scientific literacy to meet the diverse problems and possibilities artificial intelligence presents.
5,461.4
2024-04-13T00:00:00.000
[ "Computer Science", "Political Science" ]
Diffractive digital lensless holographic microscopy with fine spectral tuning We experimentally demonstrate an all-diffractive optical setup for digital lensless holographic microscopy with easy wavelength line selection and micrometric resolution. In the proposed system, an ultrashort laser pulse is focused with a diffractive lens (DL) onto a pinhole of diameter close to its central wavelength to achieve a highly spatially coherent illumination cone as well as a spectral line with narrow width. To scan the complete spectrum of the light source the DL is displaced with respect to the pinhole plane. The proposed microscopy setup allows us to spectrally separate contributions from different sections of a sample, which may be attractive for several applications in life sciences. © 2013 Optical Society of America OCIS codes: (050.1940) Diffraction; (090.1995) Digital holography; (110.0180) Microscopy. http://dx.doi.org/10.1364/OL.38.002107 Recent advances in digital holographic microscopy show that 3D images of static or moving objects with micrometer resolution are possible by using only a CCD-or CMOS-based camera within a simply in-line lensless optical setup.The technique known as digital lensless holographic microscopy (DLHM) [1] combines both in-line lensless geometry proposed by Gabor in the late 1940s [2] and digital holography approaches for data processing [3].Basically, in DLHM the light scattered from the object/sample interferes with the reference incident light to generate a holographic pattern at the camera plane.Then, the recorded intensity is numerically processed, and the object wavefront is reconstructed.By using digital in-line lensless holographic microscopy technique, several applications have been demonstrated, which include but are not limited to showing in situ organisms and their motion in plankton with micrometer resolution [4], investigating microbial life forms in the Canadian High Arctic [5], tracking micrometer sized particles with high NA [6], analyzing transparent phase objects under femtosecond illumination [7], and successfully imaging latex microspheres, optical fiber, and cancer cells with light sources of varying spatial coherence [8]. In this context, multispectral imaging has become a tool of increasing use, i.e., as a way to discriminate among several components (e.g., proteins or genes) or functions within the cell [9], or to enlarge the range of application of phase-based digital holographic techniques [10].In DLHM multispectral illumination has been achieved not only with coherent sources like a tunable He-Ne laser [11] or a set of lasers that emit at different wavelength lines [12], but also with incoherent light-emitting diodes [8].However, the above implementations allow only a discrete set of wavelength lines with full width at half-maximum (FWHM), typically on the order of some tens of nanometers.In contrast, it is well known that the ability to separate spectral contributions from multiple components of a sample depends largely on the number of images collected at different wavelength bands as well as on the width of these wavelength bands.In terms of the spectral image cube or lambda stack, it means that the finer the spectral slicing of the available broadband spectral source the better. 
In this Letter, we show an extremely compact and simple optical setup for all-diffractive multispectral DLHM with complete spectral tuning over the whole spectrum of the light source. The optical setup is based on the substitution of the microscopy objective given in most typical in-line DLHM geometries [1,12] by a kinoform DL. Here, it should be pointed out that a DL can be regarded as an optical element that focuses the light by diffraction. Its focal length F_λ varies inversely with the wavelength of the incident light as F_λ = F_0 λ_0/λ, where F_0 is the main focal length for the wavelength λ_0. In our proposal, the pinhole that is placed at a given focal plane of the DL has a dual role. On one hand, it acts as a spatial filter of the light, removing higher-order aberrations and phase-front distortions, thus providing a secondary illuminating spherical wave. On the other hand, moving the DL with respect to the pinhole plane allows us to select very narrow wavelength bands within the spectrum of the incident light. The combination of a DL (or a Fresnel zone plate) with a small pinhole can be thought of as a linear spectrometer whose resolving power R, for our experimental conditions (pinhole diameter not larger than the diameter of the Airy disc), can be estimated by the expression R = D²/(8 F_0 λ_0), where D is the diameter of the DL [13,14]. The proposed optical setup has several desirable features for multispectral DLHM. The present optical setup allows for recording quasi-monochromatic digital holograms even when using broadband sources (i.e., high-pressure xenon-mercury lamps, broadband light-emitting diodes, or ultrashort pulsed radiation). The inherent diffractive nature of our system makes it suitable for applications in the extreme ultraviolet (XUV) or x-ray spectral region, where microscopy objectives cannot be used owing to the absorption of materials. In addition, the small thickness of the DLs reduces considerably the dimension of the microscopy device. In Fig. 1 a schematic diagram of the optical setup is shown. For the experiment, the light source is a Ti-Sa femtosecond laser that emits pulses of about 12 fs intensity FWHM at a 75 MHz repetition rate, centered at λ_0 = 800 nm with a 100 nm FWHM spectral bandwidth. The light is focused with a kinoform DL down onto a pinhole with a diameter d = 1 μm, slightly larger than the largest wavelength, which is 0.88 μm in this experiment. The DL has a diameter D = 30 mm, whereas its main focal length is F_0 = 50 mm for λ_0 = 800 nm. To cover the whole working area of the DL, the ultrashort pulse was previously expanded by means of a 16× all-mirror beam expander. In addition, suitable neutral filters are used to attenuate the laser radiation conveniently.
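To make the two relations above concrete, the following is a minimal Python sketch (an illustration only, not code from the Letter) that evaluates the wavelength-dependent focal length, the axial DL displacement needed to keep the focus on the fixed pinhole plane, and the estimated resolving power for the parameters quoted in the text (F_0 = 50 mm at λ_0 = 800 nm, D = 30 mm). The helper names are illustrative.

```python
# Wavelength selection with a diffractive lens (DL), using the relations above.
F0 = 50e-3        # main focal length of the DL at lambda0 [m]
lambda0 = 800e-9  # design wavelength [m]
D = 30e-3         # DL diameter [m]

def focal_length(lam):
    """F_lambda = F0 * lambda0 / lambda (focal length scales inversely with wavelength)."""
    return F0 * lambda0 / lam

def dl_displacement(lam):
    """Axial shift of the DL, relative to its position at lambda0, that places the
    focus for wavelength `lam` on the fixed pinhole plane."""
    return focal_length(lam) - F0

# Estimated resolving power of the DL + pinhole acting as a linear spectrometer.
R = D**2 / (8 * F0 * lambda0)   # ~2800 for these values, matching the text

for lam_nm in (740, 780, 800, 860):
    lam = lam_nm * 1e-9
    print(f"{lam_nm} nm: F = {focal_length(lam)*1e3:.2f} mm, "
          f"DL shift = {dl_displacement(lam)*1e3:+.2f} mm")
print(f"Resolving power R ≈ {R:.0f}")
```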
The spherical wavefront illuminates the sample that is placed at a fixed distance z from the pinhole. On the surface of a digital screen (CCD camera, model Balser A120F) located at a distance L = 15 mm from the pinhole, the waves scattered by the sample are superimposed with the portions of the spherical waves that travel with no perturbation. The intensity with no sample is pixelwise subtracted from the intensity of the in-line hologram. Then, the obtained intensity is transferred to a computer for further processing. Owing to the high transmittance of the samples used for DLHM, the intensity of the scattered wave can be neglected. Furthermore, as spherical illumination is utilized, the presence of the twin images does not introduce any nuisance into the reconstructed images [1]. The reconstruction of the holograms is numerically carried out with the help of the Fresnel-Kirchhoff diffraction integral. Details of this process can be found elsewhere, see for instance [15]. In order to show the potential of our microscopy setup to gather high-quality images of the sample at different wavelengths, we filtered different spectral lines of the pulse spectrum. To do that, the DL was axially moved with respect to the pinhole position with a micrometer screw gauge (R ≅ 2800). At this point, the CCD camera was temporarily substituted by a commercial spectrometer device to record the spectral lines corresponding to each position of the DL. In particular, eight spectral lines stepped 20 nm from 740 to 860 nm were obtained. Once the positions of the DL were determined, the CCD camera was restored into the optical setup to record the intensity of each spherical wave without a sample. Finally, the sample was introduced and fixed into the optical setup (at the distance z = 5.8 mm from the pinhole plane). In these conditions, after moving back the DL to the above-determined positions, the corresponding holograms were recorded. In this work, a paraffin wax section of the head of a fruit fly (Drosophila melanogaster) was used as a sample. The section of the head is about 800 μm wide and 10 μm in thickness. In Fig. 2, an image of the reconstructed hologram for the spectral line at 780 nm is shown. From Fig. 2 the complex structure of the fruit fly head can be clearly seen. Some structures of the head with dimensions on the order of a few micrometers (e.g., ommatidia), or even less than one micrometer (e.g., cornea spikes) are highlighted in the top-left inset of Fig. 2. This inset is obtained by 2× magnifying the encircled region of the image of the section of the head. In the bottom-left part of Fig. 2, the profile of the pulse spectrum together with the selected spectral line of 6 nm FWHM are also given as an inset. On the other hand, having good quality images of a sample at different wavelengths could be useful, e.g., to obtain further information on certain optical properties (e.g., reflectance/absorbance of specific regions of the sample). With the proposed microscope, one can spectrally scan the sample with a thin spectral line to get a large number of high-quality images, see for instance Fig. 2. Hence, the possibility of obtaining the so-called lambda stack with several applications in multispectral imaging is apparent [9]. Please note that the hologram reconstruction process guarantees the size of the image coordinates at the reconstruction plane to be independent of the wavelength [12]. This is mandatory to keep the same scale of images for the different wavelengths at the reconstruction plane.
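The numerical reconstruction step is only referenced above, so the following is a hedged sketch of that kind of processing. It uses a single-FFT Fresnel approximation as a stand-in for the Fresnel-Kirchhoff integral actually employed in the Letter, and the pixel pitch and array size in the example call are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def reconstruct_intensity(hologram, reference, wavelength, pixel_pitch, z):
    """Back-propagate a contrast hologram by a single-FFT Fresnel transform.
    hologram, reference: square 2D intensity arrays recorded with and without the sample;
    pixel_pitch: camera pixel size [m]; z: reconstruction distance [m]."""
    contrast = hologram.astype(float) - reference.astype(float)  # pixelwise subtraction, as in the text
    n = contrast.shape[0]
    k = 2.0 * np.pi / wavelength
    coords = (np.arange(n) - n // 2) * pixel_pitch
    X, Y = np.meshgrid(coords, coords)
    chirp = np.exp(1j * k * (X**2 + Y**2) / (2.0 * z))            # quadratic phase factor
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(contrast * chirp)))
    return np.abs(field) ** 2                                     # reconstructed intensity

# Illustrative call: 1024x1024 camera, 6.7 um pixels, 780 nm line, z = 5.8 mm
# img = reconstruct_intensity(holo, ref, 780e-9, 6.7e-6, 5.8e-3)
```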
To see the spectral response of our sample to multiwavelength illumination, we merged three images corresponding to the spectral lines at 860, 800, and 740 nm. From this fusing, a pseudo-colored RGB image was generated by associating the 860 nm image with the red channel, the 800 nm image with the green channel, and the 740 nm image with the blue channel. In Fig. 3, the resulting image is shown. The spectral lines from the pulse spectrum that were used to illuminate the sample are included within the color bar of Fig. 3. After a visual inspection of Fig. 3, one can see that most regions of the head of the fruit fly are wavelength dependent, with a strong preference for the 860 nm wavelength, whereas the white-colored ommatidia at the border of the section of the head show an even response. We believe that the power and simplicity of the proposed in-line lensless microscope might be suited for operating within several spectral ranges of the electromagnetic spectrum, at micrometer spatial resolution. More compact and dynamic versions of it can be implemented thanks to the use of, e.g., photolithography techniques and/or spatial light modulators. This research was funded by the Spanish Ministerio de Ciencia e Innovación and the Generalitat Valenciana through the Consolider Programme (SAUUL CSD2007-00013) and the Prometeo Excellence Programme (PROMETEO/2012/021). The authors are also very grateful to the SCIC of the Universitat Jaume I for the use of the femtosecond laser. J. Garcia-Sucerquia gratefully acknowledges the Visiting Scholar Fellowship from the Universidad de Valencia, Colciencias Grant No. 110205024, and UNAL Grant Nos. 110201003 and 110201004. Fig. 2. Reconstructed hologram of a section of the head of a fruit fly for the spectral line centered at 780 nm. Top left: details of the head structure. Bottom left: profile of the pulse spectrum and the selected spectral line. Fig. 3. Pseudo-colored RGB image of a section of the head of the fruit fly obtained from the merging of images corresponding to the spectral lines at 860, 800, and 740 nm.
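As an illustration of the pseudo-colored RGB merging described above, the small helper below stacks three reconstructions into the red, green and blue channels (860, 800 and 740 nm, respectively). The per-channel normalization is an assumption, since the Letter does not state its scaling.

```python
import numpy as np

def merge_rgb(img_860, img_800, img_740):
    """Pseudo-colored RGB image: 860 nm -> red, 800 nm -> green, 740 nm -> blue.
    Each channel is normalized independently to [0, 1] (an assumed convention)."""
    def norm(a):
        a = a.astype(float)
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    return np.dstack([norm(img_860), norm(img_800), norm(img_740)])

# Usage: rgb = merge_rgb(rec_860, rec_800, rec_740); then display with any image viewer.
```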
2,569
2013-06-15T00:00:00.000
[ "Physics" ]
Development and Characterization of Ceramic Matrix Composite (CMC) from Nigerian Kankara Kaolin and Gray Cast Iron Filling This study developed a ceramic composite material (CMC) for use as a refractory material from "Kankara" clay (kaolin) as a matrix material mixed with gray cast iron (GCI) as reinforcement. The CMCs were prepared by varying the percentage by weight of the gray cast iron using 5, 10, 15, 20, 25, 30, 35, 40 and 45 wt%. Tests were conducted on the developed CMC, using standard test techniques, to determine the physical and mechanical properties of the produced composites. The results for mechanical properties showed improvement in the hardness value from 47% at 5% GCI content to 94% at 45% GCI content; the compressive strength improved from 3.11% at 5% GCI to a peak of 7.15% at 25% GCI and then decreased to 3.74% at 45% GCI content, while the ultimate tensile strength improved from 0.75% at 5% GCI to a peak of 1.87% at 25% GCI and down to 1.34% at 45% GCI content. Equally, there is an increase in bulk density from 1.74% for 5% GCI content to 2.09% for 45% GCI content. The linear shrinkage reduced from 11.57% to 1.15%; water absorption also reduced from 33.68% to 15.20%; apparent porosity too reduced from 42.2% to 16.02%. However, cold crushing strength initially increased with increase in GCI content from 3.89 to a peak of 13.32 V for 25% GCI content and progressively dropped to a value of 5.25 V at 45% GCI content. All the values obtained from the blends are within the recommended values for kiln shelves. However, the CMC developed on 25% GCI content showed the best combination of both mechanical and physical properties required of a good material for the production of kiln shelves. Introduction With the rapid development of the ceramic industry in Nigeria, the high demand for props, shelves, and shelf bars (used in day-to-day ceramics manufacture) makes the development of local materials for the production of kiln furniture very imperative. Ceramic matrix composites (CMC), as the name implies, combine reinforcing ceramic or metal phases in a ceramic matrix to create materials with new and superior properties [1]. CMCs have been and are being developed to overcome the hitherto intensive brittleness and lack of reliability of monolithic ceramics, thus making them suitable for use as structural parts in varying applications and environments like rocket and engine construction; gas turbines for power plants; heat shields for space vehicles, fusion reactor first walls, aircraft brakes, heat treatment for nacres, etc. According to [1], ceramic matrix composites possess additional qualities like high mechanical strength at high temperatures, high toughness, light stiffness and high corrosion resistance at high temperatures. It was reported by [2] that most of the pottery establishments in Nigeria depended on foreign countries for the supply of kiln shelves and other kiln furniture. As a result of this, scarce foreign reserves are expended in purchasing these items. Apart from this, there is a high risk of product loss/damage in transit due to the degree of fragility of ceramic wares. In addition, the time spent in bringing in the foreign products contributes to the ineffectiveness of putting the equipment to use when required.
The essence of this work is to develop ceramic composites using Kankara clay (kaolin) as a matrix and grey iron powder as a reinforcement, which can be used for the production of kiln furniture that can withstand high temperature. Materials & Methods Kankara kaolin used as the aluminosilicate in the matrix was excavated from a clay pit located in Kankara village of Katsina state.Gray cast iron powder was purchased from National Metallurgical Development centre, Jos.Other materials used included water for the mixture.The equipment used in this research work include: Soil Test Compression Machine, Universal Testing Machine, Density Bottles, Furnace, Charpy Impact Machine, Shore A Durometer hardness testing machine, Vernier Caliper.Digital balance and Thermometer. Preparation of Samples After collecting the kaolin from its deposit in Kankara, Katsina State of Nigeria and grey iron powder from National Metallurgical Development Centre, Jos.The kaolin was beneficiated by wet screening it through a 75 µm mesh [3]- [5].Other processing routes that followed include calcining, crushing, grinding, sieving and mixing of the refractory batch.The grey cast iron was also grated into the required grain sizes of 75 µm mesh [3] (see Figure 1 and Figure 2) through the process of powder metallurgy.Then the grey cast iron and kankara clay in powder form were weighed out in varied proportions and then poured into a clean basin and mixed thoroughly.Then a measured quantity of water was added followed with continuous mixing to paste form.After mixing, the composite blend was casted under pressure into wooden moulds of 100 × 100 mm dimension.Casted samples were then air dried (Figure 3 and Figure 4) in fourteen days and sun dried for two days before firing.The composite samples were fired to temperature of 1280˚C for 5 hours.Then they were allowed to cool in the furnace.This was done to cure the composite materials at the said temperature.The composition of blends is shown Table 1. Method of Characterization The type of density most commonly determined for a ceramic object is the bulk density, which is the weight divided by the bulk volume.The density and porosity of the samples were determined using Archimedes' technique (ASTM C 20-87).This method involves the determination of the following properties of sintered products: apparent porosity, water absorption, and bulk density.The dry weights of sintered samples were recorded as the initial step of this method.Each test specimen was boiled for 2 hours in water.After boiling, each specimen was cooled to room temperature while still completely covered with water, and immersed in water for a minimum of 12 hours before weighing.The suspended weight in water of each specimen was measured on the balance with Archimedes' apparatus (PrecisaXP22OA).After determining the suspended weight, the specimen was wiped with a wet sponge to remove drops of water from the surface.The damp specimen was then weighed to determine its saturated weight.The density and other properties were determined based on these three weights.In this case dry weight was noted as (D).Suspended weight was noted as (S) while saturated weight was (W).Volume of specimen = W 2 /d where d = density of mercury. 
The apparent porosity is then calculated as: Apparent porosity (%) = (W − D)/(W − S) × 100, where W − D = actual volume of open pores of the specimen and W − S = external volume of the specimen. The microstructural features of the as-received powders (Kankara clay and grey iron powder), as well as those of the developed composites obtained from them, were investigated using a scanning electron microscope (model SEM-Philips XL-30 SFEG). SEM tests, which included elemental analysis of the as-received powders and of the ceramic composite kiln shelf and prop samples using SEM-Energy Dispersive Spectrometry (SEM-EDS), were carried out in Johannesburg, South Africa. The morphological characterization of the grains and pores was evaluated on polished surfaces as shown in Figures 5-14. To determine the linear shrinkage, the green samples, after press moulding, were allowed to dry until they could be removed conveniently from the mould, and their dimensions were taken in that condition. They were then fired in the furnace to a temperature of 1600°C; upon cooling, the new dimensions were measured using the vernier caliper and recorded. The linear shrinkages were then calculated as a percentage of the original green length: Linear shrinkage (%) = (L_B − L_D)/L_B × 100, where L_B = dimension of the green samples and L_D = dimension of the fired samples. The cold crushing strength of the composite was determined as follows: an asbestos board of about 5 mm thickness was placed between the platens of the press and the bearing faces of the test pieces, which were placed centrally on the platen. Load was applied at a rate of 20 kN/minute using a hydraulic press. The maximum load was recorded as the crushing load. The crushing strength was then calculated as the crushing load divided by the cross-sectional area of the test piece. The hardness test of the composites is based on the relative resistance of the surface to indentation by an indenter of specified dimension under a specified load. Hardness of the composites was determined, according to ASTM D2240 / ISO 7619, by a direct-reading durometer manufactured by Francisco Munoz Irles, C.B. (model: 5019, Serial No: 01554). The durometer measures in Shore units. This test was carried out at the Nigerian Institute for Leather and Science Technology (NILEST), Samaru, Zaria. The bulk densities of the developed samples were determined using the ratio of their weight to their volume. The volumes of the samples were determined by the products of their length, width and height. Then their masses were obtained using a digital balance. The densities were then calculated using Equation (5), bulk density = mass/volume [6]. The water absorption was determined as the percentage of water absorbed by the samples. After firing, the samples were weighed and their weight recorded. They were then soaked in water at room temperature for a period of 24 hours, after which their weights were again taken. Equation (6) is then used to obtain the percentage of water absorbed: Water absorption (%) = (W_w − W_d)/W_d × 100, where W_d is the dry weight before soaking and W_w is the wet weight after soaking.
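For reference, here is a minimal sketch of how the three ASTM C 20 weights (dry D, suspended S, saturated W) and the crushing load translate into the reported properties. It assumes water as the immersion liquid (density ≈ 1 g/cm³) and the standard load-over-area form for the crushing strength, since the exact equations are only referenced in the text; the numbers in the example call are illustrative, not measured values.

```python
def archimedes_properties(D, S, W, rho_liquid=1.0):
    """D: dry weight, S: suspended weight, W: saturated weight (all in g);
    rho_liquid: density of the immersion liquid in g/cm^3 (assumed water).
    Returns bulk density [g/cm^3], apparent porosity [%], water absorption [%]."""
    exterior_volume = (W - S) / rho_liquid   # external volume of the specimen
    open_pores = (W - D) / rho_liquid        # volume of open pores
    bulk_density = D / exterior_volume
    apparent_porosity = 100.0 * open_pores / exterior_volume
    water_absorption = 100.0 * (W - D) / D
    return bulk_density, apparent_porosity, water_absorption

def crushing_strength_mpa(max_load_kN, width_mm, depth_mm):
    """Cold crushing (or compressive) strength as maximum load over bearing area."""
    return max_load_kN * 1000.0 / (width_mm * depth_mm)   # N/mm^2 = MPa

# Illustrative numbers: a 100 mm x 100 mm face failing at 80 kN gives 8 MPa.
print(archimedes_properties(D=25.0, S=13.5, W=28.0))
print(crushing_strength_mpa(80.0, 100.0, 100.0))
```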
Apparent porosity was also determined using three different weights. The weight of the sample after air drying was measured and recorded as the dry weight (D); the sample was then oven dried to a steady weight (S). After this, the sample was soaked in water for a period of 24 hrs, and the new weight was measured and recorded as the wet weight (W). The apparent porosity was then calculated using Equation (7). The tensile strength test was also carried out using a soil test compression machine (Model 4839, S/No 1482). For the test, the specimen was placed diagonally between the platens of the machine. The maximum loads reached by the samples before failure were measured and the tensile strengths were calculated using the corresponding equation. The compressive test was also carried out on the soil test compression machine (Model 4839, S/No 1482). The test was similar to the cold crushing strength testing, but the samples were not soaked before testing. The maximum compressive loads reached by the samples before failure were measured and the compressive strengths were calculated as the maximum load divided by the cross-sectional area. The X-ray diffraction patterns of the composites were determined by X-ray analysis, which was carried out to determine the distribution of the various elements and phases in the samples. The test was carried out on a Philips X-ray diffractometer. Some X-ray diffractograms were taken using CoKα radiation at a scan speed of 3 m/min. Thermogravimetric analysis of the samples was carried out in a nitrogen atmosphere on a thermal analyzer (Perkin Elmer) at a heating rate of 10°C/min. The objective of thermal analysis is to study the effect of heating on the materials so that the stability of the materials at elevated temperature can be known for applications in various fields. In this method, a change in thermal stability was examined in terms of percentage weight loss as a function of temperature. At the same time, DTA involves comparing the precise temperature difference between a sample and an inert reference material. The particle sizes of the samples have been calculated employing the Scherrer equation: D = Kλ/(β cos θ), where θ is the angle between the incident and diffracted beams (degrees), β is the full width at half maximum (rad), D is the particle size of the sample (nm), λ is the wavelength of the X-rays, and K is the shape factor (typically taken as about 0.9). Results and Discussion The results of the research work are presented in Figures 15-23. Bulk Density and Porosity The results of the study of bulk density and apparent porosity of the developed composite material are shown in Figure 15 and Figure 16, respectively. From the results, it is seen that the bulk density of the developed material increased from 1.72 to 2.09 g/cm³ (amounting to about a 22% increase) with increase in the percentage of grey cast iron added. This increment is associated with the high density of the cast iron powder as compared to that of the clay. The variation in densities of the composite material is in agreement with the earlier work carried out by [7]. Figure 16 reveals that the apparent porosity of the developed material decreased with increase in the percentage of cast iron added to the material. The apparent porosity decreases from 45.33% to 16.07%. This can be associated with the role played by the melting of the cast iron powder into solids within the composite material, which reduces the number of pores present.
Water Absorption Water absorption values obtained for the developed composite materials are presented in Figure 17, this shows that the rate at which the developed composite materials absorb water decreased, from 36.07% to 16.94%, with increase in percentage of grey cast iron added.As expected, the introduction of grey cast iron reinforcement into the clay matrix lowers the rate at which it absorbs water. Linear Shrinkages Figure 18 presents the results of linear shrinkage calculations for the developed composite materials the figure shows a general decrease in linear shrinkage with increase in the percentage of cast iron added.The linear shrinkages decreased from 12.77% to 1.55%, amounting to about 81% improvement in linear shrinkage performance.It is worthy of note that the new lengths recorded do not take into account the outpour of excess cast iron melt, therefore negative values were avoided as they were recorded in the pilot study.The length then begins to increase with further addition of cast iron to the material.This can be explained by the principle of saturation of a mixture.The mixture of clay and cast iron reach saturation level which lead to the outpour of excess cast iron melt, thereby increasing the samples length.The amount of outpour increases with increase in the percentage of cast iron added to the composite material which accounts for increase in length of the composite materials after firing. Refractoriness Figure 23 shows that the refractoriness of the developed composite materials decrease with increase in the percentage of cast iron.Although the melting point of cast iron is high and the refractoriness of pure kaolin was also high, the resulting mixture of the two materials produces material with lower refractoriness.This can be associated to thermal instability resulting from the mixture of the two materials. Thermal Properties The DTA/TGA analysis revealed that initial weight loss (~10%) observed between 500˚C and 600˚C is attributable to the vaporization of the water from the samples, while degradation of the composites started at higher temperature, precisely after 550˚C above which thermal stability of samples gradually decreased and eventual degradation of the samples ensued. DTA curve shows that the temperature of maximal decomposition/ destruction was 520˚C -600˚C.The presence of endothermic effects in samples are results of two processes-dehydrogenation and evaporation of some non-cellulosic materials.This conclusion was confirmed by the decreased mass of the sample.The DTA curve also confirmed these results. The endothermic effects observed in the temperature range indicated above were probably as a result of the bonds formed in the clay backbone.Finally, it is worthy of note that the thermal analysis curves reveal that samples are stable until around 700˚C.This is in agreement with values of some other clay materials reported in literature. All the composites showed a small (10%) weight loss around 500˚C, which can be attributed to the evaporation of water.The weight loss rate gradually increased above 600˚C and distinct weight loss appearing between 500˚C to 600˚C.The results showed that addition of grey iron particles to kankara does not alter the thermal decomposition process since at temperatures greater than 900˚C the samples retained 80%wt.From Table 1. 
T_10% (°C) = temperature at which 10% of the original weight is lost; T_90% (°C) = temperature at which 90% of the original weight is lost; T_max (°C) = temperature at maximum decomposition. Conclusions From the above results and discussion, the following conclusions are made: 1. Characterization of the Kankara clay (kaolin) and grey iron powder was successfully conducted. 2. The production of a ceramic composite refractory using the powder metallurgy technique, blending by varying the percentage of grey iron powder from 5-45 wt% in the Kankara clay (kaolin), is attainable, with good refractory properties. 3. The mechanical property test results of hardness value, cold crushing strength, ultimate tensile strength and ultimate compressive strength on the developed composite refractory show an increase in the mechanical properties with increasing grey cast iron content up to a weight content of 25%. Beyond this wt%, a decrease in these mechanical properties is observed. 4. The physical property test results of bulk density, porosity, linear shrinkage and water absorption on the developed composite refractory show an increase in bulk density with increasing grey cast iron content. However, porosity, linear shrinkage and water absorption decrease with increasing grey cast iron content. 5. Microstructural analysis (i.e., scanning electron microscopy (SEM), X-ray diffraction (XRD) and X-ray fluorescence (XRF)) has shown an even spread of the elements of the composite. 6. The utilization of Kankara clay and gray cast iron for the production of a composite refractory has been successfully carried out, with 25 wt% grey cast iron developed into the composite refractory. 7. The developed composite is observed to be plastic up to 25 wt% grey cast iron content. Figure 1. Mixing of the raw materials in proportion: gray cast iron and Kankara clay (kaolin). Figure 3. Test samples from the mixture of Kankara clay and gray cast iron, prepared to identify the best composition to be used to form the kiln furniture. Figure 4. Test samples from the mixture of Kankara clay (kaolin) and gray cast iron; these samples indicate the best blend to be used. Figure 15. Variation of bulk density with GCI content. Table 1. Composition blends of composite materials produced.
4,100.6
2016-01-13T00:00:00.000
[ "Materials Science", "Engineering" ]
Electrospun fibre colorimetric probe based on gold nanoparticles for on-site detection of 17 β-estradiol associated with dairy farming effluents An on-site colorimetric probe, based on gold nanoparticles incorporated into electrospun polystyrene nanofibres, for the detection of oestrogenic compounds, as represented by 17β-estradiol, in dairy effluents is presented. The probe exhibited a significant absorption peak at 542 nm, ascribed to surface plasmon resonance of Au nanoparticles (NPs). With increasing 17β-estradiol concentration the surface plasmon resonance (SPR) band shifted to a longer wavelength accompanied by a visual colour change from shades of pink to blue. The visible cut-off concentration was 100 ng/ml. Upon exposure to cholesterol and a series of compounds known to induce oestrogenic activity, p,p’-DDE, deltamethrin, 4-tert-octylphenol and nonylphenol, only 17β-estradiol could induce a pink colour observable by the naked eye, which is indicative that the proposed gold nanoparticles–incorporated electrospun polystyrene nanofibres could be employed as highly selective colorimetric strips to detect 17β-estradiol, with minor interference from other endocrine-disrupting compounds usually present in dairy effluents. The facile nature of the colorimetric probe and potential application in monitoring water quality was demonstrated. INTRODUCTION Dairy-farming wastewater contains significant concentrations of natural steroidal oestrogens such as 17α-estradiol, 17β-estradiol, and estrone, which can potentially contaminate surface and ground water (Hanselman et al., 2006;Zheng et al., 2008).Oestrogenic contamination of surface waters is of concern for fish as it is associated with feminisation of male fish, reproductive abnormalities and skewed sex ratios (Oishi, 2010;Gadd et al., 2010).Within the oestrogenic contaminants, 17β-estradiol (E 2 ) has emerged as a target for recognition and analysis due to its dual role (Chiu et al., 2008).17β-estradiol plays an important function during the various stages of mammalian development, including growth and reproduction.However, E 2 is also the most potent naturallyoccurring oestrogen as it has the highest oestrogenic activity at ng/ℓ concentrations (Hanselman et al., 2003).Although several detectors, in particular, flame ionisation or mass spectrometric detection coupled with GC, have had significant achievements in the determination and quantification of oestrogenic steroid hormones in aqueous samples (Wang, 2011), they are expensive, sophisticated and require extensive sample handling.Owing to the current detection drawbacks, simple, rapid, reliable and cheap protocols are highly desirable.Technically, the non-invasive analytical devices based on colorimetric principles not only take advantage of colour for rapid detection but also allow for simplicity as they are cost effective (Chigome and Torto, 2011).Specifically, metallic nanoparticle-based colorimetric assays allow for on-site detection (minimising sample handling) rendering both quantitative and qualitative analysis in biological, biotechnological or environmental matrices amenable (Ma et al., 2011). 
Of particular interest are the gold nanoparticles (Au NPs)based colorimetric approaches as they take advantage of the surface plasmon resonance (SPR) dependent colour changes (Romeo et al., 2012;Zhao et al., 2008).Gold nanoparticles exhibit distinct and well-defined colours that are easily perceptible by the naked eye (Jones et al., 2011;Jans and Huo, 2012;Saquing et al., 2009).Generally, the colorimetric mechanisms for Au NPs are based on their aggregation or dispersion with guest molecules, i.e., analytes (Ding et al., 2012).In principle, there should be a balance between the inter-particle attractive and repulsive forces.The relationship between these forces determines whether Au NPs are stabilised or aggregated and the resulting solution colour is central to the design of Au NP-based colorimetric systems (Zhao et al., 2008).However, in solution, Au NPs have a tendency to aggregate, particularly in the presence of salts and some biological molecules such as proteins (Gao et al., 2012).To address the instability of Au NPs in solution, polymers may be integrated into the system (Mayer, 1998). Several features play an important role in the integration of polymers into nanocomposite systems.Two important aspects to consider, as highlighted in a review by Grubbs, are how the polymer ligands affect accessibility of the inorganic nanoparticle surface, and compatibility of the entire polymer-coated nanoparticle with a broad variety of other materials (Grubbs, 2007).The nanocomposites may be used in solution (Zhang et , 2010) or in a solid platform (Bai et al., 2008).Various useful methods have been developed to fabricate hybrid organicinorganic systems, for example, Langmuir-Blodgett assembly, layer-by-layer assembly, chemical deposition and spin-coating (Lu et al., 2005).However, these approaches are often challenged when it comes to producing stable composites.In this study, a facile electrospinning technique is employed for the fabrication of polystyrene-gold polymer composite fibres.The electrospun composite fibres were employed as test strips for the colorimetric detection of 17β-estradiol.Upon interaction with the analyte, the white fibre mat changed colour to either a shade of blue or pink, at higher and lower concentrations of 17β-estradiol, respectively. Synthesis of Au-PS composite In a typical procedure, predetermined quantities of PS and Au salt were dissolved in DMF/THF (8:2 v/v) in a sealed vial.The PS mixture was left to stir for 3 to 4 h to ensure complete dissolution, after which NaBH 4 was added to effect the reduction of the metal salt precursor at ambient conditions.The solution was left to stir overnight to eliminate air bubbles and then was electrospun.The electrospinning set-up included a syringe pump operated at a flow rate of 0.300 mℓ/h and a high-voltage power supply with a positive polarity.The optimum voltage applied was 22.2 kV. Characterisation of the polymer composites Polystyrene-gold composite solution was characterised employing the transmission electron microscope (TEM) prior to electrospinning.The morphology of the fibres was evaluated with a Jeol JSM -700 F, field emission scanning electron microscope (FE-SEM) operating at 30 kV after gold coating of the sample.UV-visible absorption spectra were recorded using a Perkin Elmer Lambda 25 UV/Vis spectrophotometer. 
Matrix spike experiments The dairy effluents were collected from a local cattle farm. Grab samples were collected from the milking shed drains and preserved by adding concentrated sulphuric acid (pH 3). The dairy water samples were stored at <4°C overnight, then thawed gradually to room temperature and used within 24 h. The sample matrix was spiked with 17β-estradiol and other environmental chemicals with oestrogenic activity, including p,p'-DDE, deltamethrin, 4-tert-octylphenol and nonylphenol, as well as with cholesterol, so as to evaluate the colorimetric response of the Au-PS NP probe strips. Synthesis of Au-PS composite Polystyrene-stabilised gold nanoparticles were synthesised by in-situ reduction of Au3+ with sodium borohydride. Sodium borohydride has been the choice of reducing agent for Au and Ag nanoparticles because of its greater reducing power than other reducing agents, such as hydrazine and ascorbic acid, which give larger nanoparticles (Seo et al., 2009). It has been shown that in a system where species do not have a high affinity towards Au, BH4− ions bind to the surface of gold nanoparticles and form a negatively charged layer that contributes towards stabilising the nanoparticles (Fig. 1). However, due to secondary interactions with the environment, BH4− ions are generally not efficient stabilisers (Olenin and Lisichkin, 2011; Uehara, 2010). For instance, BH4− ions adsorbed on the surface of nanoparticles usually react with water molecules, resulting in aggregation of nanoparticles. For this study, polystyrene was used to stabilise the nanoparticles against aggregation. Polymers (e.g. polystyrene), including small molecules and polyelectrolytes, have been commonly used to stabilise nanoparticles through steric, electrostatic and electrosteric (a combination of electrostatic and steric) interactions, respectively (Zhang et al., 2011). Characterisation of the polymer composites The prepared PS-Au NPs showed an absorption peak at 542 nm, which was ascribed to the surface plasmon resonance of Au NPs. The interaction of E2 with the PS-Au NPs induced aggregation of the PS-Au NPs and resulted in the SPR absorption band shifting to a longer wavelength (Fig. 2A). The TEM images exhibited spherical and triangular-plate PS-Au NPs (Fig. 2B), while the 17β-estradiol-stimulated aggregation of PS-Au NPs was further verified in Fig. 2C. Polystyrene-gold nanoparticle composite solutions were electrospun to give white fibre mats. In order to attain intense changes in colour of PS-Au NPs in response to E2, the effect of the concentration of Au NPs in the polymer solution on the morphology of the electrospun fibres was investigated. The nanoparticle-to-polystyrene weight ratios were 1:40, 1:20 and 1:10. The morphology of the PS-Au NP electrospun fibres was observed with SEM. The scanning electron microscope images showed a decrease in fibre diameter with increasing concentration of Au NPs (Fig. 3). The observed decrease in diameter of the PS-Au NP composite nanofibres with increasing Au NP content may be due to the increased conductivity of the nanocomposite solutions (Kim and Ahn, 2008). The ratio 1:40 gave the best colour changes (Fig. 3B insert), and was therefore chosen to give the optimum Au NP concentration. At a higher magnification, a high-resolution SEM image confirms the non-beaded morphology of electrospun PS-Au NP composite fibres (Fig. 4). The insert in Fig.
4 exhibits encapsulated and dispersed Au NPs within the PS electrospun fibre. The assumption is that the Au NPs are dispersed as small clusters rather than as individual nanoparticles. The instability of the gold nanoparticle clusters drove the colorimetric mechanism for the naked-eye detection of 17β-estradiol. The electrospun fibres were then cut into strips that were employed as colorimetric probes. The probe strips were exposed to a series of 17β-estradiol concentrations. Figure 5 presents a profile of the colorimetric response of the probe strips towards the various E2 concentrations. It was observed that with increasing E2 concentrations (50 ng/mℓ to 1 000 µg/mℓ) the colour of the probe changed gradually from white to shades of pink and eventually to shades of blue at higher E2 concentrations. With an increase in E2 concentration the surface plasmon resonance (SPR) band shifted to a longer wavelength (Fig. 5). The surface plasmon resonance band shift is usually accompanied by a visual colour change, from pink to blue. It was also observed that upon interaction of the probe strip, even at ng/mℓ 17β-estradiol concentrations, the clusters tended to agglomerate further and became more visible on the fibre surface. Figure 6A illustrates typical agglomeration of Au NPs upon interacting with 800 µg/mℓ E2, while the inset depicts a blue shade indicative of higher E2 concentrations. Figure 6B, on the other hand, is a typical SEM image showing that lower E2 concentrations induced relatively less visible Au NP agglomerates (as shown by the white clumps). The Figure 6B inset is an optical photograph of the probe showing a typical pink colour for lower E2 concentrations. To evaluate the performance of PS-Au NP probes, the strips were applied to spiked dairy effluents that were analysed without any further treatment. Individual matrix samples were spiked with 200 ng/mℓ of E2 as well as with cholesterol, p,p'-DDE, deltamethrin, 4-tert-octylphenol and nonylphenol, as these compounds are known to induce oestrogenic activity. Although cholesterol does not have oestrogenic activity, it has been found at higher concentrations relative to oestrogens in effluent from wastewater treatment plants (Oishi, 2010). 17β-estradiol and cholesterol were therefore selected as representatives of oestrogens and other steroids, respectively. It was observed that the probe changed from white to a brown shade on exposure to the unspiked matrix. Upon interacting with the different analytes, the probe strip turned pink with E2, while with p,p'-DDE, deltamethrin, 4-tert-octylphenol, nonylphenol and cholesterol the probe showed a similar brownish colour as with the free matrix (Fig. 7). The probe strips were exposed to the analytes for 120 s. Considering the observed colour changes, the developed probe showed good selectivity towards E2.
CONCLUSION A simple electrospun colorimetric probe based on PS-Au NPs was developed. Taking advantage of several fascinating features of the PS-Au NP combination, the probe showed an excellent response towards 17β-estradiol and possessed high sensitivity, with a lowest naked-eye detection limit of 100 ng/mℓ. This colorimetric system does not employ any sophisticated instrumentation; therefore, it guarantees user-friendly, rapid and on-site detection of 17β-estradiol. Furthermore, relative to other nanoparticle-based colorimetric assays, this probe does not entail the use of complex nanoparticle modification (e.g. biological material). The probe warrants further investigation as it has shown potential to profile oestrogenic compounds in aqueous environments. As a way forward, a thiol-functionalised polystyrene may be used to enhance stability of the probe in harsh environmental conditions that include a wide range of temperature, concentration and pH. Figure 2. (A) Typical absorption spectra of the PS-Au NPs in the absence and in the presence of 17β-estradiol (PS-Au NP-E2); typical TEM images of PS-Au NPs (B) in the absence and (C) in the presence of 17β-estradiol (PS-Au NP-E2). Figure 7. UV-vis absorbance spectra and photographs showing colorimetric responses of strips upon exposure to 200 ng/mℓ 17β-estradiol, p,p'-DDE, deltamethrin, 4-tert-octylphenol, nonylphenol and cholesterol for 120 s at ambient temperature.
2,857.4
2014-12-02T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Experimental Investigation to Study the Feasibility of Fabricating Ultra-Conductive Copper Using a Hybrid Method Ultra-conductive copper (UCC) has an enormous potential to disrupt the existing electrical and electronic systems. Recent studies on carbon nanotubes (CNTs), a new class of materials, showed the ballistic conductance of electricity. Researchers around the world are able to demonstrate ultra-conductivity in micro- and millimeter-length sections using various processing techniques by embedding CNTs in the copper matrix. Although multiple methods promise the possibility of producing copper-based nanocomposites with gains in electrical conductivity, thus far, scaling up these results has been quite a challenge. We investigated a hybrid method of both hot-pressing followed by rolling in order to produce UCC wire. Cu/CNT billets of 1/10%, 1/15%, and 1/20% were hot-pressed and the conductivity results were compared to a hot-pressed pure copper billet. Our results indicated that this method is not a viable approach, as the gains in electrical conductivity are neutralized, followed by attenuation of the wire cross-section. Introduction Today, the world is heavily dependent on electricity in all aspects of life, from lighting a house to lighting the International Space Station and running a car on Earth to running a rover on Mars. Electricity and electronics are so integrated in our everyday lives that it is difficult to imagine the world without them. Not limited to current applications, more and more applications are coming forward every day, owing to the growing global electrification and advances in electrical and electronic technologies. Although the demand for electricity exceeds the supply [1], newer methods of harvesting electrical power at large are disrupting the world every day [2]. Apart from the generation of electricity, electrification also involves the transmission and distribution of electricity, which demands an efficient conductor. Traditionally, copper and aluminum are the most widely used electrical conductors. Aluminum is used for the transmission of electricity from power grids to substations and transformers. Copper, on the other hand, is used in appliances, house/industrial wiring, and other electrical connectors. Since the invention of voltaic cells, copper has played a pivotal role in electrical conduction. Copper, being a non-precious metal and having a wide range of properties, is the best material for electrical conduction. Copper is by far considered the most commonly used metal for electrical applications. According to the 2021 U.S. Geological Survey Mineral Commodities, 21% of mined copper is used for electrical or electronic products directly [3]. Meanwhile, according to the Copper Development Association, more than half of the copper produced is used for electrical or electronic applications [4]. In 1914, the United States Circular of the Bureau of Standards determined the international annealed copper standard (IACS) as 1.7241 µΩ·cm at 20 °C [5]. This is the value of the electrical resistivity of annealed copper. This value is still in effect, and the IACS electrical conductivity is 58 × 10⁶ S/m. Owing to the technological advancements in purifying and processing copper, oxygen-free high-purity copper guarantees 102% IACS. On the other hand, the electrical conductivity of fully cold worked copper is only 5.63 × 10⁷ S/m, which corresponds to 97% IACS.
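As a quick sanity check on the figures quoted above (an illustrative snippet, not part of the paper), conductivity is simply the reciprocal of resistivity, and %IACS is the ratio to the 1.7241 µΩ·cm annealed-copper standard.

```python
# Convert the IACS resistivity standard to conductivity and express a measured
# conductivity as %IACS.
RHO_IACS = 1.7241e-8              # ohm*m (1.7241 micro-ohm*cm at 20 degC)
SIGMA_IACS = 1.0 / RHO_IACS       # ~5.8e7 S/m, i.e. 100% IACS

def percent_iacs(sigma_s_per_m):
    return 100.0 * sigma_s_per_m / SIGMA_IACS

print(f"{SIGMA_IACS:.3e} S/m corresponds to 100% IACS")
print(f"Cold-worked copper: {percent_iacs(5.63e7):.1f}% IACS")   # ~97%, as quoted above
```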
We used IACS representation in this report to avoid misinterpretations with standard annealed copper, oxygen-free copper, and/or other forms of copper metal. Not much development has been made in this field since the earliest conductors (copper). As we have entered into new age of technology, where electricity and electronics are used almost in all aspects, better conductors that are capable of carrying higher currents and low resistance are in demand. Collins et al. [6] observed that in multi-walled carbon nanotubes (MWCNTs), all shells contribute to electrical conduction. This breakthrough discovery opened the channels for developing ultra-conductive materials by using MWCNTs as a highly conductive filler material. Frank et al. [7] reported the ability to quantize the conductance of MWCNTs and observed a high stable current density of >10 7 . They reported that nanotubes conduct electricity ballistically with no heat loss. Wei et al. [8] also recorded the ability of MWCNTs to carry electrical current of a magnitude higher than 10 9 A/cm 2 at elevated temperatures of 250 • C. They also reported that nanotubes show no observable failure and no measurable change in the resistance detected. Li et al. [9] reported, in their experimental observations, that MWCNTs have the capability to carry higher currents at a low bias voltage with perfect ohmic contacts. They reported that the behavior of MWCNTs is due to quasiballistic conductance of the inner walls of the CNT. Their experimental results showed higher conductance of MWCNT compared to the theoretical values of single-walled carbon nanotubes (SWCNTs). Hjortstam et al. [10], in their famous article "Can we achieve ultra-low resistivity in carbon nanotube-based metal composites?", explained the possible challenges in the paths of developing ultra-conductive materials. This article also guides researchers in the possible ways to work toward achieving ultra-conductive conductors. Nayfeh et al. [11] reported electrical conductivity of 113% IACS by copper nanocomposites produced by using the die-casting method. These results encourage researchers in pursuit of developing nanocomposites that exhibit higher electrical conductivity. Furthermore, other methods for fabricating ultra-conductive conductors have also been investigated [12]. Chen et al. [13], investigating a specific electrolytic co-deposition process, showed an increase in the electrical conductivity of copper/CNT composites by 200% of copper. Many others follow this route for fabricating copper/CNT composites for developing ultra-conductive materials. Cambridge University researchers are developing methods to produce copper/CNT composites by developing CNT-fiber bundles and infiltrating them into copper by using vapor deposition or electrodeposition. Recently, researchers from Shanghai Jiao Tong University [14] demonstrated the ability to produce ultra-conductive copper by embedding graphene into the copper. Graphene is applied on both sides of copper, and thus copper is sandwiched in between graphene, and a stack of such layers is pressed by creating an electron path channel. They showed that the material is able to measure 116% IACS. Researchers have been able to demonstrate ultra-conductivity, but the results are not consistent, and the length of the ultra-conductive zones are limited to the ranges of millimeter-long sections [11][12][13][14]. Although multiple methods are being investigated, thus far, scaling up these results has been quite a challenge [12,15]. 
In this paper, we report the experimental findings of using a hybrid method developed in order to synthesize copper/CNT nanocomposites with enhanced electrical conductivity. Materials and Methods The CNTs used in this study were MWCNTs, obtained from Applied Sciences Inc., Cedarville, Ohio. The CNTs were functionalized with magnesium, as described in U.S. Patent # 8,347,944. Copper powder of 99.5% purity was supplied by Alfa Aesar Inc., USA (lot no.: H05U037). The magnesium functionalization process involved the mixing of anhydrous MgCl 2 with deionized water in a ratio of 1:4 by weight to form MgCl 2 (H 2 O) x . Graphitized MWCNTs (25% (w/w)) were added to the MgCl 2 (H 2 O) x solution. This suspension was rigorously agitated using a mechanical agitator (Cole-Parmer, Vernon Hills, IL, USA) at 150 rpm for 6 h. Following mechanical stirring, the MgCl 2 (H 2 O) x /CNT suspension was agitated by the 20 kHz ultrasonication technique. The 20 kHz ultrasonication was performed with a solid probe (25 mm diameter, titanium alloy) (Sonics & Materials Inc., Newtown, CT, USA) connected to a 20 kHz oscillator (750 watt; Vibra-cell VCX 750, Sonics & Materials Inc., Newtown, CT, USA) for 6 h. The sonicator was operated in a pulsed mode, 10 s on and 20 s off. Following ultrasonication, the slurry of CNTs and MgCl 2 was heat-treated in two phases. During the Phase 1 heat treatment, the slurry was held at an elevated temperature of 200 • C for 4 h in a Isotemp Vacuum Oven 282A. The resultant product after the first stage of heating was a dense material, which was broken down into smaller fragments and prepared for the Phase 2 heat treatment. Phase 2 heating was conducted in a high-vacuum chamber. A two-step heat treatment method was programmed for this operation. In step one, the temperature of the furnace was increased at a rate of 5 • C per minute until it reached 300 • C and was held for 1 h; this was necessary for the MgCl 2 to decompose into magnesium and chlorine. After holding for an hour, step two heating took place, where the temperature of the furnace was programmed to increase until it reached 900 • C at 20 • C per minute. The decomposed chlorine evaporated during this phase. The material obtained after the second stage of heating was a softer material and easy to sift. The final powder-like product after sifting was the CNT precursor material, which was used in the hot-pressing operation. The CNT precursor material obtained after magnesium functionalization was mixed with the copper powder at different ratios. The CNT precursor material was mixed with copper powder in the ratios of 0%, 1/10% (w/w), 1/15% (w/w), and 1/20% (w/w). The mixtures of different concentrations were hot-pressed at 750 • C with a pressing pressure of 2000 psi. The billet obtained after hot-pressing had a diameter of ϕ15 mm and a height of 10 mm. The billets were later subjected to the rolling operation. The ϕ15 mm billet was rolled down to ϕ2 mm and ϕ1 mm wires. Furthermore, the ϕ1 mm wire was rolled down to a 0.1 mm thickness and a 2.74 mm-wide ribbon. A commercially available oxygenfree billet of ϕ15 mm billet was also rolled for calibrating our test equipment. This was essential to eliminate the errors in measurements. Table 1 describes the list of billets and the concentration of CNTs. Furthermore, in this current study, the electrical conductivity of copper was measured using the four-point resistivity measurement technique. 
The four-point resistivity measurement technique involves passing a fixed current between the two outer probes/pins and measuring the voltage between the two inner probes/pins. The electrical resistivity is then calculated from the measured voltage and the applied current. The Keithley data acquisition system (Tektronix Inc., Beaverton, OR, USA), consisting of a Keithley ultra-sensitive current source series 6200 model 6221 and a Keithley Nanovoltmeter model 2182A, was used for this purpose. A 100 mA current was applied to the wire and the voltage measurements were recorded for each 5 cm length of wire. Equations (1)-(5) were used to compute the mass conductivity of the wire from the voltage measurements. The mass conductivity provides the true electrical conductivity of the sample by eliminating the density factor. From our density analysis, we determined that the density of the hot-pressed billets was lower than that of commercially available copper rods/wire. Table 2 shows the theoretical and measured densities of the billets. Furthermore, Equation (4) mitigates the effect of temperature on the measured resistance by normalizing the readings to 20 °C; the resistance obtained from Equation (4) is the effective resistance of the wire at 20 °C. Here, V = voltage measurement; I = current; R = resistance; T = temperature of the room while recording the voltage measurements; R_T = resistance of the wire at temperature T; ρ_v = resistivity of the wire; A = cross-sectional area of the wire; L = length of the wire segment (5 cm); σ_v = electrical conductivity of the wire; and σ_m = mass electrical conductivity of the wire. The mass conductivity of a metal is obtained by dividing its electrical conductivity by its density. At 100% IACS, the mass conductivity of annealed copper at 20 °C is 6524 S·m²/kg. The mass conductivity is a useful indicator, since this value compensates for the effective porosity. A metal with a mass conductivity above 100% IACS exhibits higher conductivity than annealed copper, while a value below this indicates lower conductivity or a metal with impurities [16].
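Equations (1)-(5) themselves are not reproduced in this excerpt, so the short sketch below illustrates one plausible way to obtain the quantities defined above: the resistance R_T from the four-point V/I reading, normalization to 20 °C, the resistivity ρ_v from the wire geometry, the volume conductivity σ_v, and finally the mass conductivity σ_m expressed as a percentage of the 6524 S·m²/kg IACS value. The temperature coefficient, helper names, and numerical inputs are assumptions for illustration, not the authors' exact equations.

```python
# Hedged sketch of the mass-conductivity calculation described in the text.
# Standard four-point formulas are used here; the paper's Equations (1)-(5)
# are not shown in the source, so treat this as illustrative only.
import math

ALPHA_CU = 0.00393                 # assumed temperature coefficient of copper, 1/°C
IACS_MASS_CONDUCTIVITY = 6524.0    # S·m²/kg, annealed copper at 20 °C (100% IACS)

def mass_conductivity_percent_iacs(voltage_v, current_a, length_m, area_m2,
                                   density_kg_m3, temperature_c):
    """Return the mass conductivity of one wire segment as % IACS."""
    resistance_t = voltage_v / current_a                       # R_T = V / I
    # normalize the reading to 20 °C (the role played by Equation (4))
    resistance_20 = resistance_t / (1.0 + ALPHA_CU * (temperature_c - 20.0))
    resistivity = resistance_20 * area_m2 / length_m            # rho_v = R * A / L
    volume_conductivity = 1.0 / resistivity                     # sigma_v, S/m
    mass_conductivity = volume_conductivity / density_kg_m3     # sigma_m, S·m²/kg
    return 100.0 * mass_conductivity / IACS_MASS_CONDUCTIVITY

if __name__ == "__main__":
    # Example: a 5 cm segment of ϕ2 mm wire carrying the 100 mA test current;
    # the illustrative voltage gives roughly 100% IACS for a copper-like wire.
    area = math.pi * (2.0e-3 / 2) ** 2
    print(mass_conductivity_percent_iacs(voltage_v=2.77e-5, current_a=0.1,
                                         length_m=0.05, area_m2=area,
                                         density_kg_m3=8890.0, temperature_c=22.0))
```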
Figure 1 shows the relationship between the mass electrical conductivity and the size of Billet 1. It was observed that the conductivity of Billet 1 at a 2 mm thickness had a mean of 102.17% IACS and a standard deviation of 0.88. Furthermore, at 1 mm and 0.1 mm thicknesses, the conductivity had means of 103.26% and 103.52% IACS with standard deviations of 1.52 and 1.71, respectively. This graph gives a more accurate representation of the conductivity measurements. The apparently higher electrical conductivity at 0.1 mm should not be interpreted as a genuinely higher conductivity; rather, it reflects errors due to the thickness of the 0.1 mm ribbon. The thickness of this ribbon is not uniform, which is a limitation of the rolling process. Moreover, our focus was on the overall feasibility of the rolling operation for producing UCC wire; therefore, not much effort was directed toward producing uniformly thick ribbons. Figure 2 shows the relationship between the mass electrical conductivity and the size of Billet 2. The measurements were taken on wires/ribbons with thicknesses of 2 mm, 1 mm, and 0.1 mm. The resultant mean conductivity of the wire with decreasing thickness was 95.10%, 95.22%, and 96.76% IACS with standard deviations of 0.69, 1.15, and 1.01, respectively. Although the conductivity values of Billet 2 were lower than those of Billet 1, these values are still within the expected range. Unlike the rest of the billets used as feedstock for the rolling operation, Billet 1 was not hot-pressed. The hot-pressed billets contained higher amounts of porosity, which was reflected in the conductivity measurements. Billet 2 had 0% CNT precursor material embedded; therefore, the conductivity results from this billet were used as the benchmark for comparison with the other billets with embedded CNT precursor material. Figure 3 shows the relationship between the mass electrical conductivity and the size of Billet 3. Billet 3 showed mean conductivities of 95.31%, 94.72%, and 94.17% IACS with standard deviations of 1.11, 3.08, and 1.79 for wire of 2 mm, 1 mm, and 0.1 mm thickness, respectively. Although it looks as if the conductivity decreased with decreasing thickness, it would be presumptive to conclude this, since the difference between the mean values was less than 1.5%. Therefore, we considered the conductivity measurements to be uniform at different thicknesses. It was noticed that the results of Billet 3 were close to those of Billet 2.
This indicates that the CNT precursor material did not contribute to the overall bulk conductivity and/or there was not a significant amount of the CNT precursor material within the wire to contribute to the electrical conductivity of the copper. Figure 4 shows the relationship between the mass electrical conductivity and the size of Billet 4. Billet 4 showed mean conductivities of 101.91%, 93.33%, and 92.97% IACS with standard deviations of 5.70, 4.17, and 1.96 for wire of 2 mm, 1 mm, and 0.1 mm thickness, respectively. The results from Billet 4 show a remarkable increase in the electrical conductivity of the 2 mm wire compared to those of Billet 2 and Billet 3, but upon further rolling, the conductivity dropped to 85% IACS at a 1 mm thickness and 84.68% IACS at a 0.1 mm thickness. This observation shows that the CNT precursor material contributed to the bulk conductivity at a wire thickness of 2 mm; upon further rolling, either the CNTs were destroyed, or the ohmic path between the CNTs increased and thereby resulted in lower conductivity, or the CNTs acted as an impurity in the copper matrix. Figure 5 shows the relationship between the mass electrical conductivity and the size of Billet 5. Billet 5 showed mean conductivities of 88.32%, 91.22%, and 90.19% IACS with standard deviations of 1.92, 4.33, and 2.29 for wire of 2 mm, 1 mm, and 0.1 mm thickness, respectively. The deviation in electrical conductivity from the rest of the billets was clearly observed. The lower conductivity of Billet 5 can be attributed to the improper dispersion of the CNT precursor material in the matrix. When rolling the wire from a 2 mm thickness to a 1 mm thickness, the electrical conductivity improved, which can be explained by the breaking of agglomerates and the dispersion of CNTs; alternatively, judging from the standard deviation, a non-uniform thickness could have introduced an error in determining the thickness.
Figures 6-8 show the graphs that summarize the electrical conductivity results for all billets at different thicknesses. At a 2 mm thickness, the conductivity of Billet 4 is close to that of Billet 1. This is not possible unless the CNTs contribute to the overall bulk conductivity of the wire. In all other cases, the hot-pressed billets failed to exhibit conductivities close to that of Billet 1. It was also noticed that the electrical conductivity decreased as the content of the CNT precursor material increased.
Furthermore, Billet 3, with 1/20% (w/w) of the CNT precursor material, showed no significant change in electrical conductivity relative to Billet 2, indicating that the CNT precursor material was not present in a sufficient quantity to contribute either constructively or destructively to the overall conductivity. The only deviation between Billet 2 and Billet 3 was observed at a 0.1 mm thickness. This could be a result of errors in determining the thickness of the 0.1 mm-thick ribbon. The probability of this error is high, as we noticed that the thickness of the 0.1 mm ribbon changes dramatically at different sections of the ribbon. Billet 3 and Billet 4 exhibited decreasing conductivity, which can be attributed to the increasing content of the CNTs.
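To make the comparison across billets easier to scan, the reported means and standard deviations can simply be tabulated; the sketch below reuses the numbers quoted in the Results paragraphs above (for Billet 4 at 1 mm and 0.1 mm the figure-caption values of 93.33% and 92.97% IACS are used, although the running text also mentions 85% and 84.68% IACS) and flags the billet/thickness combinations that exceed the 100% IACS benchmark. The dictionary layout is an arbitrary choice for illustration.

```python
# Mean mass conductivity (% IACS) and standard deviation quoted in the text,
# indexed by billet and rolled thickness. Values are copied from the Results section.
results = {
    "Billet 1": {"2 mm": (102.17, 0.88), "1 mm": (103.26, 1.52), "0.1 mm": (103.52, 1.71)},
    "Billet 2": {"2 mm": (95.10, 0.69),  "1 mm": (95.22, 1.15),  "0.1 mm": (96.76, 1.01)},
    "Billet 3": {"2 mm": (95.31, 1.11),  "1 mm": (94.72, 3.08),  "0.1 mm": (94.17, 1.79)},
    "Billet 4": {"2 mm": (101.91, 5.70), "1 mm": (93.33, 4.17),  "0.1 mm": (92.97, 1.96)},
    "Billet 5": {"2 mm": (88.32, 1.92),  "1 mm": (91.22, 4.33),  "0.1 mm": (90.19, 2.29)},
}

for billet, by_thickness in results.items():
    for thickness, (mean_iacs, sd) in by_thickness.items():
        flag = "above 100% IACS" if mean_iacs > 100.0 else ""
        print(f"{billet:9s} {thickness:>6s}: {mean_iacs:6.2f} +/- {sd:4.2f} %IACS {flag}")
```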
Summary and Conclusions
It is evident from the results that the conductivity of the CNT-embedded copper changed dramatically at different reduction ratios. Although a remarkable improvement in electrical conductivity was not observed, a trend of low- and high-conductivity zones in the CNT-embedded copper wire was observed. Consistent conductivity measurements were obtained for both the oxygen-free copper and the hot-pressed billets with no CNT precursor material at the various stages of the rolling process. The same was not observed for the CNT-embedded copper billets. Although we did not observe high-conductivity zones in all of the billets, Billet 4 showed both high- and low-conductivity regions. In particular, the conductivity of Billet 4 changed dramatically from an average of 101.91% IACS at a 2 mm thickness to 93.33% and 92.97% IACS at 1 mm and 0.1 mm thicknesses, respectively. Even though this is still lower than that of oxygen-free copper, it certainly adds to the excitement for further exploration of UCC wire. Overall, the current method needs improvements at various stages to achieve UCC wire, and a more extensive study with collaborators from different fields is required. As of now, UCC wire is still in the early stages of scaling up.

Future Works
UCC wire has been gaining attention in recent years, and researchers are investigating feasible methods to achieve it. There are many areas in our study that need improvement and more thorough research. Some of the areas that need to be focused on based on this study are as follows.

The Effect of Different Grain Sizes of Copper Powder (Raw Material). This was not investigated in this study, but in future work it is advisable to compare the results obtained with multiple sizes of copper powder; such a study could assist in lowering the porosity of the sintered CNT-embedded copper billets.

Using Hot Rolling/Hot Extrusion. This is vital for developing UCC wire. Fully cold-worked material tends to exhibit lower electrical conductivity than hot-worked and annealed material. In this study, we used cold rolling because no other options were available during the final stage of this overall program. In future work, it is essential to work with either hot rolling or hot extrusion. Extrusion is preferred due to its ability to produce wire/rods by applying immense pressure, which aids in the mechanical bonding of CNTs with copper and greatly assists in maintaining intimate electrical contact between nanotube ends and the matrix material.

Impact of Grain Size on Electrical Conductivity. This study was conducted with the intention of producing commercial-scale UCC wire by investigating favorable manufacturing methods.
A rigorous microstructural analysis of how the grain size impacts the electrical conductivity needs to be performed.

Determining a Favorable Reduction/Extrusion Ratio. This is possible only if we control the precursor material size. In this study, we were able to observe the effects of the reduction ratio, as the conductivity of the CNT-embedded copper wire changes drastically with the reduction ratio while the size of the agglomerations changes and the ohmic distance between the nanotubes increases.

Amount of CNT Precursor Material Mixed with Copper Powder. Our initial assumption that higher concentrations of CNTs would enable additional streaks of ultra-conductive paths was not viable. We faced difficulty in deagglomerating and dispersing the CNTs at lower concentrations. Furthermore, the billets with a higher concentration of CNT precursor material fractured in our initial work, which forced us to work with lower concentrations. This is a significant concern, since a lower concentration of CNTs might deagglomerate and disperse, but their contribution to the bulk conductivity is difficult to measure and their overall contribution can be negligible. A rigorous study is required to estimate the optimal concentration of CNT precursor material, the size of the CNTs, and the resulting electrical conductivity of the wire.

Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
7,247.8
2021-09-25T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Ischemia Alters the Expression of Connexins in the Aged Human Brain Although the function of astrocytic gap junctions under ischemia is still under debate, increased expression of connexin 43 (Cx43) has been observed in ischemic brain lesions, suggesting that astrocytic gap junctions could provide neuronal protection against ischemic insult. Moreover, different connexin subtypes may play different roles in pathological conditions. We used immunohistochemical analysis to investigate alterations in the expression of connexin subtypes in human stroke brains. Seven samples, sectioned after brain embolic stroke, were used for the analysis. Data, evaluated semiquantitatively by computer-assisted densitometry, were compared between the intact hemisphere and ischemic lesions. The results showed that the coexpression of Cx32 and Cx45 with neuronal markers was significantly increased in ischemic lesions. Cx43 expression was significantly increased in colocalization with astrocytes and relatively increased in colocalization with the neuronal marker in ischemic lesions. Therefore, Cx32, Cx43, and Cx45 may respond differently to ischemic insult in terms of neuroprotection.

Introduction
Astrocytes are a major cell type in the central nervous system and play an important role in regulating brain metabolism [1,2]. Moreover, astrocytes compose the framework network and communicate through gap junctions. A gap junction hemichannel consists of six connexins, and connexin 43 (Cx43) is the main subtype in astrocytic gap junctions [3,4]. Previous studies showed that heterozygous Cx43-null mice exhibited significantly increased infarct volume [5] and enhanced apoptosis following ischemic brain insult [6]. Likewise, mice lacking Cx43 in astrocytes showed a significantly increased infarct volume and an amplified inflammatory response in the penumbra compared to control littermates [7]. These results suggested that astrocytic gap junctions may play a critical role in controlling neuronal apoptosis and the inflammatory response following brain ischemia. Moreover, in ischemic brain lesions of human samples, the expressed level of Cx43 was significantly increased in the penumbra compared to the intact area [8]. Therefore, astrocytic gap junctions may provide neuronal protection against ischemic insult not only in animal models but also in the human brain. However, the effects of astrocytic gap junctions on pathological conditions are still being debated [9,10]. Recently, gap junctions composed of different types of connexins have been reported to have selective permeability for different biological molecules [11]. A few reports have also described alterations of connexin expression under pathological conditions in the human brain [12,13]. In this study, we used immunohistochemical analysis to examine the levels of Cx26, Cx32, Cx43, and Cx45 expression in neurons and astrocytes in ischemic lesions from the viewpoint of human brain pathology.

Samples. We screened postmortem human brain samples with ischemic stroke (n = 53) in our hospital. First, we carefully checked all clinical records and imaging examinations, such as CT, MRI, MRA, and ultrasound echocardiography.

Immunohistochemistry. The samples were fixed in phosphate-buffered paraformaldehyde for one month and embedded in paraffin blocks. We observed whole brain sections with hematoxylin and eosin (H&E) staining under a photomicroscope (Eclipse E800, Nikon, Tokyo, Japan).
Then, smaller brain samples (approximately 3 cm × 2 cm) were sectioned from the intact and ischemic cortex, including surrounding areas, as shown in the representative picture (Figure 1). These brain sections (5 μm in thickness) were mounted on glass slides and deparaffinized with xylene and graded ethanol solutions. Sections were then immersed in methanol with 0.3% H2O2 for 15 minutes to inactivate internal peroxidase. After the slides were treated with 0.1% trypsin for 5 minutes, they were irradiated with microwaves in 0.01 M citrate buffer (pH 6.0) for 5 minutes at 500 W, followed by 20 minutes of cooling for antigen retrieval. The samples were then blocked with Protein Block Serum-Free (X0909, Dako, Glostrup, Denmark) for 30 minutes at room temperature. The sections were incubated with glial fibrillary acidic protein (GFAP: monoclonal, Dako), microtubule-associated protein 2 (MAP2: monoclonal, Chemicon, Temecula, CA), connexin 26 (Cx26: polyclonal, Chemicon), Cx32 (polyclonal, ProteinTech Group Inc., Chicago, IL), Cx43 (polyclonal, Chemicon), and Cx45 (polyclonal, Alpha Diagnostic Intl., San Antonio, TX) antibodies (diluted 1:500, 1:1000, 1:500, 1:200, 1:500, and 1:400, respectively) in 0.3% Tween 20 in PBS overnight at 4 °C, and then reacted with appropriate secondary antibodies (Alexa Fluor, Molecular Probes Inc., Eugene, OR) in 0.3% Tween 20 in PBS for 30 minutes at room temperature. Among the primary antibodies, GFAP was used for the detection of astrocytes, and MAP2 was used as the neuronal marker. The sections were observed under a fluorescence microscope (Nikon Eclipse E600) and images were captured with an attached digital camera (DXM1200C, Nikon).

Morphometry. Because we were not able to evaluate the exact amount of protein in the tissue used in this study, we analyzed the amount of protein expression semiquantitatively. That is, the area of immunoreactive plaques was measured in four randomly selected regions (220 μm × 165 μm squares) and comparisons were made between intact and ischemic peripheral areas. The analysis was performed with the aid of computer software (Scion Image Beta 4.0.2, Scion Image Co., Frederick, MD). Pixel counts were used for the calculation. All measurements were repeated on two different slides for each patient. In the pictures of double immunohistochemical staining, yellow plaques represent the coexpression of connexin proteins and neuronal or astrocytic markers. These yellow plaques were extracted with the assistance of computer software (Adobe Photoshop CS2, Adobe Systems Inc., San Jose, CA) to assess the amount of colocalized connexins. All acquired images were standardized by subtracting the background density; that is, particles with sizes smaller than 50 pixels or larger than 500 pixels were excluded from the images, as they were staining debris, as confirmed by negative controls.

Statistical Analysis. All data are presented as mean ± SD values. Mean values in each lesion were compared by ANOVA with a post hoc Bonferroni/Dunn test using StatView J 5.0 software (SAS Institute Inc., Cary, NC). Values of P < .05 were considered significant.
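As a rough illustration of the semiquantitative morphometry described above, the sketch below thresholds the red (marker) and green (connexin) channels of a field, treats pixels positive in both channels as yellow colocalization plaques, discards particles outside the 50-500 pixel window, and sums the remaining plaque area. The threshold values, channel layout, and library choices are assumptions; the original analysis was performed with Scion Image and Photoshop rather than this code.

```python
# Hedged sketch of the colocalized plaque-area measurement, not the authors' pipeline.
import numpy as np
from scipy import ndimage

def colocalized_plaque_area(rgb_image, red_thresh=0.5, green_thresh=0.5,
                            min_size=50, max_size=500):
    """rgb_image: float array of shape (H, W, 3) scaled to [0, 1]."""
    red, green = rgb_image[..., 0], rgb_image[..., 1]
    yellow_mask = (red > red_thresh) & (green > green_thresh)   # coexpression pixels

    labels, n_particles = ndimage.label(yellow_mask)            # connected plaques
    sizes = np.bincount(labels.ravel())                         # pixel count per label
    sizes[0] = 0                                                 # ignore background

    keep = (sizes >= min_size) & (sizes <= max_size)             # size filter as in the text
    return int(sizes[keep].sum())                                # total plaque area in pixels

if __name__ == "__main__":
    # Random stand-in for one 220 x 165 µm field, just to show the call.
    rng = np.random.default_rng(0)
    fake_field = rng.random((440, 330, 3))
    print(colocalized_plaque_area(fake_field))
```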
Results and Discussion
We observed the expression of different types of connexins in neurons and astrocytes to assess whether connexins play different roles under ischemic conditions. In this study, the expression of Cx26, Cx32, Cx43, and Cx45 was investigated because of previous reports in which critical roles for these connexins were suggested. Cx26 has been reported to form a major component of astrocytic gap junctions [14,15], although its presence and function in the brain are still under debate [16,17]. Hemichannels formed by Cx26 and Cx32 have been reported to pass cGMP more effectively than Cx32 homomeric hemichannels, suggesting that Cx26 has a specific effect on cGMP transport [18]. Cx32 is expressed not only in gap junctions of Schwann cells [19] but also in neurons and oligodendrocytes in the central nervous system (CNS) [20,21]. Cx32 was reported to participate in regulating the permeability of adenosine, a critical component of ATP [22]. Moreover, Cx32 knockout mice presented amplified neuronal loss following brain ischemia compared to wild-type mice [23]. Cx45 is reported to be expressed in neuronal and oligodendrocytic gap junctions [24,25]. Electrical synapses composed of gap junctions, in which the main connexin subtype is Cx45 [26], could disrupt chemical neurotransmission during development and regeneration [27]. However, it has also been reported that Ca²⁺ influx has a critical impact on spreading cell damage from the core lesion to the penumbral area after ischemia [28,29]. Therefore, the role of electrical synapses composed of gap junctions against ischemic insult is still controversial. Cx43 is a major component of astrocytic gap junctions [3,4]. We have reported that the reduction of Cx43 expression extended neuronal apoptosis and amplified gliosis following ischemia [7,30]. Moreover, gap junctions composed of Cx43 are reported to be involved in the distribution of ATP between attached cells [22]. The immunohistochemical staining revealed that the gross expression of Cx32 (Figures 2(a) and 2(b)) was relatively increased in the ischemic area compared to the intact area (Figure 2(g): P = .051). As seen in Figures 2(c) and 2(d), the yellow plaques representing Cx32 colocalized with MAP2 were visualized along neuronal axons and were more abundant in the ischemic lesion than in the intact area. According to calculations of the amount of the yellow plaques, Cx32 expression in MAP2-positive areas was significantly increased in the ischemic lesion compared to the intact area (Figure 2(g), P = .009). On the contrary, the expression of Cx32 on GFAP-positive cells was not changed in the ischemic lesion (Figures 2(e), 2(f), and 2(g)). The expression of Cx45 (Figures 3(a) and 3(b)) was significantly increased in ischemic lesions compared to the intact area (Figure 3(g), P = .0003). Cx45 expression colocalized with the MAP2-positive area was observed in neuronal bodies and axons (Figures 3(c) and 3(d)) and was significantly increased in the ischemic lesion compared to the intact area (Figure 3(g), P = .0047). As shown in Figures 3(e) and 3(f), the expression of Cx45 in GFAP-positive cells was not changed between ischemic and intact regions (Figure 3(g)). As shown in Figures 4(a) and 4(b), the gross expression of Cx43 was significantly increased in ischemic lesions compared to intact tissue (Figure 4(g), P = .0154). The coexpression of Cx43 and MAP2 (Figures 4(c) and 4(d)) was visualized as punctate staining and was relatively increased in the ischemic lesion compared to the intact area (Figure 4(g), P = .0599). The coexpression of Cx43 and GFAP (Figures 4(e) and 4(f)) was significantly increased in ischemic lesions compared to the intact area (Figure 4(g), P < .001). These results support previous reports that showed increased Cx43 expression in the ischemic lesion [8,31,32].
Meanwhile, the gross expression of Cx26 was not changed between the intact area and the ischemic lesion. The amount of Cx26 expression colocalized with MAP2- or GFAP-positive areas revealed no significant difference between intact and ischemic regions (Figure 5). Since the role of Cx26 in astrocytic gap junctions is still controversial, our findings may support previous reports [17]. In the ischemic lesion, neurons and astrocytes are perishing, and their cell bodies and processes will shrink. On the other hand, some astrocytes and microglia react by expanding their bodies and processes as an inflammatory response. As mentioned above, both Cx32 and Cx45 exist in neurons and/or oligodendrocytes, and Cx43 exists in astrocytes. In this study, a significant increase of Cx32 and Cx45 expression was mostly observed in neuronal axons in ischemic lesions. In the CNS, the neuronal axon is covered with a myelin sheath of oligodendrocytes, and therefore, the expression of Cx32 and Cx45 could be increased in either, or both, neuronal axons and oligodendrocytes after ischemic insult. Cx43 expression was significantly increased in astrocytes within ischemic lesions and was slightly increased in coexpression with MAP2 after ischemic insult, suggesting that Cx43 may participate in the composition of gap junctions between neurons and astrocytes activated by ischemic stress. Since Cx32 relates to adenosine transport and Cx43 participates in distributing ATP, both Cx32 and Cx43 may play an important role in maintaining energy support after ischemic insult. Cx45 has been reported to play a role in spreading chemical stress against neurons from the ischemic core to the peripheral area. Further assessment would be required to reveal the effect of increased Cx45 expression following ischemia. We could not assess the expression of Cx36, one of the major components of neuronal gap junctions, because our samples, fixed in paraformaldehyde and embedded in paraffin blocks, could not be stained with any of the anti-Cx36 antibodies we had available. Cx30 is also a major connexin subtype in astrocytic gap junctions. However, the anti-Cx30 antibodies used in this study did not show reactivity against human brain samples. Since evidence has been accumulating that Cx30 may compensate for the lack of Cx43 [6,17], the alteration of Cx30 expression after ischemic stress should be explored. Moreover, we did not directly observe gap junctions between neurons and astrocytes; instead, double immunohistochemical staining was used for investigating the expression of specific connexins in neurons and astrocytes.

Figure 2: Cx32 immunofluorescent staining indicates amplified expression in the ischemic lesion (b) compared to the intact area (a). Inset pictures of (c) and (d) represent double immunofluorescent staining with Cx32 (green) and MAP2 (red). Representative pictures in the second row show the extracted area, as a yellow region, of the double immunofluorescent staining from the intact (c) and ischemic area (d). The yellow region, which stands for the Cx32-immunopositive area colocalized with MAP2, shows increased expression in the ischemic periphery. Inset pictures of (e) and (f) represent double immunofluorescent staining with Cx32 (green) and GFAP (red). Representative pictures in the third row show the extracted area, as a yellow region, of the double immunofluorescent staining from the intact (e) and ischemic area (f).
The yellow region, which stands for the Cx32-immunopositive area colocalized with GFAP, shows a similar level of expression between the intact and ischemic areas. Black bars in graph (g) indicate the average counts of protein expression. Cx32 expression was relatively increased in the ischemic area compared to the intact area (†: P = .0510). The coexpression of Cx32 and MAP2 was significantly increased in the ischemic lesion as compared to the intact area (**: P < .01). However, there was no statistical difference in Cx32 and GFAP coexpression between intact and ischemic areas. Scale bar indicates 50 μm.

Figure 4: Cx43 immunofluorescent staining indicates amplified expression in the ischemic lesion (b) compared to the intact area (a). Inset pictures of (c) and (d) represent double immunofluorescent staining with Cx43 (green) and MAP2 (red). Representative pictures in the second row show the extracted area, as a yellow region, of the double immunofluorescent staining from the intact (c) and ischemic area (d). The yellow region, which stands for the Cx43-immunopositive area colocalized with MAP2, shows amplified expression in the ischemic periphery. Inset pictures of (e) and (f) represent double immunofluorescent staining with Cx43 (green) and GFAP (red). Representative pictures in the third row show the extracted area, as a yellow region, of the double immunofluorescent staining from the intact (e) and ischemic area (f). The yellow region, which stands for the Cx43-immunopositive area colocalized with GFAP, also shows an increased expression level in the ischemic lesion as compared to the intact area. Black bars in graph (g) indicate the average counts of protein expression. Cx43 expression alone and the amount of coexpression of Cx43 and GFAP were significantly increased in the ischemic lesion compared to the intact area. The coexpression of Cx43 and MAP2 was increased in the ischemic lesion compared to the intact area (†: P = .0599). Scale bar indicates 50 μm. **: P < .01, *: P < .05.

Figure 5: Cx26 immunofluorescent staining shows similar findings between the intact area (a) and the ischemic lesion (b). Inset pictures represent double immunofluorescent staining with Cx26 (green) and MAP2 (red) in (c) and (d), and with Cx26 (green) and GFAP (red) in (e) and (f). Representative pictures in the second row show the extracted area, as a yellow region, of the double immunofluorescent staining from the intact (c) and ischemic area (d). The yellow region, which stands for the Cx26-immunopositive area colocalized with MAP2, shows the same amount of expression. The yellow region, which stands for the Cx26-immunopositive area colocalized with GFAP, shows no difference in expression between the intact and ischemic areas. Black bars in graph (g) indicate the average counts of protein expression. Cx26 expression was not changed in the ischemic lesion as compared to the intact area. No statistical alteration was observed in the amount of coexpression of Cx26 and MAP2 nor of Cx26 and GFAP between intact and ischemic regions. Scale bar indicates 50 μm.

In the future, gap junctions between different cell types should be directly observed through an electron microscope. Gap junctions of oligodendrocytes or microglia need further investigation in order to confirm the role of gap junctional intercellular communication following ischemic insult. In conclusion, this is the first report that has revealed amplified expression of Cx32 and Cx45 in MAP2-positive areas within the ischemic lesion from the viewpoint of human brain pathology.
Furthermore, Cx32 and Cx43 may have a neuroprotective role under ischemic conditions in the human brain.
3,777.2
2009-09-23T00:00:00.000
[ "Biology", "Psychology" ]
Classification of Imbalanced Datasets using One-Class SVM, k-Nearest Neighbors and CART Algorithm In this paper a new algorithm, the OKC classifier, is proposed that is a hybrid of the One-Class SVM, k-Nearest Neighbours and CART algorithms. The performance of most classification algorithms is significantly influenced by certain characteristics of the datasets on which they are modeled, such as imbalance in the class distribution, class overlap, lack of density, etc. The proposed algorithm can perform the classification task on imbalanced datasets without re-sampling. This algorithm is compared against a few well-known classification algorithms on datasets having varying degrees of class imbalance and class overlap. The experimental results demonstrate that the proposed algorithm performs better than a number of standard classification algorithms.

I. INTRODUCTION
Classification is the task of categorizing instances into one of a given set of classes. This task is done by a classifier that is trained on a dataset of training cases. Most classification algorithms expect balanced classes, i.e., approximately equal numbers of cases from all classes in the training dataset. But in many real-world domains, like fraud detection, medical diagnosis, etc., the number of examples that belong to one class may severely outnumber the instances that belong to another class or classes. Such datasets, in which significant differences in the proportion of cases belonging to the various classes are possible, are called imbalanced datasets. The imbalance in the class distribution can lead to high misclassification rates for minority class cases. One of the main reasons behind this is that most classification algorithms are designed with the objective of maximizing accuracy. As the majority class instances are much more numerous than the minority class ones, the classifier would achieve high accuracy even if it classified all instances as the majority class and misclassified all the minority class instances. This is called the class imbalance problem. Besides imbalance, other intrinsic data characteristics such as overlapping between classes, the presence of small disjuncts, and a lack of density of the minority class in the training dataset can also significantly affect the performance of the classifier. The issue of class imbalance becomes more serious in the presence of one or more of such intrinsic data characteristics. A few solutions have been proposed in the past to manage these issues independently. In this paper, we propose a new algorithm, namely, the OKC classifier (a hybrid of One-Class SVM, k-Nearest Neighbor and CART), to overcome this problem.

A. Imbalanced Datasets
In many real-life applications, the situation of imbalanced datasets frequently appears. A dataset in which one class severely outnumbers the other can be considered an imbalanced dataset. The class with relatively fewer cases in a dataset is called the 'minority class' and the other class is called the 'majority class'. The minority class usually represents the most essential concept to be learned, and it is hard to identify, since it may be associated with rare but significant cases, or because the data acquisition of these cases is costly [1][2]. The imbalance of the data distribution between different classes is known as between-class imbalance [3]. Such imbalance could be a consequence of the intrinsic nature of the data.
For example, in the fraud detection domain, it is much harder to obtain data related to fraudulent transactions than data belonging to legitimate transactions. Within-class imbalance is said to occur when a class is composed of several sub-groups and the number of cases belonging to each sub-group differs significantly from that of the other sub-groups within the same class [4].

B. Class Overlapping
The class overlap problem appears when a region in the data space contains training data from more than one class. In such a case, there is no clear separation between the classes, causing difficulty in the classification process. The performance of a classifier is greatly affected when the issue of class overlap is present along with imbalance in the dataset. It has been shown that for datasets that have clean clusters, i.e., no overlapping and linearly separable classes, classifier performance is not influenced by any degree of imbalance [1,5]. In other works, it has been shown that if the data in the overlapping region are imbalanced, then the imbalance ratio affects the performance more than the size of the overlap [1].

C. Lack of Density
The issue of lack of density arises when there is almost no information available to represent the minority class concept. If the cases of the minority class are few, it becomes difficult to distinguish between the minority class and noise. The majority of standard classifiers aim to obtain a good generalization capability. In the case of a lack of density of the minority class, the classification rules that predict the minority class are highly specialized, whereas, due to the large number of majority class cases, the classification rules that predict the majority class appear more general to the classifier, as their coverage is very high compared to the minority class ones [6]. So, in this case, the rules that predict the minority class are discarded by the classifier, leading to high misclassification of the minority class.

II. BACKGROUND
Verma et al. [7] used a median filter, a Gaussian filter and unsharp masking for image enhancement. Entropy-based segmentation was used to find the region of interest, and then k-NN and SVM classification techniques were applied for the analysis of kidney stone images. The accuracy of k-NN was found to be 89% and that of SVM 84%. Li and Wang [8] used the SIFT (scale-invariant feature transform) algorithm to extract features, and the extracted features were clustered by the k-means clustering algorithm. After clustering, a bag of words (BoW) for each image was constructed and a multi-class classifier was trained using SVM (Support Vector Machine) to classify the images. The authors revealed that SVM gave better results on small training sets; the accuracy of image classification was about 90% with this method. Guo et al. [9] proposed an SVM-based sequential classifier training (SCT-SVM) approach for remote sensing image classification. This technique helps in reducing the number of training samples required for classifier training. Different experiments were conducted with Sentinel-2A multitemporal data, and accuracies of 76.18% to 94.02% were achieved with the proposed technique. McDermott et al. [10] investigated Support Vector Machine (SVM) classifiers for detecting brain hemorrhages using Electrical Impedance Tomography (EIT) measurement frames. A 2-layer model of the head with a series of hemorrhages was designed by means of numerical models and physical phantoms.
The authors reported that the phantom models were more challenging, with a maximum specificity of 75% when used with the linear SVM. The detection rate increased when a radial basis function (RBF) SVM classifier and a neural network classifier were applied. Badgujar and Deore [11] proposed a hybrid algorithm using Migrating Bird Optimization and Support Vector Machine (MB-SVM) classifiers. Gaussian filters were used to eradicate the noise from the fundus retinal image. Experimental validation on the publicly available STARE dataset demonstrated the improved performance of the proposed method over existing methods. Ma et al. [12] presented a weighted KNN in which the KNN-based model acquires a test image's k nearest neighbors and obtains the prediction for the image according to the contributions of its neighbors. Hu et al. [13] combined color, texture and shape features into a multi-type feature. These features were integrated with a k-nearest neighbor classifier. Experiments were conducted on 4500 aerial images and a recognition rate of 99% was achieved using this multi-type feature. Gul et al. [14] proposed an ensemble of subsets of k-NN classifiers (ESkNN) for classification. Experiments were conducted on benchmark datasets and the results were compared with the usual k-NN, bagged k-NN, random k-NN, the multiple feature subset method, random forest and support vector machines. The proposed ensemble gives better classification performance than the usual k-NN and its ensembles, and performs comparably to random forest and support vector machines. Guo et al. [15] proposed a guided filter-based method and used two fusion methods for spectral and spatial features. Hyperspectral images were classified using SVM. The proposed method was fast in execution and easy to implement.

A. Proposed OKC Classifier
The proposed algorithm is a hybrid of the one-class SVM, k-Nearest Neighbour and CART (Classification and Regression Tree) algorithms. In this algorithm, the Hellinger distance and the Gini impurity are used as splitting criteria for choosing the best feature and the best value to split on, respectively. The Hellinger distance has been proved to be skew insensitive [16], i.e., it is not affected by class imbalance. On each leaf node of this tree where the examples have different classes, feature selection is done to choose the two features that best discriminate among the classes; then k-Nearest Neighbours is trained on all examples and a one-class SVM is trained on the minority class samples. When a new prediction is to be made, the instance is first routed to a leaf node and then categorized as an inlier or an outlier by the one-class SVM. If it is predicted as an inlier, it is assigned the minority class; otherwise, after feature selection, it is assigned the class predicted by the k-Nearest Neighbor algorithm with k = 1, i.e., the class of its nearest neighbour. This algorithm is designed to handle the class imbalance problem even if other intrinsic data characteristics, like class overlap and lack of density, are also present. As the feature selection is done at each leaf, only those features that play a significant role in classification are selected. This means that overlapping features will be discarded, and thus the class overlap problem can be handled to a great extent. The feature selection is done using the Hellinger distance [17]. The one-class SVM is trained on the minority class samples at each leaf with mixed classes, so it is ensured that all minority class examples are learned by the classifier.
B. One-Class SVM
Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyper-plane. The conventional two-class classifier finds a hyper-plane that isolates one class from another. The one-class SVM finds the hyper-plane that separates all of the in-class points from the origin; it is essentially a two-class SVM where the origin is the only member of the second class. So, basically, it separates all the data points from the origin and maximizes the distance from this hyper-plane to the origin. This results in a binary function, which captures regions in the input space and returns +1 in the region capturing the training data points and -1 elsewhere [18].

C. K-Nearest Neighbors
In the K-Nearest Neighbour algorithm, an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors. If k = 1, the object is simply assigned the class of that single nearest neighbour. The algorithm is typically based on the Euclidean distance between a test sample and the specified training samples. For an n-dimensional space, the Euclidean distance between two points x and y is calculated as

$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$

It has been observed that the k-NN algorithm suffers from the curse of dimensionality [19], i.e., it cannot perform well when the number of features of the dataset is large. To deal with this issue, we perform feature selection, before applying k-NN, to select the features that best discriminate among the classes. This feature selection is done separately at each leaf with mixed class samples, so that the problem of class overlap is minimized, as different features may be dominant in different parts of the data space.

D. CART
Classification and Regression Tree (CART) is a binary recursive partitioning algorithm that is capable of handling nominal and continuous attributes both as targets and as predictors [20]. The classification tree is built by recursively splitting parent nodes into two child nodes that have maximum homogeneity. This homogeneity is determined by an impurity function. CART searches through all values of the attributes to find the best value to split on. There are several impurity functions, such as the Gini index, the Twoing splitting rule, etc. The process of splitting is stopped when a node becomes pure. Otherwise, it is repeated until a split would result in a child node with fewer observations than a predefined number, or when the change in the impurity function is less than the predefined minimum change. Classification of a new observation is made by assigning the dominating class of the leaf node to which the new observation belongs. In the case of imbalanced datasets, when there is the problem of absolute rarity or lack of density of the minority class, the dominating class at the leaf nodes is usually the majority class. This results in misclassification of the observations that belong to the minority class. To sort out this problem, we use the one-class SVM and k-NN at the leaf nodes with mixed classes instead of voting. The one-class SVM is trained on the minority class, to cover all minority class examples, so that the problem of lack of density of the minority class can be handled to at least some extent. Then, after selecting two features using the Hellinger distance, k-NN is trained on all samples of the leaf.
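As a concrete, hedged illustration of the leaf-level behaviour described in Sections B-D, the sketch below trains a one-class SVM on the minority-class samples of a hypothetical mixed leaf and a 1-nearest-neighbour classifier on all of its samples (two features are assumed to have been selected already), then routes a query as described in Section A: an inlier of the one-class SVM is assigned the minority class, and an outlier falls back to the class of its nearest neighbour. The kernel settings and the toy data are assumptions, not the authors' configuration.

```python
# Leaf-level decision rule sketched from Sections B-D (not the authors' code).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Toy "leaf" with a majority class (0) and a small minority class (1)
X_major = rng.normal(loc=0.0, scale=1.0, size=(95, 2))
X_minor = rng.normal(loc=3.0, scale=0.5, size=(5, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 95 + [1] * 5)

# One-class SVM fitted on the minority samples only
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_minor)

# 1-NN fitted on all samples of the leaf (two features already selected here)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

def classify(sample, minority_label=1):
    """Inlier of the minority one-class SVM -> minority class; otherwise 1-NN vote."""
    if ocsvm.predict(sample.reshape(1, -1))[0] == 1:   # +1 means inlier
        return minority_label
    return int(knn.predict(sample.reshape(1, -1))[0])

print(classify(np.array([3.1, 2.9])))   # query near the minority cluster
print(classify(np.array([-0.5, 0.2])))  # query deep in the majority region
```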
E. Splitting Criteria for the OKC Classifier
In the proposed algorithm, the splitting criterion used for the choice of the best feature is the Hellinger distance, and the criterion used for the selection of the best value of the chosen feature is the Gini impurity. The Hellinger distance is a good criterion to be used with imbalanced datasets, as it is not affected by the class distribution skew [16]. Assuming a binary class problem (class + and class −), let $X_+$ denote the class + samples and $X_-$ the class − samples, with $|X_{+j}|$ the number of positives in bin $j$ and $|X_{-j}|$ the number of negatives in bin $j$. For a feature that has $p$ bins, the Hellinger distance is

$d_H(X_+, X_-) = \sqrt{\sum_{j=1}^{p} \left( \sqrt{\frac{|X_{+j}|}{|X_+|}} - \sqrt{\frac{|X_{-j}|}{|X_-|}} \right)^{2}}$

The Hellinger distance of every feature is calculated before each split, and the feature with the maximum Hellinger distance is chosen for the split. After that, the choice of the best value of the selected feature to split on is made using the Gini impurity. The Gini impurity is the expected error rate if one of the results from a set is randomly applied to one of the items in the set [20]. It can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. To compute the Gini impurity for a set of items, suppose $i \in \{1, 2, \ldots, m\}$ and let $f_i$ be the fraction of the items labeled with the value $i$ in the set; the Gini impurity is then

$I_G(f) = \sum_{i=1}^{m} f_i (1 - f_i) = 1 - \sum_{i=1}^{m} f_i^{2}$

The value with the lowest Gini impurity is selected for the split.

F. Stopping Conditions for the OKC Classifier
The splitting of nodes is done recursively until some stopping condition is met. In the proposed algorithm, there are three stopping conditions:
1) When the node becomes pure, i.e., all samples on that node belong to a single class.
2) If the change in the impurity function, i.e., the Gini index, after splitting would be less than the predefined minimum value.
3) If the split would result in a child node with fewer samples than the predefined minimum number of samples.

G. Algorithm
Input: A set S of labeled instances, threshold values for the minimum number of samples at leaves and the minimum change in the utility function, i.e., the Gini impurity.
Output: A binary tree with class labels and/or a one-class SVM, a list of selected features and k-NN classifiers at the leaves.
Step 1: If all samples at the current node have the same label, assign that label to the current node and return.
Step 2: For each attribute, evaluate the Hellinger distance and choose the attribute A with the maximum value of the Hellinger distance.
Step 3: For each distinct value of A, evaluate the Gini impurity and choose the value V with the lowest Gini impurity.
Step 4: Evaluate the difference between the utility of the current node and the utility that would result after a split on value V of attribute A.
Step 5: If the difference in utility is less than the threshold value, or if the split would result in nodes with fewer samples than the threshold value, fit a one-class SVM on the minority class samples and calculate the Hellinger distance on all attributes to choose the two attributes with the highest and second-highest values of the Hellinger distance. On the chosen attributes, fit a k-NN classifier and return.
Step 6: Partition S with value V and attribute A. For each child node, call the algorithm recursively.
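The two splitting criteria can be written out compactly; the functions below follow the binned, binary-class formulation given above (per-bin counts of positives and negatives for the Hellinger distance, label fractions for the Gini impurity). They are a sketch of the criteria only, not an extract from the OKC implementation.

```python
import numpy as np

def hellinger_distance(pos_counts, neg_counts):
    """Hellinger distance between the class-conditional bin distributions of one feature.

    pos_counts[j] / neg_counts[j] are the numbers of positive / negative samples
    falling into bin j of the feature (p bins in total).
    """
    pos = np.asarray(pos_counts, dtype=float)
    neg = np.asarray(neg_counts, dtype=float)
    p_frac = pos / pos.sum()          # fraction of all positives in each bin
    n_frac = neg / neg.sum()          # fraction of all negatives in each bin
    return np.sqrt(np.sum((np.sqrt(p_frac) - np.sqrt(n_frac)) ** 2))

def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum_i f_i^2."""
    _, counts = np.unique(labels, return_counts=True)
    fractions = counts / counts.sum()
    return 1.0 - np.sum(fractions ** 2)

# Tiny example: a feature whose bins separate the classes well scores a high
# Hellinger distance (the maximum possible value is sqrt(2)), and a pure node
# has zero Gini impurity.
print(hellinger_distance([9, 1, 0], [0, 1, 9]))
print(gini_impurity([0, 0, 0, 0]))   # 0.0
print(gini_impurity([0, 0, 1, 1]))   # 0.5
```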
A. Experimental Results
In order to evaluate the performance of the proposed algorithm, we considered five different public datasets, as described in Section III. These datasets are normalized and taken from the UCI repository [21]. The results of the proposed algorithm are compared with the standard machine learning algorithms decision tree, neural network, SVM, Naïve Bayes, k-Nearest Neighbors, Naïve Bayes tree, and CART. The proposed algorithm is also compared against random over-sampling, random under-sampling, hybrid over-under sampling, and meta-cost techniques applied to all the standard algorithms discussed in this section. In meta-cost, the cost of misclassifying the minority class is set to double the cost of misclassifying the majority class. The results obtained from the experiments without sampling, after random under-sampling, and after random over-sampling are reported in Tables II to IV, respectively. Experimental results based on a hybrid of random under-sampling and random over-sampling are presented in Table V. In Table VI, we present the experimental results achieved after setting the meta-cost for misclassification of the minority class to double that of the majority class. We observe that the proposed classification algorithm, OKC, performs better than the existing algorithms.
IV. CONCLUSION
In this paper, a new classification algorithm based on a hybrid combination of the one-class SVM, k-NN, and CART algorithms has been proposed. The algorithm is designed so that it performs well in the classification of imbalanced datasets that are not linearly separable, without any need for resampling. It can also deal with class overlap and with the lack of density of the minority class in imbalanced datasets. Our experiments have shown that the proposed algorithm can outperform a number of standard classification algorithms. However, this work is focused only on binary classification tasks. The task of multiclass classification in the presence of class overlap and lack of density of the minority class in imbalanced datasets is left as future work.
Multisource Smart Computer-Aided System for Mining COVID-19 Infection Data
In this paper, we approach the problem of detecting and diagnosing COVID-19 infections using multisource scan images, including CT and X-ray scans, to assist the healthcare system during the COVID-19 pandemic. A computer-aided diagnosis (CAD) system is proposed that analyzes CT or X-ray scans to diagnose the extent of damage in the respiratory system for each infected case. The CAD was built and optimized via hyper-parameters for both shallow learning (e.g., SVM) and deep learning. For the deep learning, mini-batch stochastic gradient descent was used to overcome fitting problems during transfer learning. The optimal parameter values were found using the naïve Bayes technique. Our contributions are (i) a comparison among the detection rates of pre-trained CNN models, (ii) a suggested hybrid of deep learning with shallow machine learning, (iii) an extensive analysis of the results of COVID-19 transition and informative conclusions drawn by developing various transfer techniques, and (iv) a comparison of the accuracy of previous models with the systems of the present study. The effectiveness of the proposed CAD is demonstrated using three datasets, either with a deep learning model as a fully end-to-end solution or with a hybrid deep learning model. Six experiments were designed to illustrate the superior performance of our suggested CAD when compared with other similar approaches. Our system achieves 99.94%, 99.6%, 100%, 97.41%, 99.23%, and 98.94% accuracy for binary and three-class labels on the CT and two CXR datasets.
Introduction to COVID-19 and Diagnosis
The widespread COVID-19 pandemic constitutes a severe threat to global health. Therefore, most new research has used tools and techniques for tracking COVID-19 and discovering infection areas to minimize the risk of its spread. Because of the massive quantity of data available every day on COVID-19 infection, spread, detection, deaths, etc., there is a need for big data analytics, storage, and security in NoSQL database management systems [1,2]. Machine learning and AI approaches can evaluate large quantities of COVID-19 data to create new models and techniques for diagnosing COVID-19. Big data analysis techniques are crucial to analyze more data in less time, as time is a critical factor in treating COVID-19 infection cases. Furthermore, AI techniques enable a global visualization of the analyzed big data of COVID-19. The visualization uses AI to present an overview of global health and confirmed cases of COVID-19. In addition, the presented images of the lungs can indicate the presence of COVID-19. Therefore, tracking COVID-19 to enhance community health requires comprehensive data and intelligent computational instruments. Through a variety of approaches, numerous researchers have employed big data and AI tools to track COVID-19, as shown in Figure 1. COVID-19 is an infectious disease; coronaviruses are a large family of viruses that can affect both humans and animals and cause respiratory difficulties [1]. Historically, 2020 was considered a volatile year for humans worldwide compared to previous years because of COVID-19, as it is a massive threat to global health. As of March 2021, there had been more than 128 million confirmed illnesses and approximately 3 million deaths worldwide [2]. The number of infected subjects is increasing, with more than 150 countries reportedly having at least one case [3].
Image scanning is helpful for diagnosing COVID-19 in infected subjects. Patients that have been exposed and have severe symptoms of the virus may not be identified by the outcome of RT-PCR tests [4][5][6], which can still be non-deterministic. Image scanning includes X-ray (CXR) and computed tomography (CT) images. CT scans have proven to be one of the most accurate methods of diagnosis for COVID-19 [7]. However, there are several significant drawbacks [8], such as the high cost and not being conducive to bedside testing [9]. Consequently, it is not usually used in COVID-19 diagnosis, and it is also not necessary for observing the progression of specific cases, especially in seriously ill patients [10]. On the contrary, the X-ray technique is a less sensitive method than CT for COVID-19 detection, with a reported baseline sensitivity of 69 percent [11]. The X-ray is also a cheaper, faster option and can be used in many healthcare centers. Positive X-ray results reduce the need for CT screening if there is a strong clinical suspicion of COVID-19 infection [11]. However, it presents limitations for some patients, including pregnant women, since it can affect the fetus [12]. When analyzing X-ray images to diagnose COVID-19, radiologists examine multiple patchy, segmental, or sub-segmental shadows of ground-glass density in both lungs [13]. This can be automated to assist experts in making a decision [14][15][16]. Therefore, big data and AI technology play an essential role in the battle against COVID-19. Both tools might help doctors to diagnose COVID-19 cases more quickly and accurately. Accordingly, computer-based models for predicting, forecasting, analyzing, and distributing SARS-CoV-2 drugs have been designed and developed, allowing machine learning, computer vision, and robotic technology to be applied. In addition, AI and big data tools include visualization to illustrate information that supports regional transmission and risk allocation. Different studies [17,18] were carried out based on deep learning technologies to diagnose and classify various diseases, such as viral pneumonia and organ tumors. Today, deep learning technologies are used widely in the healthcare domain. This study makes the following contributions: • A deep learning sample-efficient algorithm for the diagnosis of COVID-19 based on CXR and CT scans.
This paper discusses the most recent research in Section 2. Section 3 presents an overview of the methods and techniques used. The proposed model is presented in Section 4. Section 5 gives a brief description of the dataset used and explains the computer system configuration, parameter settings, and performance metrics. Section 6 presents the experiment and discussion. Finally, Section 7 concludes the paper with an outline of future work.
Background on Machine Learning and Deep Learning
Deep learning (DL) is a subset of the machine learning (ML) branch and the third generation of artificial neural networks. The principal objective of DL is the simulation of high-level data abstractions [19][20][21]. DL architectures use numerous layers to progressively extract higher-level features from the raw data, producing several layers of neurons organized layer by layer. CNNs are mainly utilized for images. CNNs are deep learning algorithms suggested by Badrinarayana [24]. A CNN can distinguish different objects in an image by learning appropriate weights for them. This approach needs less pre-processing compared with other shallow classification algorithms [26]. For input images, a CNN uses filters to capture spatial and temporal dependencies [27]. In a CNN, the input has height m, width m, and r channels (the depth), so the input volume is m × m × r. There are several kernels of size k in every convolution layer [28]. As mentioned previously, this filtering is the basis of the connections, along with the production of k feature maps, each of size (m, m, 1), with shared parameters.
The convolution layer calculates the dot product, similar to an MLP, between weights and inputs, except over a small region of the original input volume, as shown in Equation (1). In addition, an activation function introduces non-linearity into the output of the convolutional layers [27]: $h^{k} = f\left(W^{k} \ast s + b^{k}\right)$ (1), where the output of the current layer k is denoted by $h^{k}$, the kernel or weight of the current layer is indicated by $W^{k}$, s is the output of the previous layer, and $b^{k}$ is the bias of the current layer. The number of computational parameters is an essential indicator of a deep learning model's complexity. The output feature maps can be described according to the following formula [27]: $M = \frac{N - F}{S} + 1$ (2), where the input map dimension is denoted by N, the filter dimension or receptive area by F, M refers to the output map dimension, and S to the stride length. Usually, padding is used to guarantee that input and output have the same size during convolution operations. The padding amount varies according to the kernel size; the number of rows and columns of padding is calculated in Equation (3) [29]: $P = \frac{F - 1}{2}$ (3), where the amount of padding is symbolized by P and F represents the dimension of the kernel (see the short sketch below). One of the most important principles in computer engineering is the reusability of components. In turn, many architectures have been introduced, including AlexNet, ResNet-50, ResNet-101, VGG-16, and VGG-19 [27,30]. Therefore, we intend to reuse the model in accordance with transfer learning guidelines. The transfer learning process reuses information from the source domain in the target domain [31]. See Figure 2 for further explanation. Parameter optimization, structural reformulation, regularization, etc., are different improvement categories that have attracted the interest of many research communities. However, the main drive in CNN performance improvement appears to have come from the rearrangement of processing units and the design of new blocks. The majority of advancements in CNN designs have been carried out in the areas of depth and spatial exploitation, to develop an excellent internal representation from raw pixels without requiring extensive processing. AlexNet is considered a feed-forward CNN with a depth of eight layers and a spatial exploitation architecture [32]. It has five convolution layers (conv1 through conv5) as well as three fully connected layers (fc6, fc7, fc8) [33].
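Equations (2) and (3) can be illustrated with a tiny Python sketch. This is a hedged illustration: the 2P term is the usual generalization of Equation (2) when padding is applied, and the example values (a 224 × 224 input with an 11 × 11 kernel and stride 4, as in AlexNet's first convolution layer) are assumptions chosen for the example, not figures quoted from the paper.

```python
def output_map_size(N, F, S, P=0):
    """Output feature-map dimension: (N - F + 2P) / S + 1; with P = 0 this is Equation (2)."""
    return (N - F + 2 * P) // S + 1

def same_padding(F):
    """Padding per Equation (3), so that output size equals input size (odd F, stride 1)."""
    return (F - 1) // 2

print(output_map_size(224, 11, 4, P=2))   # -> 55
print(same_padding(3))                    # -> 1
```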
It was trained by classifying 1 million photos into 1000 different categories [23]. Each ResNet type, such as ResNet-50 and ResNet-101, has its residual block. ResNet-50 is a 50-layer network that cascades from a convolution layer through 16 residual blocks and finally to a fully connected layer. ResNet-101 has a total of 101 layers and 33 residual blocks [35]. Table 1 shows how contemporary models compare in terms of error, network parameters, the maximum number of connections, and more. Machine learning (ML) algorithms are known for learning underlying relationships in data and making decisions without the need for explicit instructions. The capacity of a CNN to utilize spatial or temporal correlation in data is one of its most appealing features. A support vector machine (SVM) is a shallow classification algorithm developed by Vapnik [36]. The SVM classification algorithm reduces learning steps and offers a quicker solution than other common algorithms [37,38]. The SVM classifier is built on the concept of the most appropriate hyper-planes, which are used to differentiate between two classes, positive or negative, as shown in Equation (4) [39], by including the kernel function.
Brief Coverage of Previous Works
Many researchers are currently encouraged to establish early detection models to detect COVID-19 infection before an outbreak. Zhou Tao et al. [40] proposed EDL_COVID (an ensemble deep learning model) to detect COVID-19 disease from 2933 CT images. The proposed model depends on the three ensemble models AlexNet, GoogleNet, and ResNet. An ensemble strategy was proposed by Rohit Kundu et al. [41] for detecting COVID-19 in CT scan images of human lungs. They employed two datasets of CT scan images to create decision scores for the proposed ensemble model utilizing three CNN models: VGG-11, ResNet-50-2, and Inception v3. The authors of [15] proposed a deep convolutional 3D neural network called DeCoVNet to identify COVID-19 from CT images. When COVID-19 was diagnosed, the algorithm worked as a black box, because it focused on DL and was still at an early stage of explanatory ability. COVNET [16] developed and tested the efficiency of COVID-19 detection utilizing chest CT. The researchers proposed a 3D deep learning system.
The robustness evaluation of the model included community-acquired pneumonia (CAP) and other non-pneumonia exams. Yang et al. [18] assessed the diagnostic value and consistency of chest CT in comparison with the RT-PCR assay for COVID-19. They suggested that chest CT should be considered, particularly in epidemic areas with a high preliminary probability of disease, for COVID-19 screening, comprehensive assessment, and follow-up. Horri et al. [32] used three different medical imaging modalities (X-ray, ultrasound, and CT) for diagnosing COVID-19 reliably and automatically. They utilized a deep VGG transfer learning network to refine their analysis. The accuracy of their classification was stated to be 86 percent, 84 percent, and 100 percent for three different datasets. Ying et al. obtained 94% accuracy and a 99% AUC with CT images utilizing a deep model based on ResNet50, known as DRE-Net [42]. They also considered an approach for target identification, i.e., indicating the areas of concern with bounding boxes [43]. The VGG architecture [44] has been used to diagnose symptomatic lung regions [34]; the suggested method distinguishes community-acquired pneumonia (CAP) and non-pneumonia (NP) cases from COVID-19 in the population. Jiang et al. [15] proposed an early screening strategy using pulmonary CT imaging to distinguish COVID-19 from viral influenza pneumonia and healthy cases. Several CNN models were suggested and utilized to classify the CT image datasets and quantify the risk of COVID-19 infection. The results may be beneficial for deep learning technologies in the early screening of COVID-19 patients. The authors proposed a location-attention mechanism in the classic ResNet used for feature extraction. The AIMDP model [42] was suggested, using various artificial intelligence techniques to improve the model's diagnostic and predictive roles. The authors of [32] developed a deep learning framework for the detection of viral pneumonia in CT. The authors in [44] also provide an overview of the most recent artificial intelligence systems for COVID-19 diagnostics in X-ray images; however, their work was based on X-ray images only. To estimate COVID-19 diagnostics, Ghoshal et al. [45] presented a Bayesian convolutional neural network, differentiating between COVID-19 and non-COVID-19 cases with 92.9 percent accuracy. Binary classification was carried out by Narin et al. [46] for detecting COVID-19, achieving the best accuracy of 98.0% with ResNet50 models among the various deep learning (DL) models compared. Zhang et al. [47] presented a ResNet model for COVID-19 (0.952 AUC) and illustrated the affected pneumonia areas by applying the Grad-CAM gradient activation approach. These studies have provided detailed solutions for combating the COVID-19 pandemic. However, there are certain drawbacks to be taken into account. In the best case, researchers used small datasets of fewer than 400 COVID-19 images. In some cases, only 10 X-ray images were used for the COVID-19 class to validate the framework. Furthermore, there was no basis for comparison or medical supervision of the obtained results, which could indicate not only COVID-19 identification but also the location of affected areas in the lungs. For iteratively sliced COVID-19 identification using X-ray images, a deep learning model ensemble was proposed [49]. This research made use of a CNN and a set of pre-trained models. The proposed algorithm enhances memory efficiency while reducing complexity.
Architecture of the Smart CAD System
The proposed CAD system depends on deep learning, transfer learning, and shallow machine learning. In deep learning, multiple hidden layers are stacked for learning objects. These layers require a training process, including "fine-tuning", to slightly adjust the weights of the DNN found in pre-training during the backpropagation procedure. Hence, DL nets can extract and classify the features and effectively make a precise decision after an efficient training process. Transfer learning is used in the proposed CAD system to optimize multiple CNN architectures for the datasets. The transfer-learning methodology generates optimally fitted CNNs for the datasets, capable of classifying and diagnosing infection in COVID-19 scan images. In addition, these fine-tuned models can extract the feature set usable by the different shallow classifiers. Figure 3 shows a context diagram, starting from scanning the image of the inspected case until the infection response is detected using the proposed smart CAD system. The key components comprising the proposed CAD system can be divided into:
• Classification: a vital component of the smart CAD system in which different architectures can be alternately used. These models are responsible for extracting the features and for the classification.
• Decision Unit: this depends on the most common and powerful DL activation function, ReLU. It is the subsequent responsibility of the classification component to make a decision.
Figure 4 shows the different phases of the proposed system in a layered sub-black-box style, in which the essential layers of the proposed smart CAD system are briefly described. According to current knowledge, all COVID-19 detection systems consist of a few significant layers: input data, model layer, activation layer, and model output layer for CXR or CT scan image analysis. In turn, the classification and decision in every CAD system using deep learning must include a collection of these different layers. Each group of these layers in a specific order is called a network architecture, running from input layer to output layer (e.g., AlexNet, VGG-16, VGG-19). Next, a brief description of each layer's role and its importance for the medical CAD system is discussed in detail.
Input Layer
This layer reads the image data collection in advance. In other words, the CXR and the CT scan images are pre-processed independently. In the pre-processing phase, the images are reconstructed and resized. The images are taken from various sources, and their dimensions vary, since images taken from medical instruments often carry added letters, annotation marks, and medical symbols. Moreover, the model layer of each of these architectures requires its own input image dimensions. Therefore, the input image size was adjusted to fit the templates used in this analysis, rather than cropping the lung and chest area as far as possible.
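The resizing step of the input layer can be sketched as follows. This is a hypothetical Python illustration (the paper's implementation is in MATLAB); the 224 × 224 target size, the replication of grayscale scans to three channels, and the [0, 1] scaling are assumptions, since the paper does not state them.

```python
from PIL import Image
import numpy as np

def load_scan(path, target_size=(224, 224)):
    """Read a CXR or CT image and resize it to the input size expected by the model."""
    img = Image.open(path).convert("RGB")        # grayscale scans replicated to 3 channels
    img = img.resize(target_size)
    return np.asarray(img, dtype=np.float32) / 255.0   # simple [0, 1] scaling

# Example usage with a hypothetical file name:
# x = load_scan("ct_scan_001.png")
# print(x.shape)   # (224, 224, 3)
```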
Model Layer
This layer represents the leading layer of the proposed smart CAD system, in which most calculations are carried out. The calculations include extracting image dataset features and preserving the spatial relationship between image pixels. Next, the data are moved from the input layer to the model layer. This layer contains four sub-black boxes. The CNN-based AlexNet sub-layer was used with the aim of utilizing AlexNet's pre-trained approach to diagnose COVID-19. The second sub-layer is the CNN-based ResNet, in two versions, ResNet50 and ResNet101, distinguished from other architectures by adding residual blocks that feed their values into the following layers. A shortcut block is added every two layers, between the linear operation and the ReLU activation. However, the ResNet101 architecture uses more of these three-layer blocks than ResNet50. The ResNet50 model offers fast training and considerable benefit because image residuals are learned rather than the underlying features directly [35]. The third sub-layer is the CNN-based VGG sub-layer. Although it is a single model, its main advantage over earlier architectures is its more systematic organization, with convolution layers stacked in groups of two or three. VGG has a strong representation of features, and the model can serve as a helpful extractor for new images [34]. The last sub-layer is the SVM classification. Since the SVM is a good classification algorithm, it can be used to classify features that have already been extracted; the features are derived from the previous sub-layers (see Figure 5, and the sketch following this section).
Activation Layer
This layer is a non-linear map of the CNN architectures that works at the end of the learning phase to replace negative pixel values with zero in the convolved features.
Output Layer
Based on the output score of the activation layer, the final classification response is provided as an output label. The resulting label can be numerically categorized or encoded; for example, "0" is marked as COVID-19 (i.e., the positive event), "1" as regular cases, and "2" as other cases of pneumonia, etc.
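The hybrid path, a pre-trained CNN used as a feature extractor feeding an SVM, can be sketched in Python. This is not the authors' MATLAB implementation: the choice of VGG16 with global average pooling, the RBF kernel, and the variable names are assumptions made for illustration.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Pre-trained VGG16 used purely as a feature extractor (pooled convolutional features).
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

# X_train, y_train would hold pre-processed scans and labels (e.g., 0 = COVID-19, 1 = normal).
# feats = deep_features(X_train)
# svm = SVC(kernel="rbf", C=1.0).fit(feats, y_train)
# pred = svm.predict(deep_features(X_test))
```

The same extractor can serve several shallow classifiers, which is the modularity the model layer description emphasizes.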
Experimental Results
The proposed model was evaluated in depth to assess the efficiency of the solutions and examine the impact of transfer learning and self-controlled learning. In the following subsections, we describe the datasets used for the proposed CAD system, the experimental environment and settings, and the results in terms of performance metrics.
Dataset Description
Three datasets were used in these experiments, two of which contain CXR images, while the last contains CT images. The acquired dataset of CT scans was divided into 4001 COVID-19 and 15,684 non-COVID-19 images, whereas the first CXR dataset consists of 219 COVID-19 and 2686 non-COVID-19 images. The second CXR dataset comprises 3616 COVID-19 and 17,549 non-COVID-19 images. The evaluation uses the holdout procedure with an 80% training set and a 20% testing set. See Table 2 for a brief description of the dataset details. Figures 6 and 7 show a montage preview of the CT and CXR images.
Computer System Configuration
The proposed CAD system was implemented using MATLAB R2020a with the computer vision, image processing, neural networks, and deep learning toolboxes. The CAD system runs on an HP ZBook workstation with Windows 10 64-bit, CPU: i7-6820HQ, RAM: 32 GB, and GPU: 8 GB.
Parameter Settings
All networks were trained as follows: SGDM optimizer, initial learning rate 0.0001, and validation frequency 5. At every epoch (a complete cycle of training iterations), the dataset was shuffled, and the training process was stopped when no significant change was observed. For all networks, the dataset was divided into 80% and 20% for the training and validation sets, respectively. For all networks, the same training and validation datasets were chosen to facilitate the performance comparison of the networks.
Performance Metrics
For the proposed CAD system, there are different performance metrics for evaluating efficiency and effectiveness. In this setting, the negative and positive cases are assigned to the non-COVID-19 and COVID-19 infection groups, respectively. Accordingly, the numbers of correctly detected COVID-19 and non-COVID-19 cases are represented by N_TP and N_TN, respectively, whereas N_FP and N_FN indicate the numbers of incorrectly diagnosed COVID-19 and non-COVID-19 cases, respectively. Table 3 gives a brief description of the most common metrics used for evaluating the proposed CAD system (see also the sketch below).
Experiment Design: Result Evaluation and Discussion
The proposed CAD system was evaluated using two scenarios per dataset; hence, six experiments were performed. The first scenario depends on optimizing parameters and fine-tuning pre-trained networks as an end-to-end CAD component. The second scenario employs the component developed in the first scenario as a feature extraction engine. The feature extraction engine then passes these feature sets to an SVM classifier, boosted by optimizing the kernel function, as a hybrid learning CAD component. The recorded results were captured per dataset for the two scenarios, and the most effective model per dataset was determined. In the following, the results are divided into three subsections, one for each dataset.
CT Scan Dataset
The experiments were started with the CT scan images, and, as mentioned above, there are two scenarios for each dataset. The first scenario is presented here with the two-class label, the normal state and COVID-19. The same scenario was performed for the three-class dataset.
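The metrics defined in the Performance Metrics subsection can be computed directly from the confusion counts. The function below uses the standard definitions; Table 3 itself is not reproduced, and the example numbers are illustrative only, not results from the paper.

```python
def classification_metrics(n_tp, n_tn, n_fp, n_fn):
    """Accuracy, precision, and recall from the confusion counts N_TP, N_TN, N_FP, N_FN."""
    accuracy = (n_tp + n_tn) / (n_tp + n_tn + n_fp + n_fn)
    precision = n_tp / (n_tp + n_fp) if (n_tp + n_fp) else 0.0
    recall = n_tp / (n_tp + n_fn) if (n_tp + n_fn) else 0.0
    return accuracy, precision, recall

# Illustrative confusion counts (not from the paper):
print(classification_metrics(n_tp=390, n_tn=3100, n_fp=12, n_fn=10))
```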
Tables 4 and 5 show the numerical results for the first scenario on the CT scan dataset, reporting three metrics, namely accuracy, precision, and recall, for each of the fully deep learning and hybrid learning solutions. The experimental analysis shows the superiority of the proposed models across the various metrics. Therefore, the same experiment was repeated for the two new X-ray datasets, as outlined in the following two subsections.
The First X-ray Dataset
As in the previous subsection, our proposed models are either fully deep learning models or hybrid models with an SVM, here applied to the first X-ray dataset. The experiment starts with the two-class label and then the three-class label, as discussed before. Tables 6 and 7 show the results for the two-class and three-class labels, respectively. Table 6 gives the performance measures of the different learning models for the X-ray dataset (two-class), where the bolded number indicates the best result among the classification models. In this experiment, on the two-class dataset, the hybrid VGG19-SVM shows the best performance measures compared with the other models. With the three-class dataset, the fully deep learning method (enhanced TL of VGG16) gives better accuracy and recall than the hybrid learning solutions.
The Second X-ray Dataset
Lastly, the two scenarios were applied to the second X-ray dataset. The results of this experiment are shown in Tables 8 and 9 for the two-class and three-class labels, respectively.
Discussion
This section discusses the superiority of the proposed models versus the related models in recent literature studies. The proposed model handles multi-source scan images in a modular way, including CT scan and X-ray images. First, for the CT scan, Table 10 compares our proposed model, taken from the comparative study in Table 4, with the literature for the same dataset and inputs; the results of this experiment are visualized in Figure 8. The second scenario was performed on the three-class label for the same dataset. All comparative results were replicated for the three-class label dataset, as shown in Table 11 and Figure 9. In turn, the experiments showed that the end-to-end VGG16 with the binary class was superior to the hybrid model; with three classes, the hybrid model achieved better results, and both showed better results than the comparative study from the literature.
Figure 9. Comparative study between the proposed model and other literature models using the CT dataset (three-class) [50].
Second is the first X-ray dataset, where the proposed model obtained an accuracy lower than that of Muhammed E.H. et al. [51] by around 0.89%, while achieving a much more reasonable recall rate. Consequently, our proposed model does not under-fit or over-fit with regard to a specific label (see Table 12). The proposed model achieves balanced classification rates between the different labels in the given dataset. Furthermore, the proposed model achieves a notable enhancement compared to the others in terms of accuracy, precision, and recall, as illustrated in Figure 10 for binary classification.
Table 13 and Figure 11 demonstrate the superiority of the proposed model versus the models in the literature for three classes.
Figure 11. Comparative study between the proposed model and other literature models using the X-ray dataset (three-class) [51][52][53][54].
In the third dataset, the hybrid learning solution provided better results than fully deep learning. For the binary label, the proposed enhanced TL VGG16+SVM demonstrated its superiority (see Table 14). Figure 12 presents the visual analysis of the proposed model for the binary classifier in terms of accuracy, precision, and recall. The proposed enhanced TL VGG19+SVM showed its effectiveness for the three-class label dataset (see Table 15). Figure 13 shows a graphical bar chart analysis of the proposed model versus the models in the literature; both the binary and multiclass models show improvements in accuracy compared to those in the literature.
Comparative study between the proposed model and other literature models using the X-ray dataset (two-class) [51][52][53][54].
Conclusions
This paper proposes a CAD system for detecting COVID-19 infection. An excellent diagnostic performance was demonstrated using both CT and CXR images. In addition, the CAD system is superior to those found in the literature.
The CAD system could be a supplementary, reliable analysis tool for diagnosing COVID-19 cases using CXR and CT images. Visible features in CT scan images, such as the intensity, shape, size, and nodule margins, may influence the diagnostic efficiency of the CAD system. Furthermore, junior radiotherapists lacking experience can use the helpful suggestions provided by the proposed CAD system.
The Matching Polynomial of a Distance-regular Graph
A distance-regular graph of diameter d has 2d intersection numbers that determine many properties of the graph (e.g., its spectrum). We show that the first six coefficients of the matching polynomial of a distance-regular graph can also be determined from its intersection array, and that this is the maximum number of coefficients so determined. Also, the converse is true for distance-regular graphs of small diameter; that is, the intersection array of a distance-regular graph of diameter 3 or less can be determined from the matching polynomial of the graph.
1. Introduction. Distance-regular graphs are highly regular combinatorial structures that often occur in connection with other areas of combinatorics (e.g., designs and finite geometries), and many of their properties can be determined from their intersection numbers. These properties include the eigenvalues of the graph, and their multiplicities, and hence we can determine the characteristic polynomial of a distance-regular graph from knowledge of its intersection numbers. It is also known that the characteristic polynomial of any graph can be determined by computing the circuit polynomial and converting it to a polynomial in a single variable via a specific set of substitutions. Since the intersection array and the circuit polynomial of a distance-regular graph both determine its characteristic polynomial, it was natural to investigate the relationship between the circuit polynomial and the intersection array for distance-regular graphs. Specifically, could the circuit polynomial be determined from the intersection array? We show that the answer to this question is no, even though a portion of the circuit polynomial (which is also part of the matching polynomial) can be computed from just the intersection array. We are concerned here with the matching polynomial of a distance-regular graph, which is a constituent of the circuit polynomial. The initial portion of the matching polynomials (and other graph polynomials) of many common regular graphs have been computed (e.g., [7, 8, 9]). In the cases where these regular graphs are distance-regular (complete graphs, circuits, complete bipartite graphs, hypercubes), our results serve to generalize and unify the determination of the initial coefficients of the matching polynomials. Furthermore, the matching polynomial is also a constituent of many other graph polynomials (in addition to the circuit polynomial), so these results also apply to other more specific polynomials.
2. Graph polynomials. The matching polynomial is an example of a general graph polynomial, which we now describe. The first element in the construction of a graph polynomial is a family of graphs, F, such as all trees or all circuits. Typically such a family is infinite, and it often includes a single vertex and a single edge as members. To each member of this family a weight is assigned. Often this weight is an indeterminate, subscripted by either the number of vertices or the number of edges in the graph. Having chosen a family F and a weighting scheme, we compute the F-polynomial of a graph G by first finding the spanning subgraphs of G in which each component is an element of F. Such a spanning subgraph is called an F-cover. For each cover, take the product of the weights of the components, and then sum these terms over all the F-covers of the graph. The resulting polynomial is the F-polynomial of G. Throughout this paper, we consider only F-polynomials constructed by assigning the indeterminate $w_i$ to a component with i vertices. For more on the general properties of F-polynomials, see [6]. Presently, we are interested in the matching polynomial of a graph. We take F to be the family consisting of just a vertex and an edge. In this case, a cover will consist of disjoint edges and isolated vertices, that is, a matching in the graph. The resulting matching polynomial has terms of the form $c\,w_1^{n-2m}w_2^{m}$, where n is the number of vertices and c is the number of matchings in G that have m edges. Thus, finding the matching polynomial of a graph is equivalent to finding the number of m-matchings in the graph, for all m.
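For a very small graph, the m-matching counts in this definition can be computed by brute force. The following Python sketch is an illustration only (the function and variable names are mine, not from the paper) and enumerates subsets of edges, so it is suitable only for tiny examples.

```python
from itertools import combinations

def matching_counts(edges):
    """Number of m-matchings of a graph, for all m, by brute force.
    edges: list of vertex pairs. Suitable only for very small graphs."""
    counts = {0: 1}  # the empty matching
    for m in range(1, len(edges) + 1):
        c = 0
        for subset in combinations(edges, m):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):   # no two edges share a vertex
                c += 1
        if c == 0:
            break
        counts[m] = c
    return counts

# Example: the 5-cycle C5, a distance-regular graph of degree 2.
print(matching_counts([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))   # {0: 1, 1: 5, 2: 5}
```

For C5 these counts give the matching polynomial $w_1^{5} + 5w_1^{3}w_2 + 5w_1w_2^{2}$.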
Many of the families used to construct interesting F-polynomials include a vertex and an edge, and therefore all of the matchings of the graph are created as F-covers. The terms of the form $c\,w_1^{n-2m}w_2^{m}$ in the F-polynomial then coincide with the matching polynomial itself; in other words, the matching polynomial is a subpolynomial of the F-polynomial. For example, the circuit polynomial is formed by taking F to be the set of all circuits, with a vertex and an edge viewed as degenerate circuits on 1 and 2 vertices, respectively. So, the matching polynomial is a subpolynomial of the circuit polynomial. As mentioned in the introduction, the characteristic polynomial of a graph can be determined from the circuit polynomial. The characteristic polynomial in the single variable λ is obtained by making the following substitutions into the circuit polynomial [6]: $w_1 = \lambda$, $w_2 = -1$, and $w_m = -2$ for $m > 2$.
3. Matching polynomials. In order to find the initial portion of the matching polynomial of a distance-regular graph, we use some results from [1, 2] that give expressions for the number of certain subgraphs present in a regular graph. The coefficients of the matching polynomial are equal to the number of matchings in the graph that have a specified number of edges. For an arbitrary regular graph, we give expressions for the number of such matchings that have five or fewer edges. More details and proofs of these results can be found in [1, 2]. We adopt the following notation. For a regular graph G, let n be the number of vertices and r be the degree. We also require the number of certain types of subgraphs in G (it is important to note that these are not vertex-induced subgraphs, but simply graphs whose edge and vertex sets are subsets of those of G). Let $M_i$ be the number of matchings on i edges, that is, subgraphs with i edges and 2i vertices (each of degree 1). Let T, S, and P be the number of subgraphs, respectively, that are 3-cycles (triangles), 4-cycles (squares), and 5-cycles (pentagons). Finally, let N be the number of subgraphs that consist of a triangle together with a single edge joining a fourth vertex to one vertex of the triangle, and let H be the number of subgraphs equal to a complete graph on 4 vertices with one edge deleted.
Theorem 3.1. Suppose that G is a regular graph of degree r on n vertices. Then, using the notation given above, the numbers of matchings $M_1$ through $M_5$ can be expressed in terms of n, r, and the subgraph counts defined above; these expressions are referred to below as (3.3) through (3.7).
Proof. Results from [1, 2] describe how to construct a set of linear equations which, when solved, yield these expressions (in addition to others that specify the numbers of other types of subgraphs). The original linear equations were generated by a PASCAL program, and the solutions were found using the symbolic manipulation capabilities of the program Mathematica.
4. Distance-regular graphs. Loosely speaking, a distance-regular graph has much combinatorial regularity, which in turn implies some amazing algebraic properties. Examples include complete graphs, complete bipartite graphs, hypercubes, Petersen's graph and some of its generalizations, line graphs of some distance-regular graphs, and graphs related to other incidence structures such as designs and finite geometries. A good introduction to distance-regular graphs can be found in [3], and a more advanced treatment is given in [4]. Here, we list some pertinent facts that are needed later.
Let ∂(u, v) denote the distance between two vertices u and v. Then, given vertices u and v at a distance i apart in a graph, the intersection number of (u, v) is defined as $p^{i}_{jk}(u, v) = |\{w : \partial(u, w) = j \text{ and } \partial(w, v) = k\}|$. A graph is distance-regular if the intersection numbers depend on the choice of i, j, and k, but not on the choice of the particular pair of vertices, u and v, that are a distance i apart. A distance-regular graph is usually described by referencing the following special cases of the intersection numbers (where the subscripts make sense): $c_i = p^{i}_{1,i-1}$, $a_i = p^{i}_{1,i}$, and $b_i = p^{i}_{1,i+1}$. It can be shown that knowledge of these numbers is sufficient to compute all of the intersection numbers of a distance-regular graph. A distance-regular graph must be regular, and if the degree is $b_0 = r$, then $a_i + b_i + c_i = r$. Thus, if the graph has diameter d, the intersection numbers are determined by the intersection array $\{b_0, b_1, \ldots, b_{d-1};\ c_1, c_2, \ldots, c_d\}$. It can be shown that the intersection array of a distance-regular graph is sufficient to determine the characteristic polynomial of the graph (see [3]).
First term. Since there is only one matching with no edges, the first term of the matching polynomial has the form $w_1^{n}$, where n is the number of vertices in G. Let $G_i(v)$ denote the set of vertices at distance i from vertex v, and let $k_i = |G_i(v)|$. Then, by counting the edges joining the vertices in $G_{i-1}$ to the vertices in $G_i$, we find that $b_{i-1}k_{i-1} = c_ik_i$. Repeated use of this equation, together with the initial condition $k_0 = 1$, allows us to find n via $n = \sum_{i=0}^{d} k_i$, and thus we can obtain the first term from the intersection array.
Second term. The number of matchings that have a single edge is equal to the number of edges in the graph. If we let r denote the degree of the vertices in the graph, then the coefficient of $w_1^{n-2}w_2$ is $nr/2$ (3.3). However, $r = b_0$, so we are able to obtain this coefficient from the intersection array.
Third term. The coefficient of $w_1^{n-4}w_2^{2}$ is given by (3.4) and depends solely on n and r, which, as we have already seen, can be determined from the intersection array.
Fourth term. The coefficient of $w_1^{n-6}w_2^{3}$ is given by (3.5) and depends on n, r, and the number of triangles T in the graph. How many triangles does a distance-regular graph have? Let v be a vertex of G, and let $T_v$ be the number of triangles that pass through v. The edge opposite to v in any such triangle joins two vertices of $G_1(v)$, and any edge joining two vertices of $G_1(v)$ yields a triangle that has v as a vertex. Now, $G_1(v)$ induces a subgraph that is regular of degree $a_1$ on $k_1$ vertices and therefore has $a_1k_1/2$ edges. Since each of these edges corresponds to a triangle, we have $T_v = a_1k_1/2$. Then, if we sum over all the vertices of G, we count each triangle three times, once for each vertex. Hence $3T = \sum_v T_v = na_1k_1/2$, so that $T = na_1k_1/6$. Thus, we can obtain the number of triangles from the intersection array. Together with n and r, we can then find the fourth coefficient.
Fifth term. The coefficient of $w_1^{n-8}w_2^{4}$ is given by (3.6) and depends on n, r, T, and S. We count the number of squares that have v as a vertex. Let v be a vertex of G, and let $S_v$ be the number of squares that contain v as a vertex. Let w be the vertex opposite to v in a square, that is, ∂(v, w) = 2 in the square. However, the distance between v and w may be 1 or 2 in G, so we consider two cases.
Case 1. Suppose that ∂(v, w) = 1. The vertex w is adjacent to $a_1$ vertices in $G_1(v)$, and to each pair of these vertices there corresponds a square through v with w as the opposite vertex. Thus, we get $\binom{a_1}{2}$ squares through w, and there are $k_1$ choices for w, yielding a total of $k_1\binom{a_1}{2}$ squares in this case.
Case 2. Suppose that ∂(v, w) = 2. The vertex w is now an element of $G_2(v)$ and has $c_2$ neighbors in $G_1(v)$. To each pair of these neighbors there corresponds a square through v with w as the opposite vertex. Thus, we get $k_2\binom{c_2}{2}$ squares in this case.
Combining these two cases and using $k_2c_2 = k_1b_1$ and $k_1 = r$, we get $S_v = r\binom{a_1}{2} + \frac{rb_1(c_2-1)}{2}$. If we sum $S_v$ over all the vertices, we count each square 4 times, so $S = nS_v/4$. Since the number of squares can thus be determined from the intersection array, we can also determine the fifth coefficient.
Sixth term. The coefficient of $w_1^{n-10}w_2^{5}$ is given by (3.7) and depends on n, r, T, S, P, and H. Fix a vertex v, and let $H_v$ be the number of graphs counted in H that have vertex v as one of the vertices of degree 3. The other three vertices must be in $G_1(v)$ and induce a subgraph in $G_1(v)$ which is a path of length 2. There are $k_1$ ways to choose the central vertex of the path and $\binom{a_1}{2}$ ways to choose the adjacent endpoints. To each such triple of vertices from $G_1(v)$ corresponds a graph counted in $H_v$, so $H_v = k_1\binom{a_1}{2}$. If we sum $H_v$ over all the vertices of G, we count each graph twice, so $H = \frac{n}{2}k_1\binom{a_1}{2}$. Fix a vertex v, and let $P_v$ be the number of graphs counted in P which have v as a vertex. To obtain an expression for $P_v$, we consider the two vertices x and y which are at distance 2 from v in the pentagon. These two vertices may be at distance 1 or 2 from v in G, which gives rise to the following three cases.
Case 1. Suppose that ∂(v, x) = ∂(v, y) = 2. In this case, x and y are adjacent vertices in $G_2(v)$. Then $G_2(v)$ induces a regular graph of degree $a_2$ on $k_2$ vertices and thus has $k_2a_2/2$ edges. Each such edge determines a pair of vertices x and y. Now we must connect both x and y to vertices w and z, respectively, in $G_1(v)$ (w and z need not be distinct). In each case, this can be done in $c_2$ ways. So, we can create $c_2^{2}k_2a_2/2$ apparent pentagons, except when w = z. In that case, we have a graph composed of a triangle with an additional vertex of degree 1 adjacent to one of the vertices of the triangle, a graph of the type counted by N. Here, v is the vertex of degree 1, and x and y are the two vertices of degree 2. Since x and y are at distance 2 from v in G, we denote the number of such graphs by $N_{v,22}$.
Case 2. Suppose that ∂(v, x) = ∂(v, y) = 1. In this case, x and y are adjacent vertices in $G_1(v)$. Then $G_1(v)$ induces a regular graph of degree $a_1$ on $k_1$ vertices and thus has $k_1a_1/2$ edges. Each such edge determines a pair of vertices x and y. Now we must connect both x and y to new vertices w and z, respectively, also in $G_1(v)$. In each case, this can be done in $a_1 - 1$ ways. So, we can create $(a_1-1)^{2}k_1a_1/2$ apparent pentagons, except that, again, we have not prevented the possibility that w = z. In the case that w = z, we again obtain graphs counted by N, which we count as $N_{v,11}$ since the two vertices of degree 2 are at distance 1 from v in G.
Sixth term. The coefficient of $w_1^{n-10}w_2^5$ is given by (3.7) and depends on n, r, T, S, P, and H. Fix a vertex v, and let $H_v$ be the number of graphs counted in H that have vertex v as one of the vertices of degree 3. The other three vertices must be in $G_1(v)$ and induce a subgraph in $G_1(v)$ which is a path of length 2. There are $k_1$ ways to choose the central vertex of the path and $\binom{a_1}{2}$ ways to choose the adjacent endpoints. To each such triple of vertices from $G_1(v)$ corresponds a graph counted in $H_v$, so $H_v = k_1\binom{a_1}{2}$. If we sum $H_v$ over all the vertices of G, then we count each graph twice, so

$$H = \frac{n H_v}{2} = \frac{n k_1}{2}\binom{a_1}{2}.$$

Fix a vertex v, and let $P_v$ be the number of graphs counted in P which have v as a vertex. To obtain an expression for $P_v$, we consider the two vertices x and y which are a distance 2 from v in the pentagon. These two vertices may be a distance 1 or 2 from v in G, which gives rise to the following three cases.

Case 1. Suppose that ∂(v, x) = ∂(v, y) = 2. In this case, x and y are adjacent vertices in $G_2(v)$. Then $G_2(v)$ induces a regular graph of degree $a_2$ on $k_2$ vertices and thus has $k_2 a_2 / 2$ edges. Each such edge determines a pair of vertices x and y. Now we must connect both x and y to vertices w and z, respectively, in $G_1(v)$ (w and z need not be distinct). In each case, this can be done in $c_2$ ways. So, we can create $c_2^2 k_2 a_2 / 2$ apparent pentagons, except when w = z. In this case, we have a graph that is composed of a triangle with an additional vertex of degree 1 adjacent to one of the vertices of the triangle, that is, a graph of the type counted by N. Here, v is the vertex of degree 1, and x and y are the two vertices of degree 2. Since x and y are a distance 2 from v in G, we denote the number of such graphs as $N_{v,22}$.

Case 2. Suppose that ∂(v, x) = ∂(v, y) = 1. In this case, x and y are adjacent vertices in $G_1(v)$. Then $G_1(v)$ induces a regular graph of degree $a_1$ on $k_1$ vertices and thus has $k_1 a_1 / 2$ edges. Each such edge determines a pair of vertices x and y. Now we must connect both x and y to new vertices w and z, respectively, also in $G_1(v)$. In each case, this can be done in $a_1 - 1$ ways. So, we can create $(a_1 - 1)^2 k_1 a_1 / 2$ apparent pentagons, except that, again, we have not prevented the possibility that we have chosen w = z. In the case that w = z, we again obtain graphs counted by N, which we count as $N_{v,11}$ since the two vertices of degree 2 are a distance 1 from v in G.

Case 3. Suppose that ∂(v, x) = 1 and ∂(v, y) = 2. We can assume, without loss of generality, that y is the vertex in $G_2(v)$; there are $k_2$ ways for this to occur. There are $c_2$ vertices in $G_1(v)$ which are adjacent to y and which could be x. Now we need to choose two more vertices in $G_1(v)$. First, z should be adjacent to y and should not be equal to x. This can be done in $c_2 - 1$ ways. Second, w should be adjacent to x, which can be done in $a_1$ ways. So, it appears that there are $k_2 c_2 (c_2 - 1) a_1$ pentagons, but we have not ruled out the possibility that w = z. In the case that w = z, we obtain graphs that we count as $N_{v,12}$ since the two vertices of degree 2 are at distances 1 and 2 from v in G.

To consolidate these three cases, notice that if $N_v$ is the number of graphs counted in N that have v as the single vertex of degree 1, then $N_v = N_{v,11} + N_{v,12} + N_{v,22}$. Then, we have

$$P_v + N_v = \frac{c_2^2 k_2 a_2}{2} + \frac{(a_1 - 1)^2 k_1 a_1}{2} + k_2 c_2 (c_2 - 1) a_1.$$

If we sum this expression over all the vertices v in G, we count each pentagon 5 times and each graph counted in N just once, so that $5P + N = n(P_v + N_v)$. Using (3.1), N can be expressed in terms of quantities already determined, and solving for P shows that the number of pentagons can be obtained from the intersection array. Since both H and P can be determined from the intersection array, we can also determine the sixth coefficient from the intersection array.

The following example illustrates that this result is the best possible. We begin by considering the following pair of distance-regular graphs, which are not isomorphic, yet have the same set of intersection numbers. The first graph is the Hamming scheme, H(2, 4), which has vertices that are strings of length 2 over an alphabet with 4 letters. Vertices are adjacent if the corresponding strings differ in just one position. The resulting graph on 16 vertices is regular of degree 6 and has the intersection array {6, 3; 1, 2}. In [5], Egawa describes a distance-regular graph found by Shrikhande [10] that has the same intersection array as H(2, 4), yet is not isomorphic to H(2, 4). Since these two graphs have identical intersection arrays, they have identical characteristic polynomials. However, their circuit polynomials are not equal. This can be seen quickly by comparing their matching polynomials, which are sub-polynomials of their respective circuit polynomials. The matching polynomials of H(2, 4) and of the Shrikhande graph agree in their first six coefficients, yet they differ in their seventh coefficient. Since their matching polynomials differ, their circuit polynomials cannot be equal and, thus, it is clear that we cannot determine the circuit polynomial (or, for that matter, the matching polynomial) of an arbitrary distance-regular graph by knowing just the intersection array.

An expression analogous to (3.2), (3.3), (3.4), (3.5), (3.6), and (3.7) can be found for $M_6$, the seventh coefficient of the matching polynomial. It depends on n, r, T, S, P, H, and the numbers of each of six other subgraphs. These six subgraphs are the subgraphs with six edges and no vertices of degree 1, one of which is the complete graph on four vertices. The number of subgraphs of a distance-regular graph that are complete graphs on four vertices cannot be determined from the intersection array. This can be demonstrated with the previous example, since the Hamming graph has eight subgraphs that are complete on four vertices, while the Shrikhande graph has none. This partially explains the discrepancy in the seventh coefficient of the matching polynomials of these two graphs.
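The Hamming/Shrikhande comparison can also be verified computationally. The sketch below is our own illustration (it assumes networkx; the Cayley-graph construction of the Shrikhande graph is a standard one and is not taken from this paper): both graphs share the subgraph counts that the intersection array controls, yet they differ in the number of copies of $K_4$.

```python
import itertools
import networkx as nx

# Hamming graph H(2,4): length-2 strings over 4 letters, adjacent iff they
# differ in exactly one position (equivalently, the 4x4 rook's graph).
H24 = nx.Graph()
H24.add_edges_from(((a, b), (c, d))
                   for a, b in itertools.product(range(4), repeat=2)
                   for c, d in itertools.product(range(4), repeat=2)
                   if (a, b) != (c, d) and ((a == c) != (b == d)))

# Shrikhande graph: Cayley graph on Z4 x Z4 with connection set
# {+-(1,0), +-(0,1), +-(1,1)} (a standard construction).
S = nx.Graph()
conn = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}
for x, y in itertools.product(range(4), repeat=2):
    for dx, dy in conn:
        S.add_edge((x, y), ((x + dx) % 4, (y + dy) % 4))

def k4_count(G):
    """Number of 4-vertex subsets inducing a complete graph."""
    return sum(1 for q in itertools.combinations(G.nodes, 4)
               if all(G.has_edge(u, v) for u, v in itertools.combinations(q, 2)))

for name, G in [("H(2,4)", H24), ("Shrikhande", S)]:
    tri = sum(nx.triangles(G).values()) // 3
    print(name, G.number_of_nodes(), "vertices,", tri, "triangles,",
          k4_count(G), "copies of K4")
# Both graphs are 6-regular on 16 vertices with 32 triangles, but the
# Hamming graph should report 8 copies of K4 and the Shrikhande graph none,
# matching the discussion above.
```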
The previous theorem shows how to determine a portion of the matching polynomial of a distance-regular graph from its intersection array; the converse is possible for graphs of sufficiently small diameter.

Theorem 4.2. Suppose that G is a distance-regular graph of diameter 3 or less. Then its intersection array can be found from its matching polynomial.

Proof. We give the proof for graphs of diameter 3. It should be obvious how to shorten the proof for the cases of smaller diameter. Suppose that we have the matching polynomial of a distance-regular graph of diameter 3. The first term of the matching polynomial, $w_1^n$, allows us to determine the number of vertices in the graph, n. The coefficient of the second term is nr/2, and together with n, this is sufficient to determine the degree of the graph, r. Because $b_0$ is the degree of the graph, we have the first element of the intersection array.

The coefficient of the fourth term depends on n, r, and the number of triangles, T (3.5). Thus, we can determine the number of triangles in the graph. In turn, T depends on n, r, and $a_1$ (4.3). Because we know n and r and the dependence on $a_1$ is linear, we can then find $a_1$. Because $c_1 = 1$ for any distance-regular graph, and $r = a_1 + b_1 + c_1$, we can also determine $b_1$.

The coefficient of the fifth term depends on n, r, T, and the number of squares, S (3.6). Thus, we can determine the number of squares in the graph. In turn, S depends on n, r, $a_1$, $b_1$, and $c_2$ (4.5). Because we know n, r, $a_1$, and $b_1$ and since the dependence on $c_2$ is linear, we can then find $c_2$.

We can compute H with the information at hand, since we need only n, r, and $a_1$ (4.6). The coefficient of the sixth term depends on n, r, T, S, H, and the number of pentagons, P (3.7). Thus, we can determine the number of pentagons in the graph. In turn, P depends on n, r, $a_1$, $b_1$, $c_2$, and $a_2$ (4.8). Because we know n, r, $a_1$, $b_1$, $c_2$ and since the dependence on $a_2$ is linear, we can then find $a_2$.

With the current collection of intersection numbers, we can in turn find $b_2$, $c_3$, and $a_3$ with the following sequence of equations:

$$b_2 = r - a_2 - c_2, \quad k_2 = \frac{k_1 b_1}{c_2}, \quad k_3 = n - 1 - k_1 - k_2, \quad c_3 = \frac{k_2 b_2}{k_3}, \quad a_3 = r - c_3. \tag{4.11}$$

Thus, the matching polynomial of a distance-regular graph of diameter 3 determines its entire intersection array.

Recall that the intersection array of a distance-regular graph determines the characteristic polynomial of the graph and, thus, determines the eigenvalues, and their multiplicities, for the graph. In the case of small-diameter distance-regular graphs, because the matching polynomial determines the intersection array, it also determines the eigenvalues, and their multiplicities, for the graph. So, the first six coefficients of the matching polynomial carry enough information about small-diameter distance-regular graphs to determine the entire spectrum.

In summary, we have seen that while both the intersection array and the circuit polynomial determine the spectrum of a distance-regular graph, in general they are independent of each other. However, the initial portion of the matching polynomial (and hence also the initial portion of the characteristic polynomial) of a distance-regular graph can be found from the intersection array of a distance-regular graph. Thus, for those classes of regular graphs that are also distance-regular, we can easily calculate the initial portion of the matching polynomial and some initial terms of other graph polynomials that contain the matching polynomial as a sub-polynomial.
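The diameter-3 reconstruction at the end of the proof of Theorem 4.2 amounts to a few lines of arithmetic. The following Python sketch is our own helper (the function name and test cases are assumptions; exact fractions are used to keep the divisions exact), completing the intersection array from the quantities the proof recovers.

```python
from fractions import Fraction

def complete_diameter3_array(n, r, b1, c2, a2):
    """Finish the intersection array {b0, b1, b2; c1, c2, c3} of a
    diameter-3 distance-regular graph, given n, r, b1, c2, a2
    (b0 = r and c1 = 1 are already known)."""
    k1 = Fraction(r)
    k2 = k1 * b1 / c2              # from b1*k1 = c2*k2
    b2 = r - a2 - c2               # a2 + b2 + c2 = r
    k3 = n - 1 - k1 - k2           # n = 1 + k1 + k2 + k3
    c3 = k2 * b2 / k3              # from b2*k2 = c3*k3
    return [r, b1, int(b2)], [1, c2, int(c3)]   # a3 = r - c3 then follows

# 3-cube: n=8, r=3, b1=2, c2=2, a2=0  ->  {3, 2, 1; 1, 2, 3}
print(complete_diameter3_array(8, 3, 2, 2, 0))
# Heawood graph: n=14, r=3, b1=2, c2=1, a2=0  ->  {3, 2, 2; 1, 1, 3}
print(complete_diameter3_array(14, 3, 2, 1, 0))
```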
5,845
2000-01-01T00:00:00.000
[ "Mathematics" ]
Predictors of RBD progression and conversion to synucleinopathies Purpose of review Rapid eye movement (REM) sleep behaviour disorder (RBD) is considered the expression of the initial neurodegenerative process underlying synucleinopathies and constitutes the most important marker of their prodromal phase. This article reviews recent research from longitudinal research studies in isolated RBD (iRBD) aiming to describe the most promising progression biomarkers of iRBD and to delineate the current knowledge on the level of prediction of future outcome in iRBD patients at diagnosis. Recent findings Longitudinal studies revealed the potential value of a variety of biomarkers, including clinical markers of motor, autonomic, cognitive, and olfactory symptoms, neurophysiological markers such as REM sleep without atonia and electroencephalography, genetic and epigenetic markers, cerebrospinal fluid and serum markers, and neuroimaging markers to track the progression and predict phenoconversion. To-date the most promising neuroimaging biomarker in iRBD to aid the prediction of phenoconversion is striatal presynaptic striatal dopaminergic dysfunction. Summary There is a variety of potential biomarkers for monitoring disease progression and predicting iRBD conversion into synucleinopathies. A combined multimodal biomarker model could offer a more sensitive and specific tool. Further longitudinal studies are warranted to iRBD as a high-risk population for early neuroprotective interventions and disease-modifying therapies. Introduction Rapid eye movement (REM) sleep behaviour disorder (RBD) is a parasomnia clinically characterized by active dream enactment, including screaming, flinging, or falling off from bed, that may cause injuries to patients and their bed partners [1]. According to the American Academy of Sleep Medicine, the diagnosis of RBD relies on the confirmation, with video-polysomnography (PSG), of decreased muscle atonia and sudden movements during sleep [2]. RBD is considered a common disorder. The pooled prevalence of PSG-confirmed RBD has been estimated at 0.68% of the general population, and that of probable RBD at 5.65% [3]. Isolated RBD (iRBD) is now considered the expression of the initial neurodegenerative process underlying synucleinopathies, including Parkinson's disease (PD), Dementia with Lewy Bodies (DLB) and Multiple System Atrophy (MSA) [4] and constitutes the most important marker of the prodromal phase of these neurodegenerative diseases [5]. Large longitudinal cohort studies have demonstrated that 81-91% of iRBD patients, followed-up for at least 14 years, will develop either a definite neurodegenerative disease or a mild cognitive impairment [6,7]. iRBD is considered a marker of neurodegeneration with strong predictive value and scarce sensitivity. The likelihood ratio of PSG-proven iRBD for development of a synucleinopathy has been estimated at 130, more than three times higher than that of detecting striatal dopamine loss on molecular imaging [8]. By contrast, only about 50% of all PD patients have experienced iRBD in their prodromal stage [9]. In the future, iRBD could be considered a condition which is useful to be targeted with new disease-modifying therapies, currently in development aiming to interrupt the pathological processes towards the development of synucleinopathies at an earlier stage. 
The study of iRBD pathology can allow the understanding of biological alterations preceding the clinical manifestation of a synucleinopathy, thus anticipating personalized treatments at a stage where cellular damage could be reversible [10]. To do so, we need to understand which characteristics of iRBD are associated with pathological progression, and with faster or slower phenoconversion [11]. The past few years have seen an increase of longitudinal studies aimed at establishing the value of multiple clinical, genetic, neurophysiological, fluid, and imaging biomarkers in the prediction of the progression from iRBD to PD, DLB, and MSA [12]. This article reviews the most relevant results from recent longitudinal research studies in iRBD, with the intent of describing the most promising progression biomarkers of iRBD and to delineate the current knowledge on the level of prediction in iRBD patients at diagnosis and their future outcome (Table 1). Clinical Markers of Progression Prodromal PD is a clinical entity characterized by the presence of motor and non-motor symptoms, such as alterations in motor dexterity, autonomic dysfunction, mood, olfaction, and cognition, that reflect the progressing neuronal damage in the brain [50]. Many of these clinical symptoms co-occur in iRBD and have been extensively studied in large, multicentre, longitudinal studies to assess whether they could represent a sign of progressing degeneration and a predictor of short-term diagnosis of a neurodegenerative disease. Motor An akinetic-rigid syndrome is a hallmark feature of synucleinopathies and current dopaminergic therapy is principally directed at improving these symptoms. In future PD converters with iRBD, performance on tasks assessing motor dexterity can be altered as early as 12.9 years from diagnosis [13••]. Additionally, an increase in the Unified Parkinson's disease Rating Scale part III (UPDRS-III) score is first detected at around 6.5 years before diagnosis and accelerates in the final 1-2 years before that [13••]. The increase of UPDRS-III score is initially driven by speech and voice alterations, followed in time by bradykinesia, rigidity and, lastly, rest tremor [13••]. In a large multicentric study performed by Postuma and colleagues, the hazard ratio of both UPDRS-III score and of performance on quantitative motor tests was comparable, in entity, to that of altered striatal uptake on [ 123 I]FP-CIT SPECT, a marker of presynaptic dopamine transporter (DAT) availability [14••]. In iRBD patients converting to DLB, motor symptoms appear earlier than in PD converters but, differently from the latter, they progress at a slower pace [14••]. Overall, alterations in motor performances yield similar degrees of prediction towards future development of either dementia or parkinsonism [14••]. Autonomic Up to 94% of iRBD patients report symptoms of autonomic dysfunction [51]. Studies of autonomic function in iRBD have employed specific scales and questionnaires (such as Scales for Outcomes in PD-Autonomic Dysfunction (SCOPA-AUT) and the Non-Motor Symptoms Questionnaire (NMSQ)) as well as instrumental tests (heart rate variability, cardiovascular reflex testing, cardiac scintigraphy, etc.) to assess autonomic alteration [52]. Symptoms due to sympathetic dysfunction in iRBD can be detected through administration of clinical scales as early as 16 years before clinical diagnosis and total scores of these scales become statistically different from controls at around 4-6 years from diagnosis [13••]. 
High scores on specific autonomic symptoms are found more frequently in iRBD converting to a specific synucleinopathy. MSA converters show more severe urinary symptoms, whereas DLB converters show faster declines of systolic blood pressure. In a recent small study on 18 iRBD patients, in which the severity of alterations of preganglionic and postganglionic sudomotor, cardiovagal, and cardiovascular adrenergic function on instrumental tests was converted to a composite score (CASS score), it was found that the iRBD who converted to DLB had a longer duration of autonomic dysfunction and a higher degree of impairment of cardiovagal and, to a lesser degree, adrenergic autonomic dysfunction, compared to those who converted to PD [15]. A decreased heart rate variability (HRV) recorded on fullnight PSG is an early feature of prodromal PD [53] and is also found in iRBD patients [16]. Alterations in beat-to-beat variability in a cohort of iRBD patients studied longitudinally for an average 6.7 years, however, did not discriminate between patients who eventually converted from those who did not [17]. Very recently, presence of low HRV in a cohort of 47 iRBD was associated with severity of the quantified tonic REM Sleep without Atonia (RSWA), an electrophysiological marker of severity of RBD and possible predictor of phenoconversion [18]. A number of recent studies have investigated the cognitive profile, and the trajectory of cognitive impairment progression in iRBD patients who eventually convert to DLB, as opposed to PD. DLB converters examined up to six years before diagnosis, already show alterations on attention, executive function, and verbal memory with subsequent development of deficits in episodic verbal learning and memory and a faster progression compared to PD converters [13 ••, 19, 20]. PD converters, by contrast, display cognitive performances within normal limits until 1-2 years before diagnosis [19]. Overall, presence at baseline of multidomain cognitive dysfunction in iRBD patients is the main clinical characteristics able to predict whether a patient will end up developing DLB or PD [13••]. Hyposmia Olfactory impairment is a frequent symptom in iRBD [64]. Odour identification tests have been widely employed in clinical studies to evaluate olfactory impairment in iRBD patients. Alterations in odour identification scores can be spotted, in iRBD patients, as early as 22 years before diagnosis of a neurodegenerative condition and become significantly impaired compared with controls nine years before phenoconversion [13••]. Hyposmia in iRBD, however, does not seem to progress at a faster pace than in normal ageing [13••, 22]. Alterations of odour identification has been linked with a 7.3-fold increased risk of developing a synucleinopathy within five years [24]. These results have been replicated in a recent larger study of 140 iRBD patients that underwent odour identification test and were followed up for an average 5.6 years. Here, hyposmia was associated to a higher risk of developing either PD or DLB in the short term, without however discriminating prospectively between the two conditions [22]. Studies on MSA converters are hindered by the small sample sizes but suggest that olfactory dysfunction at baseline does not predict the future conversion to MSA. In one study of twelve iRBD patients tested four years before conversion to MSA, hyposmia was present in 50%, a percentage higher than controls but significantly lower than in PD [25]. 
In the study by Iranzo and colleagues, the three iRBD patients eventually diagnosed with MSA after follow-up were all normosmic [22]. These findings are consistent with the low prevalence of olfactory dysfunction in the clinical picture of MSA [23]. Visual Dysfunction Patients with iRBD exhibit different degrees of visual dysfunction. These encompass abnormal colour discrimination and stereopsis and illusions [65]. Studies on visual dysfunction have employed a range of tests, from contrast sensitivity tests to colour vision discrimination tests. Abnormal colour vision in iRBD is associated with a higher risk of developing a neurodegenerative synucleinopathy [26]. Colour vision testing performed at baseline can identify iRBD patients who will later convert to DLB as opposed to those who will later convert to PD [14••]. In addition, the trajectory of colour vision impairment in DLB converters progression is steeper than that of PD converters [14 ••]. However, the clinical test used to assess colour vision discrimination has a visuoperceptual cognitive component that may bias this result [66]. In a recent small study, the visual acuity and the contrast sensitivity of 12 iRBD has been found to be reduced compared to controls, and further declined after a one-year follow-up [67]. Further tests would be needed to assess whether this could constitute a possible marker of progression in iRBD. Genetic Markers of Progression Around 5-10% of all PD cases can be ascribed to single gene mutations. In the last twenty years, several rare, highlypenetrant mutations with Mendelian inheritance, as well as frequent variants with smaller effects, have been discovered [68]. Mutations of the GBA gene, encoding for Glucocerebrosidase, are associated with higher risk of PD and DLB [69,70,30]. In these cases, the frequency of RBD is higher than in non-GBA cases, and the severity of the phenotype is influenced by the type of the GBA mutation [71]. Patients with iRBD display higher frequency of GBA mutations compared to healthy controls, and comparable to that of PD patients [27,72,73,28]. Within iRBD patients, GBA mutation carriers tend to have an earlier age at onset, but do not present any other distinctive phenotypic characteristics compared to iRBD patients negative for GBA mutations [28]. Three recent studies have attempted to establish whether GBA mutations in iRBD confer with higher risk of phenoconversion. In one study with 8 iRBD GBA carriers, no such association was detected [27]. In another study with 13 iRBD with GBA mutations, a 3.2-fold higher rate of 1 3 phenoconversion towards parkinsonism and/or dementia was detected [28]. In 2020, Krohn and colleagues gathered a large multicentre longitudinal database of 1061 patients with iRBD, of which 9.5% carried a GBA mutation, and stratified them according to severity of the gene variants. These Authors found that severe variants (L444P, D409H, W291X, H255Q, and R131L) were associated with higher risk for iRBD compared to the mild N370S variant. Additionally, there was a trend for severe variants to drive towards faster conversion to a neurodegenerative disease [29••]. However, the number of severe variant carriers was very low and further studies are needed to confirm this finding. Recently, mutations in the TMEM175 gene have gained academic attention for their relationship with PD risk [74]. Krohn and colleagues have identified the p.M393T variant on TMEM175 as strongly associated with the risk of both PD and iRBD [75]. 
The p.Q65P variant was then associated with an increased rate of phenoconversion to a synucleinopathy [31 ••]. Recent cross-sectional studies on large cohorts have also detected genetic associations between iRBD risk and variants in the genes SNCA, BST1, and LAMP3, which could further expand our knowledge on the links between genetics and development and progression of iRBD [76,77]. Epigenetic mechanisms have also been recently studied in relation to their progression risk from iRBD to neurodegeneration. In a recent small, preliminary study on 78 patients with iRBD of which 16 converted to PD after 3.75 years, hypomethylation at the Cytosine-phosphate-Guanine (CpG) 17 of the SNCA intron 1 has been associated with increased risk of clinical phenoconversion, and hypomethylation to the CpG 14, 15, and 16 was associated with progression of iRBD symptoms [32•]. Neurophysiological Markers of Neurodegeneration Electrophysiology is an essential tool to diagnose and characterize iRBD and has long been employed to identify changes in sleep structure and brain electrical activity with potential to predict evolution of iRBD into a neurodegenerative disease. Various patterns of REM and non-REM sleep, and wake activity have been studied in relation to disease severity and progression, and to their prediction of conversion to a synucleinopathy [78]. REM Sleep without Atonia The finding of an abnormal electromyographic activity on PSG during REM sleep is denominated REM Sleep Without Atonia (RSWA) and is a pathognomonic feature of RBD. According to its characteristics, RSWA can be tonic, or phasic. The severity of RSWA in iRBD increases over time and this arguably reflects the progression of the brainstem damage induced by the neurodegenerative process [79]. The percentage of tonic RSWA at baseline, in iRBD, has been established as a strong predictor of future conversion to PD [33]. Tonic and phasic RSWA are thought to represent the electrophysiological expression of different pathophysiological alterations taking place in the brainstem [80,81]. Recent studies have focused on the possible different predictive role of either tonic or phasic RSWA towards neurodegeneration. One large study assessed 216 patients with iRBD who were followed-up for five years, and 26.9% of these iRBD patients developed a neurodegenerative disease [34•]. Baseline tonic RSWA showed a stable predictive capacity of future development of PD over time, whereas baseline phasic RSWA was only predictive of future conversion to DLB at long follow-up [34•]. This was confirmed in another recent study in which percentage of tonic RSWA was predictive of a more rapid conversion to parkinsonism, but not of cognitive impairment, thus suggesting that distinction between RSWA subtypes could predict future development of neurodegeneration in iRBD patients [35]. Isolated RSWA (iRSWA) is the detection of RSWA in absence of other symptoms ascribable to RBD [82]. It can be an incidental finding in up of 5% of PSG, and its frequency increases with age. iRSWA can be associated with other electrophysiological, clinical, imaging, or autonomic findings [16,83,36,84]. A few studies have tested the hypothesis that iRSWA could represent an initial manifestation of neurodegeneration, yielding however conflicting results. 
Stefani and colleagues did not report any progression of iRSWA patients towards neurodegeneration after a 8.6-year follow-up [36], and in another study, iRSWA was not correlated with striatal dopamine levels as assessed with [ 123 I]FP-CIT SPECT [37]. By contrast, Dede and colleagues, studying 67 iRSWA patients for at least 4 years, reported that 26.8% developed RBD and 8.9% developed a neurodegenerative disorder. This study, however, lacked a control group to ascertain whether the progression was due to aging or by a genuine increased risk of iRSWA [38]. Electroencephalography Electroencephalography (EEG) studies in iRBD show a diffuse slowness of cortical activity [85], which correlates with cognitive tests exploring attention, executive functions, and verbal memory [86,39]. Two longitudinal studies have assessed the predictive value of EEG alterations in iRBD towards neurodegeneration. One study enrolled 54 iRBD patients to perform quantitative EEG and to a 3.5-year follow-up. The iRBD patients who converted after follow-up showed higher δ and θ power in the cortex, with higher slowto-fast power ratio. Most importantly, detection of diffuse 1 3 cortical EEG slowing was predictive of conversion to DLB, whereas EEG slowing restricted to temporal and occipital lobes was predictive of conversion to PD [39]. In a second study on 121 patients with iRBD, of which 27 converted to either PD or DLB after four years, diffuse bursts of θ band together with a decrease of bursting in the α band could distinguish iRBD converters compared to controls [40]. EEG during non-REM sleep has also been studied as potential marker of neurodegeneration in iRBD. Cyclic Alternating Pattern (CAP) is a spontaneous, physiological rhythm of non-REM sleep composed of transient electrocortical events of arousal, followed by retrieval to background EEG activity that is interpreted as an expression of arousal instability [87,88]. The number and architecture of CAP is significantly altered in iRBD [89]. Melpignano and colleagues studied 67 iRBD patients and found that CAP cycles were longer, and their rate significantly decreased. In addition, they found that CAP rate was most reduced in those patients who converted earlier to a neurodegenerative disease [41]. Further, confirmatory studies will establish the potential of microstructural alterations of non-REM sleep architecture as potential markers of progression of neurodegeneration in iRBD. Fluid Biomarkers Fluid biomarkers including cerebrospinal fluid (CSF) markers, such as oligomeric, total and phosphorylated α-synuclein, total and phosphorylated tau, amyloid-β 42 and neurofilament light chain, have become increasingly investigated as a source of potential biomarkers providing insights into the pathogenesis of neurodegenerative diseases. PD patients with RBD have been shown to have higher CSF and serum levels of oligomeric α-synuclein compared to PD patients without RBD [90]. Furthermore, the presence of RBD in PD patients has shown to be a predictor of motor progression in patients with both low α-synuclein CSF levels and reduced striatal DAT [ 123 I]FP-CIT uptake, and a predictor of cognitive decline in patients with low CSF levels of both α-synuclein and low amyloid-β 42 [91]. A longitudinal study illustrated that lower baseline amyloid-β 42 levels were predictive of cognitive decline at three-year follow-up only in PD patients with RBD [92]. Increased CSF prion protein levels have also been reported in PD patients with RBD compared to PD patients without RBD [93]. 
CSF inflammatory markers, including interleukin 1β and nitric oxide, as well as serum prostaglandin E2 have also been shown to be elevated in PD patients with RBD [90]. A recent study in probable iRBD patients illustrated that a reduced ratio of phosphorylated tau to total tau was associated with phenoconversion to a synucleinopathy disease at a 5-year followup highlighting the potential use of fluid biomarkers to track progression in iRBD [31••]. Future studies investigating fluid biomarkers in iRBD patients, prior to the clinically diagnosis of a synucleinopathy disease, are warranted to fully elucidate the potential utility of CSF and blood biomarkers to monitor the progression of RBD. Neuroimaging Biomarkers The last decade has seen an increasing volume of neuroimaging studies, employing Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), Magnetic Resonance Imaging (MRI) and transcranial sonography techniques, to investigate the pathophysiology of iRBD and to help identify potential biomarkers to predict the progression of RBD and the conversion of iRBD to a synucleinopathy disease. [ 123 I]FP-CIT SPECT and transcranial sonography have been shown to detect subclinical changes in iRBD patients, similar to pathology seen in early PD [42][43][44]. Four longitudinal studies have used dopaminergic SPECT to investigate the progression of presynaptic striatal dopamine pathology as a biomarker in iRBD patients [42][43][44][45]. Iranzo and colleagues demonstrated that lower striatal presynaptic DAT availability, using [ 123 I]FP-CIT SPECT, combined with hyperechogenicity of the substantia nigra, using transcranial sonography, had a predictive value of 100%, with 55% specificity, after 2.5 years to predict the conversion of iRBD patients to a neurodegenerative synucleinopathy [44]. Repeated [ 123 I]FP-CIT SPECT scans show progressive loss of striatal DAT in iRBD patients over three years, with iRBD patients who converted to PD showing the greatest level of nigrostriatal dopaminergic dysfunction at baseline [42]. Furthermore, a reduction of [ 123 I]FP-CIT SPECT greater than 25% in the putamen has been shown to discriminate iRBD patients, with DAT deficits, who converted to a synucleinopathy from iRBD patients who did not convert after three-year follow-up [43]. At a five-year follow-up [ 123 I], FP-CIT SPECT had 75% sensitivity and 51% specificity to predict iRBD conversion to a synucleinopathy with a likelihood ratio of 1.54 [43]. Li and colleagues further highlighted the predictive value of decreased DAT in the putamen and striatum, using [ 99m TC]TRODAT-1 SPECT, in iRBD patients over 5 years with greater DAT deficits in those patients at high risk of progressing to a synucleinopathy [45]. Together these studies provide evidence to suggest that presynaptic nigrostriatal dopaminergic dysfunction, detected using SPECT imaging, could offer a valuable biomarker to monitor the progression of nigrostriatal deficits in RBD patients with the potential ability to aid the prediction of phenoconversion to a neurodegenerative synucleinopathy. DAT SPECT imaging is already used in clinical practice [94] therefore the platform to implement this tool in iRBD could be feasible. To aid potential translation into clinical practice, large, multicentre, longitudinal studies are 1 3 warranted to further validate the use of SPECT imaging as a tool to predict phenoconversion in iRBD patients. 
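The likelihood ratio quoted above for the five-year [123I]FP-CIT SPECT follow-up follows directly from the reported sensitivity and specificity. A minimal check (our own illustrative snippet; small differences from the published 1.54 come from rounding of the sensitivity and specificity values):

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

# Figures quoted above: 75% sensitivity, 51% specificity at five years.
print(round(positive_likelihood_ratio(0.75, 0.51), 2))  # approx. 1.53
```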
Glucose metabolism and perfusion changes have been reported in iRBD patients with spatial covariance analysis identifying abnormal PD-related metabolic brain networks [95,96,97,98]. A longitudinal [ 99m Tc]ECD SPECT study illustrated increased perfusion in the hippocampus of RBD patients who developed a synucleinopathy at a 3-year followup compared to those who did not progress [46]. A logistical regression model combing increased PD-related covariance pattern expression with age was predictive of phenoconversion from iRBD to a synucleinopathy at an average followup of 4.6 years [47]. Furthermore, recent work by Kogan and colleagues supports the potential use of multiple [ 18 F]FDG PET scans to measure progressive changes in PD-related brain pattern expression measures as a prodromal PD biomarker to predict phenoconversion in iRBD patients [48••]. Cardiac [ 123 I]metaiodobenzylguanidine (MIBG) scintigraphy has also been investigated in iRBD patients. While cross-sectional studies have revealed abnormalities [99,100,101], a longitudinal study reported no changes in RBD patients at a 2.5-year follow-up [102]. These findings suggest that while sympathetic denervation may be abnormal in early RBD patients, [ 123 I]MIBG might not be a sensitive biomarkers for the progression and phenoconversion in RBD patients. Structural and functional MRI techniques have demonstrated changes in deep grey matter, cortical grey matter, microstructural white matter and disrupted functional connectivity networks in patients with RBD which can be associated with clinical symptoms [103]. Isolated RBD patients who converted to a clinically defined synucleinopathy, at a three-year follow-up, showed greater cortical thinning in frontal, parietal and occipital cortices compared to iRBD patients who did not convert [49••]. Pereira and colleagues reported cortical thinning as a predictor of phenoconversion in iRBD [49••]. Furthermore, grey matter atrophy in the inferior frontal gyrus has been associated with phenoconversion at 5-year follow-up [31••]. Together these studies suggest that structural neuroimaging could act as a predictive biomarker for increased risk of progression to a synucleinopathy. Diffusion-weighted MRI has been employed to define longitudinal brain connectome progression scores, using interpretable machine learning algorithm, to evaluate the progression patterns in iRBD patients as a prodromal phase of PD [104•]. The longitudinal connectome progression pattern in iRBD patients was similar to that of de novo PD patients, highlighting the potential of this tool as a biomarker for the neurodegenerative prodromal phase of synucleinopathies [104•]. This study highlights the potential future use of MRI-based computational biomarkers to predict the progression and conversion of RBD with high sensitivity and specificity. Longitudinal studies, such as the Oxford PD Centre Discovery Cohort MRI substudy (OPDC-MRI) [105], are ongoing to validate the use of structural and functional MRI techniques, combined with clinical data, as biomarkers to predict the progression and phenoconversion of iRBD to synucleinopathies. Conclusion As a high-risk population for conversion to synucleinopathies, iRBD offers a valuable therapeutic window for application of early neuroprotective interventions and diseasemodifying therapies. 
Recent longitudinal studies have highlighted a variety of potential biomarkers, including clinical, neurophysiological, genetic, CSF, serum, and neuroimaging, for monitoring disease progression and predicting iRBD conversion into synucleinopathies (Fig. 1). However, the role of biomarkers as predictors of iRBD remains to be fully elucidated. A combined multimodal biomarker model could offer a sensitive and specific tool to predict the progression of RBD and conversion to synucleinopathies. Future studies are required, most notably large, multicentre, longitudinal studies, to validate these potential biomarkers and step towards their use as endpoints in future clinical trials.
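Purely as an illustration of the combined multimodal biomarker model discussed above, one could standardize the individual markers and combine them in a simple logistic model. The sketch below is our own; the feature names and all numbers are synthetic placeholders, not values from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Placeholder multimodal features for n hypothetical iRBD patients:
# age, UPDRS-III score, odour identification score, % tonic RSWA,
# putaminal DAT binding ratio.
X = np.column_stack([
    rng.normal(67, 6, n), rng.normal(4, 3, n), rng.normal(8, 3, n),
    rng.normal(30, 15, n), rng.normal(2.0, 0.4, n),
])
# Placeholder outcome: phenoconversion within follow-up (synthetic labels).
y = (rng.random(n) < 0.3).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # per-patient conversion probability
```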
6,106.8
2022-02-01T00:00:00.000
[ "Medicine", "Biology" ]
Accelerating 3D scene analysis for autonomous driving on embedded AI computing platforms —The design of 3D object detection schemes that use point clouds as input in automotive applications has gained a lot of interest recently. Those schemes capitalize on Deep Neural Networks (DNNs) that have demonstrated impressive results in analyzing complex scenes. The proposed schemes are generally designed to improve the achieved performance, leading however to high performing approaches with high computational complexity. To mitigate this high complexity and to facilitate their deployment on edge devices, model compression and acceleration techniques can be utilized. In this paper, we propose compressed versions of two well-known 3D object detectors, namely, PointPillars and PV-RCNN, utilizing dictionary learning-based weight-sharing techniques. It is demonstrated that significant acceleration gains can be achieved with acceptable average precision loss when evaluated on the KITTI 3D object detection benchmark. These findings constitute a concrete step towards the deployment of high-performance networks in edge devices of limited resources, such as NVIDIA’s Jetson TX2. (This work has received funding from the H2020-ICT-2019-2 project CPSoSaware: Cross-layer cognitive optimization tools & methods for the lifecycle support of dependable CPSoS, Grant Agreement No. 873718.) I. INTRODUCTION The continuously growing domain of Autonomous Vehicles (AVs) has gained interest in both academia and industry. AVs are considered as an integral component of connected intelligent transportation systems, improving performance and safety indicators of future mobility systems, providing safer transportation, efficient management of fuel consumption and upgrading the whole travelling experience. One of the most essential operations executed at the AVs, to enable the aforementioned benefits, is the perception and understanding of dynamic and complex environments from sensor data coming from various modalities (e.g., Camera, LiDAR, etc.). Camera-based scene analysis modules are sensitive to challenging illumination or weather conditions which can significantly degrade the quality of imagery data. On the other hand, LiDAR data are less affected by environmental changes and provide depth information directly, although their distinct sparse representation characteristics bring new challenges that need to be addressed. Traditionally, a LiDAR analysis module processes the generated point clouds for detecting objects via several operations including background/road subtraction, followed by spatiotemporal clustering and classification [1]. The advances in deep learning are considered as the main driving force towards fast and effective scene understanding solutions. Many recent works have studied the strengths and weaknesses of Deep Neural Networks (DNNs) in detecting objects in point clouds generated by LiDAR sensors [2]-[4]. While the impressive performance of DNNs in LiDAR-based object detection is nowadays well-established, their high storage and computational costs become problematic especially in real-time applications, like the ones related to scene understanding in AVs. In this work, we study the application of recently proposed Model Compression and Acceleration (MCA) techniques on state-of-the-art DNNs designed for LiDAR-based 3D object detection in AVs.
Our comprehensive evaluation on the KITTI 3D object detection benchmark [5], involving the detection of cars, pedestrians and cyclists, demonstrates very promising results towards the goal of efficient and accurate scene understanding. Apart from a high-performance GPU-based platform, the 3D detectors were also deployed on NVIDIA's Jetson TX2, thus revealing a promising future direction concerning the efficient design and execution of compressed models on edge devices, utilizing dictionary learning-based MCA approaches. The rest of the paper is organized as follows. In Section II, the related bibliography for 3D object detection and MCA techniques is presented, along with the main contributions of the paper. Section III contains a brief description of the DNNs under study and the used weight-sharing MCA techniques. Our experimental evaluation is described in detail in Section IV. Finally, Section V concludes the paper. II. RELATED WORK AND CONTRIBUTION A. Object detection in LIDAR point clouds 3D object detection from LIDAR point clouds is mainly a data-driven task due to the lack of apparent structure in the data. Deep fully convolutional networks have been traditionally employed in the literature since 2016, with the main evolutionary elements concerning i) the transformation of the 3D point cloud, ii) the network structure and iii) the utilization of feedback loops or abstraction layers for a multiscale feature extraction approach. Initial attempts [6] projected the 3D points onto a 2D plane and used traditional 2D fully convolutional networks, reporting accuracy of 71.0% for moderate difficulty cars. The use of 3D fully convolutional networks [7] increased the accuracy to 75.3% but also increased the computational complexity. Yan et al. [8] proposed sparse convolutional networks, improving training and inference times and reaching a reported accuracy of 79.46% for moderate difficulty cars, according to KITTI benchmarks. PointPillars [9] proposed a novel encoder that utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars), presenting an accuracy of 74.31% in the same category. 3D object detection with region proposals was introduced in the PointRCNN of Shi et al. [10]. They used two stages, where the first stage generates 3D proposals and the second refines them, reporting an accuracy of 75.64%. An extension of PointRCNN is the Part-A² Net [11], encompassing a part-aware proposal part and an aggregation part and reporting an accuracy of 78.49%. PV-RCNN [12] is another novel approach that combines 3D sparse convolutions in a voxelized space with region proposals and an abstraction layer, reporting an average precision of 81.43%. B. Model compression and acceleration Model Compression and Acceleration (MCA) refers to a family of approaches to produce efficient deep models in terms of the required resources (computational, storage) during the inference phase of their operation. This is achieved by transforming an original, pre-trained deep model of high complexity to a new reduced version without affecting substantially the final performance, depending, of course, on the application. The MCA literature is thoroughly described in several recent survey papers like [13], [14] and [15], where the presented techniques can be utilized on the algorithmic or the hardware level, while, currently, there are also methods that take into account both algorithmic and hardware aspects. Many of the proposed techniques focus on pruning unimportant parts of deep networks like filters [16].
Other approaches limit the representation of the involved parameters by reducing their bitwidth or increasing common representations via scalar, vector and product quantization [17], [18]. In another direction, the involved quantities are modelled mathematically as tensors or matrices and decomposed into factors by exploiting inherent properties such as low-rankness [19]. Focusing on the utilization of MCA techniques on 3D object detection using point clouds, the literature is quite limited. An example can be found in [20], where the proposed deep model is transformed via a pruning technique that sets to zero unimportant parameters by exploiting the alternate direction method of multipliers [21]. C. Contribution The efficacy/impact of more elaborate and high-performing MCA techniques has not been considered yet. In this paper, we focus on weight-sharing approaches. In particular, the highlights of the paper are as follows: (i) Two recently proposed weight-sharing techniques [22], [23] are utilized on two well-known 3D point-cloud object detection frameworks, namely, PointPillars [9] and PV-RCNN [12]. (ii) The results obtained on the KITTI 3D object detection benchmark [5] reveal considerable acceleration gains, with limited accuracy loss, on both examined models. (iii) Full-model acceleration of up to 9.2× in the case of PointPillars, and acceleration of up to 6.3× of the targeted part of PV-RCNN (namely, its 2D convolutional layers), has been achieved. (iv) The relative performance drop of PointPillars ranges from negligible for the class "car" (1-5%) to acceptable for "pedestrian" (18-21%) and "cyclist" (11-16%), across the difficulty levels of the KITTI dataset. (v) For the case of PV-RCNN, mostly negligible losses across the range of classes and difficulty levels, even proving beneficial in specific cases, were observed. Here, the two object detection schemes that will be considered, namely, PointPillars and PV-RCNN, are briefly presented. The PointPillars network [9] introduces the notion of a Pillar. Based on those Pillars, this network removes the need for 3D convolutions, which have been central to networks like VoxelNet [2] and SECOND [8], by utilizing strictly 2D convolutions, thus achieving both high precision and fast inference. The architecture of PointPillars consists of three stages, as depicted in Fig. 1(a). The first stage transforms the point cloud into a pseudo-image. By grouping the points of the cloud into vertical columns, called pillars, that are positioned based on a partition of the x-y plane, this stage summarizes the information of the points per pillar into 1D vectors. These vectors are rearranged appropriately to construct the pseudo-image that will feed the next stage. The second stage consists of a feature extraction backbone network that provides a high-level representation. This representation is subsequently processed by the third stage, which is the adopted object detector, producing 3D bounding boxes and confidence scores for the classes of interest. In terms of computational complexity, the backbone network of the second stage, consisting of a number of 2D convolutions and 2D transpose convolutions, requires more than 95% of the involved operations and, thus, we will focus on this stage in the following for its acceleration.
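The first stage can be illustrated with a toy sketch. The code below is our own simplification and not the paper's implementation: the grid ranges and 0.16 m pillar size are commonly used KITTI settings quoted here as assumptions, and the per-pillar max is a stand-in for the learned PointNet encoder of the real network.

```python
import numpy as np

def pillar_pseudo_image(points, x_range=(0.0, 69.12), y_range=(-39.68, 39.68),
                        pillar=0.16, n_channels=4):
    """Toy version of the PointPillars first stage: scatter LiDAR points
    (x, y, z, intensity) into an H x W grid of pillars and keep a simple
    per-pillar summary (here a max over the raw point features)."""
    W = int(round((x_range[1] - x_range[0]) / pillar))
    H = int(round((y_range[1] - y_range[0]) / pillar))
    canvas = np.zeros((n_channels, H, W), dtype=np.float32)
    ix = np.floor((points[:, 0] - x_range[0]) / pillar).astype(int)
    iy = np.floor((points[:, 1] - y_range[0]) / pillar).astype(int)
    keep = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)
    for px, py, feat in zip(ix[keep], iy[keep], points[keep, :n_channels]):
        canvas[:, py, px] = np.maximum(canvas[:, py, px], feat)
    return canvas  # the "pseudo-image" consumed by the 2D backbone

cloud = np.random.rand(1000, 4).astype(np.float32) * [60.0, 20.0, 3.0, 1.0]
print(pillar_pseudo_image(cloud).shape)  # (4, 496, 432)
```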
The second object detector for point clouds that will be considered in this paper is the recently proposed PV-RCNN [12]. On one hand, this system capitalizes on ideas from grid-based object detectors that transform the irregular point clouds into regular representations that can be processed efficiently by ordinary convolutional layers, limiting, however, their performance by the resolution of the adopted grid. On the other hand, PV-RCNN also exploits ideas from point-based object detection, which operates directly on the points of the cloud, removing the need for point cloud discretization with increased, however, computational complexity. In more detail, the architecture of PV-RCNN is depicted in Fig. 1(b). PV-RCNN first transforms the point cloud into voxels, which are subsequently processed by a voxel backbone network consisting of 3D sparse convolutions. Based on its output, 3D region proposals are produced using the BEV backbone network, whose structure is depicted in Fig. 1. PV-RCNN also samples a subset of the points, named key points, and associates summary features by appropriately concatenating information extracted by the key points themselves and corresponding information extracted from different layers of the voxel backbone network. The information of the 3D region proposals, along with the key-point features, is appropriately merged over equispaced grid points defined in each 3D region proposal. This enhanced information is eventually processed to produce the improved final confidence scores for the classes and the 3D bounding boxes. In the following, we will focus on the BEV backbone for studying the impact on the performance of PV-RCNN. Viewing the convolution operation as a summation over dot-products between input and kernel (depth- or channel-wise) vectors, Product Quantization (PQ) aims at reducing the required number of dot-products by limiting the number of allowed representations for the kernel vectors (thus sharing results between them), using a Vector Quantization (VQ) framework. To be more specific, PQ first obtains a suitable partition of the kernel vector space into a predefined number of subspaces and then applies VQ in each of them, that is, it estimates a codebook of representatives that "best" approximate the original sub-vectors, according to some appropriate metric. Following this approximation scheme, only the dot-products between the codewords and the input need to be calculated. The obtained results are subsequently shared (substituted) accordingly among the sub-vectors represented by each codeword. It becomes obvious that the size of the used codebook represents a trade-off between the achieved acceleration/compression and the induced approximation error. Typically, codebook design is addressed by applying the k-means algorithm on the original sub-vectors, namely by using the k centroids obtained via k-means as the members of the desired codebook. However, treating the problem in a Dictionary Learning framework can lead to significant improvement over the conventional approach, as was recently shown in works concerning both the acceleration of image classification DNNs (e.g. VGG, ResNet, SqueezeNet) [23], as well as image-based object detectors (SqueezeDet and ResNetDet) [24]. More specifically, the conventional approximation scheme (referred to as VQ hereafter) can be defined as follows:

$$\mathbf{W} \approx \mathbf{C}\boldsymbol{\Gamma}, \tag{1}$$

where W and C denote the matrices holding the original sub-vectors (of a particular subspace) and the codebook (cluster centroids), respectively, while the columns of Γ are one-hot vectors indicating the codewords in C.
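A minimal sketch of the conventional VQ codebook design of (1) is given below. This is our own toy code (it assumes scikit-learn's k-means, and the sub-vector grouping is a simplification of the depth-wise splitting described above), not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_share_weights(W, subvec_len=4, n_codewords=64, seed=0):
    """Conventional VQ/PQ weight sharing for a conv kernel tensor W of
    shape (out_ch, in_ch, kh, kw): split the flattened kernels into
    sub-vectors of length `subvec_len`, cluster them with k-means, and
    replace every sub-vector by its centroid (eq. (1): W ~= C @ Gamma)."""
    shape = W.shape
    flat = W.reshape(shape[0], -1)                 # one row per output filter
    assert flat.shape[1] % subvec_len == 0
    sub = flat.reshape(-1, subvec_len)             # all sub-vectors
    km = KMeans(n_clusters=n_codewords, n_init=4, random_state=seed).fit(sub)
    codebook = km.cluster_centers_                 # C
    assign = km.labels_                            # Gamma, stored as indices
    approx = codebook[assign].reshape(shape)       # shared-weight kernels
    return approx, codebook, assign

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64, 3, 3)).astype(np.float32)
W_hat, C, gamma = vq_share_weights(W)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(C.shape, gamma.shape, f"relative error {err:.3f}")
# At inference, only the dot-products between the input and the 64 codewords
# are computed; each result is re-used by every sub-vector assigned to it.
```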
On the other hand, the Dictionary Learning based approach (referred to as DL hereafter) imposes a decomposition of the obtained codebook, as follows:

$$\mathbf{C} = \mathbf{D}\boldsymbol{\Lambda}, \tag{2}$$

where W and Γ are as in (1), D denotes the dictionary, while Λ is a sparse matrix, with a hyperparameter controlling the sparsity of its columns. The main advantage of the DL-based codebook design, first presented in [23], lies in its ability to significantly increase the size of the employed codebooks without affecting the achieved acceleration, compared to conventional techniques. This is due to the decomposition of the codebook defined in (2), which decouples the size of the codebook from the number of operations it incurs. To be more specific, the size of the codebook is determined by the size of Λ, whose sparsity limits the number of required operations, while the main bulk of the operations is due to the dense dictionary D, whose size can be controlled separately. This advantage results in a better quantization error for the same target acceleration, which is ultimately translated into better performance by the accelerated network (compared to the VQ approach). A. Training and evaluation Both networks were trained with the KITTI 3D object detection benchmark [5], consisting of 7481 training images and 7518 test images, as well as the corresponding point clouds, comprising a total of 80,256 labelled objects. In our study, three classes are mainly examined: cyclists, pedestrians and cars, annotated with bounding boxes containing the objects in the 3D scene. 3716 annotated Velodyne point cloud scenes were used for training and 3769 annotated Velodyne point cloud scenes were used for testing and validation. For the deployment and retraining of PointPillars and PV-RCNN, the OpenPCDet framework [25] was employed. For the initial evaluation, pre-trained instances were used, while for the retraining, the Adam optimizer was employed with learning rate l_r = 0.003, weight decay rate D_W = 10^-2 and a batch size B = 4. Training took place on an NVIDIA GeForce RTX 2080 with 16GB VRAM and compute capability 7.5. Furthermore, for the PointPillars network the detection accuracy was evaluated on the NVIDIA Jetson TX2, while for the PV-RCNN network, due to model size, the detection accuracy was evaluated on the NVIDIA GeForce RTX 2080. B. Acceleration scheme In our experiments, we apply the VQ and DL weight-sharing techniques to the PointPillars and PV-RCNN models, targeting their convolutional layers, and measuring the performance drop induced by the acceleration, compared to the original networks. The reported acceleration ratios are defined as the ratio of the original to the accelerated computational complexities, measured by the number of multiply-accumulate (MAC) operations. To achieve our acceleration goal, we followed the stage-wise strategy presented in [22], whereby the individual layers are accelerated progressively in stages, starting from the original network. At each stage, the parameters of one or more layers are quantized using the presented techniques and fixed, and subsequently the remaining layers are re-trained to adapt to the newly presented changes. The process is then repeated for the convolutional layers involved in the next stage, and so on, until all desired layers are accelerated. The KITTI 3D object detection dataset is employed for the fine-tuning and performance evaluation, ensuring that the same training examples that were used during the initial training are also used during the fine-tuning step.
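The full-model acceleration figures reported in the next paragraphs can be reproduced with a simple Amdahl-style calculation, assuming the per-layer ratio α applies uniformly to the targeted MACs while the rest of the network is left untouched. The snippet below is our own illustration; the MAC fractions are taken from the percentages quoted in the following subsections.

```python
def full_model_speedup(targeted_fraction, alpha):
    """Overall speedup when a fraction f of the MACs is accelerated by a
    factor alpha and the rest is unchanged: 1 / ((1 - f) + f / alpha)."""
    return 1.0 / ((1.0 - targeted_fraction) + targeted_fraction / alpha)

# PointPillars: targeted 2D conv + transposed-conv layers hold roughly
# 47% + 44.4% ~ 91.4% of all MACs.
for a in (10, 20, 30, 40):
    print(f"PointPillars, alpha={a}: {full_model_speedup(0.914, a):.1f}x")
# -> about 5.6x, 7.6x, 8.6x, 9.2x, matching the figures reported below.

# PV-RCNN BEV backbone: targeted layers hold ~86% of the block's MACs.
for a in (10, 20, 30, 40):
    print(f"PV-RCNN BEV block, alpha={a}: {full_model_speedup(0.86, a):.1f}x")
# -> about 4.4x, 5.5x, 5.9x, 6.2x, close to the reported 4.5x-6.3x.
```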
a) Accelerating PointPillars: PointPillars is a fully convolutional network, with its feature-extraction part (both 2D and transposed convolution operators) being responsible for 97.7% of the total MAC operations required. In total, the PointPillars network encompasses 4.835 × 10^6 parameters and requires 63.835 × 10^9 MACs. For a good balance between acceleration and performance drop, we targeted the 2D convolutional layers of PointPillars (consuming approximately 47% of the total MACs), as well as the 4 × 4 transposed convolutional layer of the network (responsible for 44.4% of the total MACs), depicted with the red blocks in Fig. 1(a). Acceleration was performed in 16 acceleration stages, with each stage involving the quantization of a particular layer, followed by fine-tuning. Using acceleration ratios of α = 10, 20, 30, and 40 on the targeted layers leads to a reduction of the total required MACs by 82%, 86%, 88%, and 89%, or equivalently, to total model acceleration of PointPillars by 5.6×, 7.6×, 8.6×, and 9.2×, respectively. b) Accelerating PV-RCNN: The main bulk of the operations required by PV-RCNN is consumed by the Voxel-Backbone and the BEV-Backbone blocks shown in Fig. 1(b), with the former being composed of Submanifold Sparse 3D-Conv layers [26], while the latter consists of regular 2D convolutional layers. Since the sparse convolutional layers are already specialized layers that are designed to exploit the sparsity of the input to reduce their computational complexity, and keeping in mind that the number of operations required by such layers is input-dependent, in this experiment we focused only on the BEV-Backbone block of PV-RCNN, as shown in Fig. 1(b). The PV-RCNN network encompasses 12.405 × 10^6 parameters and requires 88.878 × 10^9 MACs, without taking into account the sparse convolutional layers. In this case, the targeted layers (highlighted in Fig. 1(b)) are responsible for roughly 86% of the MACs required by the BEV-Backbone block. Similarly to the previous experiment, using acceleration ratios of α = 10, 20, 30, and 40 on the targeted layers leads to a reduction of the MACs required by the BEV-Backbone block by 77%, 82%, 83%, and 84%, or equivalently, to the block's acceleration by 4.5×, 5.5×, 6.0×, and 6.3×, respectively. C. Metrics The official KITTI evaluation detection metrics include bird's eye view (BEV), 3D, 2D, and average orientation similarity (AOS). The 2D detection is done in the image plane, and average orientation similarity assesses the average orientation (measured in BEV) similarity for 2D detections [27]. The KITTI dataset is categorised into easy, moderate, and hard difficulties, and the official KITTI leaderboard is ranked by performance on moderate. For the sake of completeness, easy difficulty refers to a fully visible object with a minimum bounding box height of 40px and max truncation of 15%, moderate difficulty refers to a partially occluded object with a minimum bounding box height of 25px and max truncation of 30%, and hard difficulty refers to a difficult-to-see object with a minimum bounding box height of 25px and max truncation of 50%. Each 3D ground truth detection box is assigned to one out of three difficulty classes (easy, moderate, hard), and the used 40-point Interpolated Average Precision metric is separately computed on each difficulty class. It summarizes the shape of the precision/recall curve as

$$\mathrm{AP}|_{R} = \frac{1}{|R|}\sum_{r \in R}\rho_{\mathrm{interp}}(r),$$

averaging the precision values provided by ρ_interp(r), according to [28]. In our setting, we employ forty equally spaced recall levels.
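The 40-point interpolated AP can be computed in a few lines. The sketch below is our own illustration (the recall grid {1/40, ..., 1} is the usual choice for the 40-point metric, and the precision/recall curve used here is a toy example, not measured data).

```python
import numpy as np

def interpolated_ap(precision, recall, n_points=40):
    """AP|_R = (1/|R|) * sum_{r in R} rho_interp(r), where rho_interp(r)
    is the maximum precision at any recall >= r and R holds `n_points`
    equally spaced recall levels."""
    levels = np.linspace(1.0 / n_points, 1.0, n_points)
    ap = 0.0
    for r in levels:
        above = precision[recall >= r]
        ap += above.max() if above.size else 0.0
    return ap / n_points

# Toy precision/recall curve (illustrative values only).
recall = np.linspace(0.0, 1.0, 101)
precision = np.clip(1.0 - 0.6 * recall, 0.0, 1.0)
print(f"AP_40 = {interpolated_ap(precision, recall):.3f}")
```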
In our setting, we employ forty equally spaced recall levels. D. Object detection In this section, the impact of the VQ and DL acceleration techniques on the performance of PointPillars and PV-RCNN is presented, following the procedure outlined in Sec. IV. Table I summarizes the average precision (AP) for various acceleration ratios in the case of PointPillars. For each category (namely, car, cyclist, pedestrian), the three AP values correspond to the three levels of difficulty (namely, easy, moderate and hard) provided by the evaluation dataset. It is observed, as expected, that the impact on performance grows as the acceleration ratio is increased, a tendency also observed for other values of the acceleration ratio that are not depicted here. The relative performance drop is at most 1%, 3% and 5% for the three difficulty levels (easiest to hardest) of "car", while it is at most 18%, 21% and 21% for "pedestrian" and 11%, 15% and 16% for "cyclist". These results are promising for the two weight-sharing techniques, as the aforementioned maximum performance drops correspond to a considerable reduction in MAC operations, while the impact is negligible for "car". Note that these acceleration gains can be further enhanced by more tailored MCA configurations involving, e.g., layer-specific compression/acceleration ratios. Finally, an example depicting false positive errors from the application of the original and the accelerated networks is shown in Fig. 2. For PV-RCNN, the performance of the model remains practically unaffected by the acceleration of the targeted layers, as shown by the results summarized in Table II. This can be attributed to the limited extent of the affected part of the network, but also to the quality of the employed acceleration techniques. Moreover, it is interesting that applying the procedure described in Sec. IV-B, which involves progressively isolating and fine-tuning specific portions of the network, has even proven beneficial for the overall network's performance for certain combinations of categories and acceleration ratios. V. CONCLUSIONS This work investigated the impact of two recently proposed weight-sharing MCA techniques on the performance of two state-of-the-art 3D object detectors for automotive applications, namely PointPillars and PV-RCNN. Specifically, it investigated the reduction in MAC operations and storage achievable when deploying the aforementioned networks on embedded AI computing platforms, and more specifically the NVIDIA Jetson TX2. The evaluation of the impact was performed on the KITTI 3D object detection benchmark and demonstrated significant acceleration gains while retaining to a great extent the performance of the original networks. As a next step, we are investigating their deployment on Deep Neural Network ASICs.
5,007.8
2021-10-04T00:00:00.000
[ "Computer Science", "Engineering" ]
Magnetic levitation performance of high-temperature superconductor over three magnetic hills of permanent magnet guideway with iron shims of different thicknesses Superconducting magnetic levitation performance, including levitation force and guidance force, is important for the application of high-temperature superconducting maglev. Both are affected not only by the different arrays of superconductors and magnets, but also by the thickness of the iron shims between the permanent magnets. In order to obtain the best levitation performance, the magnetic field distribution, levitation force, and guidance force of a new type of permanent magnet guideway with three magnetic hills and iron shims of different thicknesses (4, 6, and 8 mm) are discussed in this paper. Simulation analysis and experimental results show that the guideway with the 8 mm thick iron shims possesses the strongest magnetic field and levitation performance when the suspension gap is larger than 10 mm. However, as the suspension gap decreases, the guideway with the 4 mm thick iron shims possesses the best levitation performance. These phenomena can be attributed to the flux density distribution and the magnetization of the iron shims. For HTS maglev vehicles, levitation and guidance force are very important parameters [13][14][15][16][17][18][19]. The improvement of the levitation force can effectively enhance the carrying capacity of the maglev system. The guidance force directly determines the stability of HTS maglev vehicles. Traditionally, a permanent magnet guideway often possesses a single magnetic hill [20], which cannot ensure the stability of an HTS maglev vehicle running at high speed. In the meantime, researchers have paid much attention to the design of the permanent magnet guideway, especially the arrangement of the permanent magnets in different ways [21][22][23][24]. However, little attention has been paid to the iron shims inside the guideway. In this paper, the levitation and guidance force of a high-temperature superconductor over the redesigned three-magnetic-hill permanent magnet guideway with iron shims of different thicknesses are studied through simulation and experiments. The relationship obtained between the levitation performance and the thickness of the iron shims may provide a valuable reference for the optimum design of permanent magnet guideways for HTS maglev systems. The guideway consists of four permanent magnets with a section size of 40 mm × 80 mm, three iron shims, and two other iron pieces used as clips to fix the whole structure, as shown in Fig. 1. The grade of the permanent magnets is N45. The arrows in Fig. 1 indicate the direction of magnetization of the magnets. Thus, a distribution with three magnetic hills over the permanent magnet guideway can be obtained. Magnetic field simulation with Ansoft By simulating the magnetic field with Ansoft Maxwell, we calculated the distribution of the magnetic induction intensity (B), as shown in Fig. 2. However, the actual magnetic induction intensity on the guideway surface corresponds to that 2 mm above the guideway in the simulation [21], so the reference system was moved up by 2 mm. Magnetic field analysis The thicknesses of the iron shims are set to 4, 6, and 8 mm, and the corresponding guideways are marked a, b, and c, respectively. The simulation results of B_Z (the vertical component of B) at different heights above the three guideways are shown in Fig. 3. When the height above the guideway is less than 5 mm, the B_Z of guideway a with the 4 mm iron shims is the largest, and it decreases with increasing height.
When the height is larger than 5 mm, with the increase of height, B_Z above guideway a is the smallest, while that above guideway c becomes the largest. As we know, a larger flux density causes a larger B_Z, and this should be the reason that the B_Z of the guideway with the 4 mm thick iron shims is the largest when the height is less than 5 mm. However, the iron shim is magnetized by the permanent magnets and contributes flux; thus the thicker iron shim should provide more flux. Furthermore, according to the magnetization curve of the iron shim, a thinner iron shim produces less flux than a thicker one in its saturation magnetization zone. This may cause B_Z above guideway c to be the largest when the height is larger than 5 mm. Figures 4 and 5 show the distribution of B_Z above the middle and side iron shims, respectively, which indicate similar characteristics of the B_Z distribution for iron shims at different places. Nevertheless, it can also be noticed that the B_Z of the side iron shims is a little larger than that of the middle one. As the levitation force is associated with B_Z, we use B_Z to estimate the levitation force of the HTS bulk and magnet levitation system. We take the levitation force F_L simply as proportional to B_Z [14], namely F_L ∝ B_Z. Tests of levitation force Three YBCO superconducting bulks were placed above the three magnetic hills, as shown in Fig. 6. The superconducting bulks were located in a container and cooled with liquid nitrogen. One of the real guideways applied in the experiments is shown in Fig. 7. The HTS bulks placed inside the container were cooled at different heights (15, 20, 25, and 30 mm), pressed down to 5 mm above the guideway, and then returned to their original locations, respectively. The container was moved down and up at a speed of 50 mm/min. The variations of the average levitation force with distance for the HTS bulks are shown in Fig. 8. According to the experimental results, no matter what the cooling height is, the levitation force of guideway a is less than that of the others. Analyzing the experimental data in detail, we found that the levitation force of guideway b is a little larger than that of guideway c when the distance is less than 7 mm. With the increase of distance, however, the levitation force of guideway b becomes less than that of guideway c. We further measured the vertical distribution of B_Z in the center of the middle iron shim and a side iron shim from the surface to a height of 30 mm. The results are shown in Tables 1 and 2. The B_Z distributions and the corresponding levitation forces imply that a higher B_Z causes a larger levitation force. Comparing the distributions with the levitation forces, we take the relationship between them as F_L ∝ B_Z on the whole. It is obvious that the extra B_Z contributed by the magnetized iron shims plays a key role in the process. Tests of guidance force An HTS bulk over the permanent magnet guideway possesses a guidance force when it is cooled with liquid nitrogen and captures a certain amount of flux [16]. The guidance force has much to do with the field cooling height [16,17], so the test heights were set as field cooling heights of 15, 20, and 25 mm, respectively. The field-cooled superconductor bulks were pushed along the horizontal direction by a stepper motor connected with a force sensor. The guideways a, b, and c were also used for the tests. As we know, the guidance force is related to the magnetic flux captured by the high-temperature superconductor when it achieves the superconducting state [16,17]. It can be seen from Fig.
9 that the B_Z of guideway c is the largest among the three guideways. The higher the magnetic flux density, the larger B_Z, and hence the larger the guidance force. This rule is also reflected in Fig. 10. Figure 10 shows the average guidance force of the superconductor with different field cooling heights over the same guideway. The position where the superconductor was cooled is regarded as zero displacement. We can see that when the displacement is less than 20 mm, namely half the size of a single permanent magnet, the guidance force is almost linear with the lateral displacement. When it is larger than 20 mm, with the displacement increasing, the guidance force gradually tends to saturation and finally declines. In addition, with the decrease of the field cooling height, the guidance force performance gets better, which is similar to the behavior described in the literature [16,17] for the single magnetic hill guideway and the Halbach twin-hill permanent magnet array. The average guidance forces of the superconductor bulks for different field cooling heights over the different guideways are shown in Fig. 11. We can see that, for the same field cooling height of 15 and 20 mm, a thicker iron shim causes a larger guidance force. Moreover, the decrease of the cooling height enlarges the guidance force obviously. On the other hand, the decrease of the field cooling height will reduce the levitation force performance of the HTS [16]. The main reason for the phenomena observed above is the increase of the captured flux as the iron shim gets thicker or the cooling height decreases. The macroscopic equivalent current density J, which is composed of eddy currents that maintain the flux quanta trapped in the numerous flux pinning centers, is enhanced with the increase of the initially captured flux. The guidance force of the HTS bulk will thus increase according to the guidance force calculation formula of Ref. [16], in which F_G is the guidance force, J_Y stands for the horizontal component of J, and V is the volume of the HTS bulk. Conclusion In this paper, simulation of the vertical component of the magnetic induction intensity (B_Z) is combined with experiments on levitation performance to discuss the variation of the guidance force and levitation force with the thickness of the iron shims and the cooling height. A narrower iron shim may produce a stronger B_Z on the surface of the guideway, but not in the space above it. On the contrary, a wider iron shim provides a stronger B_Z at greater heights above the guideway. In practice, the working height of the superconducting bulks is usually not on the surface of the guideway. Thus, it is very important to choose a proper thickness of the iron shim to obtain the strongest B_Z and the best levitation and guidance performance according to the working height. This is meaningful for the application of HTS magnetic levitation systems. Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
2,364.6
2014-05-10T00:00:00.000
[ "Engineering", "Physics" ]
Artificial Neural Network-Based Prediction of the Optical Properties of Spherical Core–Shell Plasmonic Metastructures The substitution of time- and labor-intensive empirical research, as well as slow finite difference time domain (FDTD) simulations, with revolutionary techniques such as artificial neural network (ANN)-based predictive modeling is the next trend in the field of nanophotonics. In this work, we demonstrated that neural networks with proper architectures can rapidly predict the far-field optical response of core–shell plasmonic metastructures. The results obtained with artificial neural networks are comparable with FDTD simulations in accuracy, but they are obtained 100–1000 times faster than with FDTD simulations. Further, we have shown that ANNs do not suffer from problems associated with FDTD simulations, such as the dependence of the speed of convergence on the size of the structure. The other trend in photonics is the inverse design problem, where the far-field optical response of a spherical core–shell metastructure can be linked to the design parameters, such as the type of the material(s), core radius, and shell thickness, using a neural network. The findings of this paper provide evidence that machine learning (ML) techniques such as artificial neural networks can potentially replace time-consuming finite domain methods in the future. Introduction There is tremendous research interest in the investigation of plasmonic phenomena in and around coinage metal nanoparticles [1]. Plasmonic nanoparticles possess interesting optoelectronic properties such as tunable optical resonances, production of hot carriers through plasmon decay, local electromagnetic field enhancement and the Purcell effect, which make them useful in a gamut of applications including surface-enhanced Raman scattering (SERS), light emission, imaging, sensing, photovoltaics and photocatalysis [2][3][4][5]. The optical properties of nanoparticles made of coinage metals (Au, Ag and Cu) are known to be sensitive to the particle size, particle shape and the environment surrounding the particle. A variety of plasmonic metals with different shapes such as spheres, cubes, stars, octahedra and triangles with different optical properties have been synthesized [6,7]. Amongst all these shapes, spheres are the easiest to fabricate while simultaneously achieving a monodisperse size distribution and preventing aggregation. The usage of plasmonic coinage metal nanoparticles in catalysis and sensing is dramatically hindered by their physico-chemical properties, such as the chemical and photochemical reactivity in electrolytes and oxidizing environments, the poor adsorption of reactant atoms over noble metal surfaces, the inability to resist high temperature without melting, reaction or shape change, and the poor separation and utilization of hot carriers generated by plasmon decay, etc. [8]. The chemical/photochemical stability, abrasion resistance and thermal resistance of coinage metals can be improved by applying a protective layer (shell) of ceramics such as metal oxides and nitrides [2,[8][9][10]. When the ceramic protecting the plasmonic core from the environment also has semiconducting properties, a metal-semiconductor heterojunction is created. The built-in electric field associated with such a metal-semiconductor heterojunction is a highly effective method to separate oppositely charged carriers and enhance the charge transfer efficiency of a plasmonic hybrid nanoparticle [11,12].
Semiconductor shells typically have a high-frequency dielectric constant higher than 5, which is complementary to the negative relative permittivity exhibited by the coinage metals over the visible and near-infrared spectral range. Metal oxides also show strong chemisorption of a number of reactant molecules including water, CO2, CO and a host of organic molecules important in industrial heterogeneous catalysis. Thus, the fabrication of such core-shell plasmonic meta-atoms results in a hybrid that inherits the useful properties of both parent materials, with unique optical, chemical and mechanical properties which the individual parent materials would not possess alone [13,14]. All these advantages enable core-shell plasmonic nanoparticles to offer tunable, high-Q-factor optical resonances, efficient charge transfer, efficient harvesting and utilization of hot carriers, strong interaction with reactant molecules, and good chemical and physical stability in different environments for different applications [3,15,16]. Consequently, the investigation of the optical properties of the core-shell hybrids is of great importance [15][16][17]. Since a number of degrees of freedom (choice of the core and shell materials, core radius and shell thickness) exist in the fabrication of the core-shell hybrid, the problem of inverse design of a perfect core-shell structure with a desired far-field optical response has recently attracted more attention [18,19]. The inverse design can map the required far-field optical response of a core-shell structure onto the material properties. The inverse design approach accelerates the design process and obviates the need for repetitive simulations to reach the final far-field response [14,16,20]. Simulation approaches are utilized to investigate the optical properties of core-shell hybrid systems and optimize them before fabrication, to ensure that resources such as researcher time, energy, fabrication and characterization costs, etc., are used minimally and efficiently. The finite difference time domain (FDTD) method is the most convenient way to simulate the light-matter interactions [21]. In the FDTD method, the shape of the nanoparticle and its constituent materials are defined first, then the FDTD engine uses meshing grids to solve Maxwell's equations on tiny blocks of the simulation environment (Figure 1). The far-field responses of a simulation target, including the absorption, scattering and extinction spectra, are among the most important pieces of information that can be obtained through FDTD simulations [1,21]. Even though FDTD simulations constitute a powerful tool to obtain the near-field and far-field optical response of the nanoparticles, this tool has its own shortcomings as well. Since FDTD is a finite difference method, the computational cost of this method is extremely dependent on the mesh size and the size of the simulation region. The following relations describe how the computational expense and the simulation time scale with the mesh size and the overall size of the simulation region [22]: $\text{memory} \propto V/dx^{3}$ (1) and $\text{simulation time} \propto V/dx^{4}$ (2), where dx is the mesh step size and V is the volume of the simulation region. These two equations shed light on the extreme dependence of the finite difference methods on the mesh size, a dependence that makes the FDTD method time consuming and computationally expensive, requiring both time and sufficient memory resources to converge [20,23,24].
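As a rough numerical illustration of these scaling relations (assuming the conventional FDTD behaviour of memory scaling as V/dx^3 and runtime as V/dx^4, as written above), the following sketch shows how quickly refining the mesh inflates the cost; the simulation volume used in the example is an arbitrary placeholder.

```python
def fdtd_cost_scaling(volume: float, dx: float) -> tuple:
    """Relative FDTD cost estimates (arbitrary units): cell count ~ V/dx^3 and
    runtime ~ V/dx^4 (one extra factor of 1/dx from the Courant-limited time step).
    Only ratios between two calls are meaningful."""
    memory = volume / dx**3
    runtime = volume / dx**4
    return memory, runtime

if __name__ == "__main__":
    v = 100.0**3                                  # a 100 nm cube region (illustrative)
    m1, t1 = fdtd_cost_scaling(v, dx=5.0)         # 5 nm mesh
    m2, t2 = fdtd_cost_scaling(v, dx=2.5)         # halving the mesh step
    print(f"memory x{m2 / m1:.0f}, runtime x{t2 / t1:.0f} when dx is halved")
    # -> memory x8, runtime x16
```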
As a result, FDTD methods are considered slow by today's standards, especially for the simulation of light-matter interactions in large nanostructures [24]. The 21st century is likely to be the century of automation. Using prior experience from the industrialization era, humankind already knows that automating a process is the key to reducing its cost and improving its efficiency. From the advent of computing machines in the mid 19th century, computers began to have an increasing impact on civilization and the everyday lives of humans. Artificial intelligence and machine learning represent the next frontier of computers able to think and learn, whose applications at the present time extend from autonomous driving to the virtual assistants in our smartphones. Recently, artificial neural network (ANN)-based machine learning algorithms were successfully utilized in applications such as computer vision, speech recognition, autonomous driving, etc. [20,23]. ANNs are reported to successfully predict the outputs of the computationally expensive finite difference methods used to solve partial differential equations (PDEs) [25,26], in applications ranging from heat transfer [27] to stress prediction [28], with little to no computational expense [29]. Nelson and Di Vece trained a neural network using FDTD simulation results to help optimize the optical absorption of halide perovskite solar cells containing core-shell Ag nanoparticles [30]. Bravo-Abad and colleagues reviewed the use of deep learning approaches in nanophotonics to perform the nonlinear mapping of material geometry and composition with the resulting functional properties [18]. Inspired by these ideas, herein we successfully trained different ANNs with different architectures for ultrafast prediction of the far-field optical responses (including absorbance, scattering and extinction) of plasmonic core-shell materials in the first step. In the second step, we tried to address the inverse design problem using a unique ANN architecture, which can predict the material characteristics needed to obtain a desired far-field response. FDTD Simulations The Lumerical® (Ansys, version: 8.24.2466) software package was used to obtain the input data for the ANNs. The user-friendly environment provided by the graphical user interface (GUI) of the Lumerical software enables one to easily simulate light-matter interactions. In addition, Lumerical's GUI supports scripting commands, which facilitate the automatic simulation of a batch of predefined simulations. The built-in material library of Lumerical contains the optical constants of the most important semiconductors, metals and dielectrics used in optoelectronic devices. All of these have made Lumerical one of the most widely used programs for electromagnetic simulation applications. The simulations in this work were performed using various combinations of the core and shell materials with different thicknesses. For the core-shell structures, Au, Ag and Cu (3 different core materials), which are the most important plasmonic metals [31], were selected as the materials constituting the core, while TiO2 [32], ZnO [33], InAs, InP and GaAs (5 different shell materials), which are among the most important semiconductor materials used in optoelectronic devices, were chosen as the shell materials. The materials used in this study along with their indices are reported in Table 1.
Since TiO2 and ZnO were not defined in the built-in materials database of Lumerical, they were added to the materials database using the wavelength-dependent refractive indices reported in references [32,33], respectively. The environment surrounding the core-shell structures was air (i.e., the refractive index of the environment was set to 1). A total-field scattered-field (TFSF) source with a bandwidth of 250-800 nm was added to the simulation environment as a light source, while absorption cross-section monitors were used to record the absorption and scattering spectra. A uniform mesh with a size of 5 nm was rendered by the GUI, and a more accurate mesh override with a maximum size of 2 nm on the surface of the core-shell structure was introduced later to increase the accuracy of the calculations. Since the core-shell geometry is symmetrical, a symmetric boundary condition in the x direction and an anti-symmetric boundary condition in the y direction were imposed to increase the speed of the simulation. For the core-shell structure, 10 different radii were chosen in the range 2.5-50 nm and 10 different shell thicknesses were chosen in the range 2.5-25 nm. This gave a total of 3 × 5 × 10 × 10 = 1500 different simulated spectra. To obtain a smooth spectrum, the results of the monitors were collected in 1000 data points, associated with the wavelength of the TFSF light source. The simulations in total yielded 1500 × 1000 = 1,500,000 data points for each of the absorption, scattering and extinction batch simulations. These simulated data were used as the training data for the ANNs in the next step. ANN Architectures Three different ANNs with three different structures were designed to predict the far-field optical responses: the absorption prediction network (APN), the scattering prediction network (SPN) and the extinction prediction network (EPN). The results obtained by the FDTD simulations were used as the input for training the three ANNs. The input data for all these ANNs were a combination of both categorical data (e.g., Au and ZnO as the materials used in the core and shell simulations) and quantitative data (e.g., the size of the core radius or the shell thickness). In order to make the input data interpretable for the ANNs, one-hot encoding was used to convert the categorical data into binary (0 or 1) format. Since the simulations were performed using 3 different core materials and 5 different shell materials, the one-hot encoding converted these two categorical values into 5 + 3 = 8 binary values. In addition to this, the radius of the core, the shell thickness, and the wavelength of the TFSF light source were also given as input features to the neural network, which made 11 different input features in total. Figure 2 exhibits the 3-hidden-layer ANN architecture used for the absorption spectrum prediction (APN). The 11 features were fed to the neural network as inputs, the 3 hidden layers had 40, 40 and 30 neurons (3381 parameters in total), respectively, and the output was a value between zero and 1 corresponding to the value of the absorption at that specific wavelength. The architecture of the ANNs for scattering and extinction was the same, with a different number of neurons in each hidden layer: the three hidden layers of the SPN had 80, 80 and 80 nodes (14,001 parameters in total) and the three hidden layers of the EPN had 80, 80 and 120 neurons (17,281 parameters in total), respectively.
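The 11-dimensional input vector described above (a one-hot block for the three core and five shell materials plus the three numerical features) can be assembled as in the following sketch. The material lists and parameter ranges are taken from the text, while the min-max normalization to [0, 1] shown here is an assumption about how the normalization mentioned for the training procedure was carried out.

```python
import numpy as np

CORE_MATERIALS = ["Au", "Ag", "Cu"]                       # 3 core materials
SHELL_MATERIALS = ["TiO2", "ZnO", "InAs", "InP", "GaAs"]  # 5 shell materials

def encode_sample(core: str, shell: str, radius_nm: float,
                  thickness_nm: float, wavelength_nm: float) -> np.ndarray:
    """Build the 3 + 5 + 3 = 11 input features: one-hot core, one-hot shell,
    and normalized core radius, shell thickness and source wavelength."""
    core_onehot = np.eye(len(CORE_MATERIALS))[CORE_MATERIALS.index(core)]
    shell_onehot = np.eye(len(SHELL_MATERIALS))[SHELL_MATERIALS.index(shell)]
    # Assumed min-max normalization based on the quoted parameter ranges
    # (radius 2.5-50 nm, thickness 2.5-25 nm, source bandwidth 250-800 nm).
    numeric = np.array([
        (radius_nm - 2.5) / (50.0 - 2.5),
        (thickness_nm - 2.5) / (25.0 - 2.5),
        (wavelength_nm - 250.0) / (800.0 - 250.0),
    ])
    return np.concatenate([core_onehot, shell_onehot, numeric])

if __name__ == "__main__":
    x = encode_sample("Au", "TiO2", radius_nm=25.0, thickness_nm=10.0, wavelength_nm=550.0)
    print(x.shape)  # (11,)
```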
Since the outputs of the neural networks should be compared to the actual values from the simulations, the problem is one of regression. The error function for all the ANNs was set to the mean squared error (MSE, Equation (3)), $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_{\mathrm{predict},i}-y_{\mathrm{actual},i}\right)^{2}$, where y_predict is the value predicted by the neural network and y_actual is the corresponding value obtained from the FDTD simulation. The tanh function was chosen as the activation function in all three hidden layers of the ANNs, and the activation function of the last layer was a linear activation function. To facilitate the convergence, the input features were normalized to values between 0 and 1 prior to training the ANNs. In addition, 20% of the data was set aside as the test set, and the training of the ANNs was conducted on the remaining 80% of the available data. The architecture of all the ANNs and their hyper-parameters, such as the number of hidden layers, the number of neurons in each layer, the batch size, the ratio between the train set and the test set, and the activation function, were optimized for each of the ANNs, and the final architectures are reported above. The inverse design problem is slightly more complicated than the far-field optical response prediction. While optical response prediction can simply be categorized as a regression problem, the inverse design should successfully predict both the categorical (i.e., type of the materials used as core and shell) and continuous numerical (i.e., core radius and shell thickness) values. The prediction of categorical values is a classification problem, while prediction of the numerical values of the geometric parameters of the core-shell structure falls in the category of regression problems. To address this issue for the inverse design problem, we developed two different architectures: a multi-class classifier inverse design network (IDN-Classifier) and a regressor inverse design network (IDN-Regressor). The simulated absorption spectra of the plasmonic core-shell structures (1000 data points for each absorption spectrum) were used as the far-field optical response to feed the IDN networks, and the expected outputs of the IDNs were two data points associated with the core radius and shell thickness for the IDN-Regressor, and eight categorical outputs (material of choice for the core and shell) for the IDN-Classifier. Figure 3 illustrates the architecture of the IDN-Regressor. The IDN-Regressor consists of 4 hidden layers, with the first three layers containing an identical number of neurons (350, 350, 350 and 120 neurons, respectively). The architecture of the IDN-Classifier was the same, with 250 neurons in the fourth hidden layer. The loss function for the IDN-Regressor was set to the MSE (Equation (3)) and the loss function for the IDN-Classifier was set to the binary cross entropy (BCE, Equation (4)), where σ(y_predict) denotes the sigmoid of the predicted value. The number of trainable parameters was 639,138 and 685,808 for the IDN-Regressor and IDN-Classifier, respectively. The tanh activation function was chosen as the activation function of each layer, except for the last layers of both the classifier and regressor inverse design networks. For the last layer, the activation function of the IDN-Regressor was set to a linear activation function and the activation function of the IDN-Classifier was set to a sigmoid activation function. The optimizer was set to the Adam optimizer with a learning rate of 0.001.
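For concreteness, a minimal Keras sketch of the APN as specified above (11 inputs; tanh hidden layers of 40, 40 and 30 neurons; a linear output; MSE loss) is given below. The choice of the Adam optimizer for the APN is an assumption, since the text states it explicitly only for the IDNs; the printed summary reproduces the 3381 trainable parameters quoted earlier.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_apn() -> tf.keras.Model:
    """Absorption prediction network: 11 input features -> one absorbance value.
    Layer sizes and activations follow the description in the text;
    the optimizer is an assumption (only specified for the IDNs)."""
    model = tf.keras.Sequential([
        layers.Input(shape=(11,)),
        layers.Dense(40, activation="tanh"),
        layers.Dense(40, activation="tanh"),
        layers.Dense(30, activation="tanh"),
        layers.Dense(1, activation="linear"),   # absorbance in [0, 1]
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    apn = build_apn()
    apn.summary()   # total trainable parameters: 3381
```

The SPN and EPN differ from this sketch only in the hidden-layer widths (80/80/80 and 80/80/120, respectively), which is consistent with their quoted parameter counts.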
All the hyperparameters, such as the number of hidden layers, the number of neurons in each layer, the optimizer type and the learning rate of the optimizer, were optimized to minimize the error, with proper caution to avoid overfitting. $\mathrm{BCE} = -\left[y_{\mathrm{actual}}\log\sigma(y_{\mathrm{predict}}) + \left(1 - y_{\mathrm{actual}}\right)\log\left(1-\sigma(y_{\mathrm{predict}})\right)\right]$. (4) Predicted Absorption, Scattering and Extinction Spectra by ANNs on the Previously Seen and Unseen Data The training process for each of the ANNs was performed using the optimized hyperparameters, and the maximum number of epochs was set to 500. A callback function, which set aside 10% of the training data and monitored the validation loss with a patience of 10, was used to control the number of epochs of the training process for each of the ANNs. Figure 4 exhibits the training and validation loss as a function of the number of epochs for the APN, SPN and EPN. The training loss exhibited a gradual decrease as the number of epochs increased and reached stable values in the range of 10^-4 for the absorption and extinction ANNs and in the range of 10^-6 for the scattering ANN beyond 10 epochs. The validation loss exhibited the same trend as the number of epochs increased and reached stable values in the range of 10^-4, 10^-4 and 10^-6 for the APN, EPN and SPN, respectively. The fact that the value of the validation loss is in close proximity to the training loss for all of the trained ANNs shows that overfitting did not occur for any of them. This was one of the ways to optimize the architecture of the ANNs, because when the number of hidden layers was increased beyond 3 layers, the same trend could not be seen for the validation loss anymore. The optimized number of epochs determined by the callback function for the training process of the APN was 56, while for the SPN the optimized number of epochs was 46 and for the EPN it was 63. After the training was complete, the prediction error on the test set was calculated to be 3.69 × 10^-4, 1.12 × 10^-6 and 1.85 × 10^-4 for the APN, SPN and EPN, respectively. As mentioned in the ANN architecture section, the architecture of each ANN was optimized to reach the lowest error on the test set. Table 2 exhibits a few different architectures for the APN and their corresponding errors on the test set as an example. These results show that a model with fewer than three layers cannot do a good job of prediction on the test set. Also, when we increase the complexity of the APN beyond its ideal structure (increasing the number of layers or increasing the number of neurons in each layer), the model tends to overfit and the test set error increases. After the training was completed, the trained ANNs were used to calculate the absorption, scattering and extinction spectra of the core-shell architectures. Figure 5 shows the comparison between the simulated and ANN-predicted absorption, scattering and extinction spectra of a few randomly selected structures (with core radii and shell thicknesses already seen by the network). The inset of Figure 5 shows the materials chosen as the core and shell. In light of the fact that, on average, 20% of these data had never been used in the training or validation data sets, these results show an extremely acceptable agreement between the FDTD simulation target spectra and the ANN predicted spectra.
In order to examine the robustness of the trained ANNs in predicting the far-field optical responses of the core-shell structures, a few core-shell structures with features (core radii and shell thicknesses) that the ANNs had not been trained on at all were chosen. Figure 6 shows the predicted optical responses of these robustness tests. The materials chosen for the core and shell and the corresponding radii and thicknesses are reported in the inset of each figure constituting the panel in Figure 6. Interestingly, the results indicate excellent agreement between the FDTD simulated spectra and their ANN-predicted counterparts. In the first instance, an Au@TiO2 core-shell metastructure with a 48 nm core radius and 6.5 nm shell thickness was chosen. In this case, both the core radius and the shell thickness are different from the core radii and shell thicknesses the ANNs were trained on. In the second instance, an Ag@ZnO core-shell structure was chosen, wherein the shell thickness (10 nm) was among the 10 shell thickness values the ANNs were trained on but the core radius (16 nm) was not among the core radii values used for training the ANNs. And in the last instance, a Cu@InP core-shell structure was chosen; in this instance the core radius (13 nm) was among the 10 core radii the ANNs were trained on, but the shell thickness (21 nm) was not among the thickness values used for training the ANNs. The fact that in all of these instances the trained ANNs could successfully predict the absorption, scattering and extinction spectra of the core-shell metastructures evidences the robustness of the trained ANNs and proves that the training process yielded networks that can actually predict each spectrum, rather than merely remembering or interpolating the previously seen data. The Inverse Design ANN The training process for the IDN networks involved using the architecture described in the previous section. The maximum number of epochs for both the IDN-Regressor and the IDN-Classifier was set to 500. The number of epochs was controlled using a callback function, which split off 10% of the data to monitor the validation loss with a patience of 10. Figure 7a,b indicates the loss of the IDNs as a function of the number of epochs. Both the training and validation losses of the classifier and regressor networks decreased as the number of epochs increased. After 220 epochs, the classifier network's loss reached values of 0.003 and 0.0045 for the training and validation data sets, while these numbers reached values of 0.61 and 0.78 for the regressor network after 240 epochs. The fact that both the training loss and the validation loss followed the same trend shows that overfitting on the training data did not occur for either of the IDN networks. After the training process finished, the errors for the prediction of the test data set were calculated to be 0.005 and 0.73 for the IDN-Classifier and IDN-Regressor, respectively. For the IDN-Classifier, only monitoring the loss as a function of the number of epochs is not enough to judge its performance. The accuracy of classification is an important decision factor for the evaluation of the performance of a classifier network. Figure 7c exhibits the accuracy of classification for the IDN-Classifier as a function of the number of epochs for the training and validation data sets. The accuracy plot shows a gradual increase as the number of epochs increases, reaching excellent values of 95% and 94% for the training and validation data sets after 220 epochs.
Keeping in mind that the goal of the IDN was to predict the design parameters of a core-shell structure which can mimic a desired absorption spectrum, three randomly selected spectra from the test set were chosen to find out whether the design parameters suggested by the IDN network actually result in an absorption spectrum similar to the input absorption spectrum. The design parameters suggested by the IDNs for these three selected spectra were fed to the Lumerical software to simulate the absorption spectrum. Figure 7d-f shows these three randomly chosen spectra and the FDTD-simulated absorption spectra (dashed lines) obtained using the IDN-suggested design parameters. The insets of the images in Figure 7 show the predicted core radii, shell thicknesses and materials suggested by the IDN. These results demonstrate the excellent performance of the IDNs in suggesting the design parameters for a core-shell structure with an optical response resembling the desired absorption spectrum. Comparison between the Speed of FDTD Simulation and the Speed of ANN Prediction During the simulation process, the simulation duration was recorded for each of the 1500 simulations. Based on Equations (1) and (2), the simulation time using the FDTD method increases linearly with the size of the simulation region. For each specific core and shell material, there were 10 × 10 = 100 simulations (10 different radii and 10 different thicknesses). Au and TiO2 were chosen as the instances for the core and shell materials, respectively, and the duration of the 100 simulations using the FDTD method was monitored as a function of the size of the combined core-shell structure (radius of the core + shell thickness). Since the simulated radii of the cores were in the 2.5-50 nm range and the shell thicknesses were in the 2.5-25 nm range, the size axis of the graph covers the range of 5-75 nm. Keeping the core and shell materials constant (Au and TiO2), the ANN's prediction duration for the same features used in the simulation was also recorded. Figure 8a shows the relation between the FDTD simulation and ANN prediction durations and the size of the core-shell nanosphere's radius. As shown in Figure 8a, the duration of the simulation with FDTD falls within the seconds domain, while the ANN prediction is in the millisecond range. The inset of Figure 8a exhibits the relation between the ANN prediction duration and the size of the core-shell structure's radius in the milliseconds domain. Furthermore, as mentioned in the introduction section, the FDTD simulation duration depends on the size of the simulation region (the dashed blue line in Figure 8a is for guiding the eyes), which increases almost linearly with the size of the core-shell structure. The inset of Figure 8a shows that a similar relation does not exist for the ANN prediction, since the prediction of the ANN is practically instantaneous and almost constant with the size of the structure (the green dashed line in the inset of Figure 8a is for guiding the eyes). This independence exhibits another advantage of the ANN prediction over the FDTD simulation: not only is it faster than the FDTD simulation, its speed is also independent of the size of the structure. Figure 8b provides a comparison of the relative speed of the ANN predictions and the FDTD simulations and shows that for small structures the ANN predictions can be 100 times faster than the FDTD simulations, while for larger structures (75 nm) they can be almost 1000 times faster.
Keeping in mind that the maximum size of the core-shell nanostructures was 75 nm in our study, these results show that for larger structures the ANN predictions can significantly outpace the FDTD simulations. Significance for Nanomaterials Synthesis and Optimization, and Use in Functional Devices The ability to quickly obtain the morphological parameters of a plasmonic nanomaterial with a desired optical response from an artificial neural network is a powerful enabler of cutting-edge experimental research. One could envisage a number of applications where such information is decisive in the design of experiments. For instance, LSPR-based sensors frequently use a metal oxide shell around an Au or Ag nanoparticle both to enable facile functionalization/chemisorption of analyte(s) onto the metal oxide surface and to protect the Au/Ag core from wear and corrosion. The source LED wavelength is typically known for sensor deployment, and the core-shell nanomaterial needs to have a plasmon resonance well matched to the light source to enable maximum refractive index sensitivity; this is one practical example of the inverse design problem. When the analyte in question is a biomolecule with a fluorescent tag and the fluorescence is to be detected or imaged, it is desirable to match the emission maximum or absorption maximum of the fluorophore with the plasmon resonance to achieve maximum local field enhancement of the light-matter interaction and boost sensitivity; this is another example of the inverse design problem. In broadband light harvesting applications such as photovoltaics and photocatalysis, plasmonic nanoparticles are used to generate hot carriers and/or to maximize light-matter interactions. Furthermore, a thin insulating shell around the Ag/Au nanoparticle is needed to minimize carrier recombination on the metal surface. The ANN-guided inverse design of a core-shell plasmonic nanoparticle can then be used to boost the light absorption in spectral ranges where the active layer does not absorb efficiently and to achieve the maximum local electromagnetic field enhancement while still minimizing recombination. Conclusions Researchers throughout the world are finding novel applications for machine learning and broadening its horizons every day. Recently, its horizons have been reaching into simulations of the properties of nanomaterials as well. Electromagnetic phenomena are described by the four Maxwell equations. Finite difference methods (including FDTD), which are important tools for the simulation of the optical properties of metamaterials, rely on solving partial differential equations in tiny building blocks (meshes) of the defined structures, which renders them time-consuming by today's standards. In this report, we have shown that artificial neural networks (ANNs) with the proper architecture and optimized hyperparameters can rapidly predict the results of FDTD simulations. Plasmonic core-shell materials with different core and shell materials, and various core radii and shell thicknesses, are important materials for solar energy harvesting, sensing and light emission applications. We have chosen plasmonic metamaterials with sizes within the 5-75 nm range as a case study, and our results show that, using artificial neural networks, the far-field optical response of the plasmonic core-shell materials can be rapidly predicted with high accuracy and with little to no computational expense.
Depending on the size of the whole core-shell structure, the speed of the ANN prediction was estimated to be 100 to 1000 times higher than that of the FDTD simulations. The results from our work can be used as a proof of concept for the potential replacement of FDTD simulations with faster artificial neural network methods. Furthermore, we have demonstrated that the problem of inverse design of a core-shell plasmonic metastructure with a desired absorption response can be readily tackled by a proper ANN structure. In general, the results from our work can be considered as a proof of concept for the next generation of electromagnetic simulations of nanomaterials, where rapid ANNs can potentially replace sluggish finite difference methods.
6,958
2021-03-01T00:00:00.000
[ "Computer Science" ]
Condensation of actin filaments pushing against a barrier We develop a model to describe the force generated by the polymerization of an array of parallel biofilaments. The filaments are assumed to be coupled only through mechanical contact with a movable barrier. We calculate the filament density distribution and the force–velocity relation with a mean-field approach combined with simulations. We identify two regimes: a non-condensed regime at low force, in which filaments are spread out spatially, and a condensed regime at high force, in which filaments accumulate near the barrier. We confirm a result previously known from other related studies, namely that the stall force is equal to N times the stall force of a single filament. In the model studied here, the approach to stalling is very slow, and the velocity is practically zero at forces significantly lower than the stall force. Introduction Actin filaments and microtubules are key components of the cytoskeleton of eukaryotic cells. Both play an essential role for cell motility and form the core components of various structures such as lamellipodia or filopodia. They are active elements which exhibit a rich dynamic behavior. For instance, actin filaments treadmill in a process where monomers are depolymerized from one end of the filament while other monomers are repolymerized at the other end. Actin polymerization is highly regulated in the cell, through many actin binding proteins. Some of these proteins accelerate actin polymerization, while others crosslink filaments or create new branches from existing filaments. All these proteins ultimately control the force that a cell is able to produce [1]. Given the complexity of actin polymerization, many studies have focused on its basic structural element, namely the filament itself. For instance, a lower bound for the polymerization force generated by a single actin filament has been deduced from the buckling of a filament which was held at one end by a formin domain and at the other end by a myosin motor [2]. Other studies focused on the dynamics of single filaments through depolymerization experiments [3]. In order to understand the rich dynamical behavior of single filaments like actin or microtubules and the force they can generate, discrete stochastic models have been developed which incorporate at the molecular level the coupling of hydrolysis and polymerization [4,5,6,7,8,9,10,11]. The filament dynamics and the force generation are two related aspects: hydrolysis is not only relevant for understanding the single filament dynamics but also for the force generation, since the force generated by a filament is typically lowered by hydrolysis [6]. Ensembles of parallel interacting filaments are able to generate larger forces than single filaments, as in cellular structures called filopodia [12]. General thermodynamic principles controlling the force produced by the polymerization of growing filaments pushing against a movable barrier were put forward many years ago by Hill et al.
[13]. For many years however, it was unclear how to extend these results in order to understand theoretically the effect of interactions or collective effects in the process of force generation. Progress in this direction was made through the introduction of stochastic models for ensembles of parallel microtubules [14,15,16], and through the development of simulations for actin filaments in parallel geometry [17] or in networks [18]. In these works, the brownian ratchet model [19] was used at the single filament level, while some specific rule was assumed concerning the way the load is shared between the filaments. In the absence of hydrolysis and lateral interactions between the filaments, the stall force of an ensemble of N parallel filaments should be N times the stall force of a single filament, as confirmed either by a detailed balance argument valid only near stalling [15] or more recently by a more general analysis based on a decomposition into cycles [20]. In this work, we propose a new theoretical framework for this problem. One novel aspect as compared to previous work [15] comes from the fact that we model the dynamics of an ensemble of parallel non-interacting filaments at an arbitrary value of the force, rather than just predict the value of the stall force. Another important difference between this model and previous work is that our model allows an arbitrary number of filaments in contact with the moving wall, which allows the possibility of a condensation transition for the number of filaments at the wall. This paper is organized as follows: we first present the model, secondly the mean-field approach for the general case of an arbitrary N, then the simulations, and finally a theoretical analysis of the approach to stalling. We end with a discussion of various related experiments in this field, in which forces generated by a few actin filaments have been measured [21,22]. Model We consider two rigid flat surfaces: one fixed, where filaments are nucleated (nucleating wall), and one movable (barrier), whose position is defined to be the position of the filament(s) furthest away from the nucleating wall (thus there is always at least one filament in contact with the barrier). In the cellular environment, this "barrier" is often a membrane against which filaments exert mechanical forces. We do not model the internal structure of the filaments, and in particular we do not account for ATP hydrolysis. After nucleation, the filaments grow or shrink by exchanging monomers with the surrounding pool of monomers, which acts as a reservoir. The filaments are coupled only through mechanical contact with the barrier. In some previous models [14], a staggered distribution of initial filaments was assumed so that there would be only a single filament in contact at a time. Here we do not make such an assumption; on the contrary, monomers inside different filaments are precisely lined up. As a result, the number of filaments at contact is an arbitrary strictly positive integer.
It follows that we can separate the filaments into two populations: the free filaments, which are not in contact with the barrier, and the filaments in contact. Only the filaments in contact feel the force exerted by the barrier on them, and as a result this changes their polymerization rates as compared with free filaments. We assume that a monomer can be added to any free filament with rate U_0 or removed with rate W_0, as shown in Fig. 1. Similarly, a monomer can be added to a filament in contact with rate U(F), and removed with a rate W(F) (or W_0, as explained below). The values of the rates which we have used correspond to an actin barbed end and are given in Table 1. We also assume that the barrier exerts a constant force F on the filaments in contact; this force is defined to be positive when the filaments are compressed. We need now to specify more precisely how the force exerted by the barrier is shared by the filaments in contact. When a monomer is added to a filament in contact, the barrier moves by one unit, but only the filament on which the monomer has been added does work; we therefore treat all the other filaments as free during that step. Similarly, during depolymerization, filaments depolymerize from the barrier with the free depolymerization rate W_0 as long as there is at least one other filament in contact with the barrier, since in this case the depolymerizing filaments do not produce work. The depolymerization occurs with a rate W only when there is a single filament in contact with the barrier. In this case the filament produces work, since its depolymerization leads to the motion of the barrier. For a filament which has exchanged work with the barrier through addition or loss of monomers, we use a form of local detailed balance, which reads $U(F)/W(F) = (U_0/W_0)\,e^{-f}$ (1). This relation is obeyed by the following parametrization of the rates [15,23,24]: $U(F) = U_0\,e^{-\gamma f}$ and $W(F) = W_0\,e^{(1-\gamma) f}$ (2), where γ is the "load factor" and f is the non-dimensional force f = F d / k_B T, where d is the monomer length. Note that γ itself could be a function of the force; however, in the following we assume that it is just a constant. More elaborate treatments of the load dependence of the transition rates can be found for instance in Ref. [25]. An essential feature of this model is that although multiple filaments interact with the barrier, when a monomer is added to one of the filaments in contact, it must do work against the entire load. In the classification of [26], this corresponds to a scenario with "no load sharing". If the force could be shared by more than one filament, or if the monomers in different filaments were not precisely lined up, the above discussion would still apply: in this case a single filament would carry a fraction of the load at a time, and for that filament a similar local detailed balance would hold. In this case, although the stall force would be the same as in the "no-load-sharing" scenario, the form of the force-velocity curve would be affected. Such models have been considered in Refs. [15,16,26,20], but for simplicity, in the present paper, we focus on the "no load sharing" model. Theory In the particular case that there are only two filaments (N = 2), the master equation can be solved exactly in terms of the probability that there is a given gap at a given time between the two filaments, as shown in Appendix A.
Unfortunately, this approach is limited to the N = 2 case, because only in that case is there a single gap between the filaments. For N > 2 there are many gaps, so in general such an approach quickly becomes as complicated as the one based on the filaments themselves. So instead of looking for an exact solution, we provide in the section below an approximate but accurate mean-field solution for the general case N > 2. An ensemble of N filaments with N > 2 We recall that the position of the moving barrier coincides with the position of the longest filament, and we define N_i as the number of filament ends which are present at a distance i from the moving barrier. We take the convention that i = 0 corresponds to the barrier itself. Since each filament has only one active end and the total number of filaments is fixed to be N, we have the condition that $\sum_{i \geq 0} N_i = N$. After a careful account of all the possible events that can occur on any filament in a small time interval, we obtain the master equations, Eqs. 3-5, where δ_{N_0=1} represents the probability that there is only a single filament in contact. In deriving these equations, we have for instance implicitly replaced the joint probability to have N_i filament ends at position i and to have only one filament at contact at time t, namely P(N_i(t) = N_i, N_0(t) = 1), by the product of P(N_i(t) = N_i) and P(N_0(t) = 1). In other words, a mean-field approximation has already been used. A further consequence of this mean-field approximation is that in these equations δ_{N_0=1} can be replaced by its time-averaged value, which we call q (Eq. 6). The quantity q is a central feature of our model for N > 2. All subsequent results and calculations appearing in this paper follow from this mean-field approximation. At steady state, the l.h.s. of Eq. 3 is zero. The r.h.s. leads to a recursion valid for i ≥ 2, which can be solved after a few lines of calculation. The solution is given by Eq. 7, where l is the correlation length (expressed in number of subunits) given by Eq. 8. The other two equations, Eqs. 4-5, together with the normalization condition, fix N_2, N_1 and N_0. We find that the average number of filaments in contact with the wall, N_0, is given by Eq. 9. When N = 2, this mean-field solution agrees with the exact solution derived in Appendix A under the additional condition that γ = 1, in which case the on-rate carries all the force dependence. For an arbitrary value of γ, the mean-field solution does not agree with the exact result obtained for N = 2. This is expected, since the mean-field approximation should work well only in the limit of large N. The average velocity of the moving barrier is given by Eq. 10, where the first term within the parenthesis is the contribution of the filaments in contact polymerizing with rate U and the second term is the contribution from depolymerizing events of a single filament in contact. We have not found a way to solve in general the self-consistent equation satisfied by q, namely Eq. 6, except near stalling conditions, as explained in the next section. For this reason, we have calculated q numerically from simulations, and derived predictions from the mean-field theory assuming that q is known. For instance, using Eqs. 9 and 10, one obtains the average velocity.
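Anticipating the numerical validation described in the next section, the quantities q and N_0 can also be estimated by simulating the stochastic dynamics directly. The following is a minimal Gillespie-type sketch of the model under the no-load-sharing rules given in the Model section; the rate values, the monomer length and the load factor used here are illustrative actin-like placeholders, not the Table 1 parameters of the paper, and the run length is kept short for readability.

```python
import numpy as np

def simulate(N=10, f=2.0, gamma=1.0, U0=11.6, W0=1.4, d=2.7,
             steps=100_000, seed=0):
    """Gillespie simulation of N parallel filaments pushing a rigid barrier.

    f     : non-dimensional force F d / (kB T)
    gamma : load factor; loaded rates U = U0 exp(-gamma f), W = W0 exp((1-gamma) f)
    d     : monomer length (here in nm, so velocities come out in nm/s)
    Returns the barrier velocity, the time-averaged number of contact
    filaments N0, and the fraction of time q with exactly one contact filament.
    """
    rng = np.random.default_rng(seed)
    U = U0 * np.exp(-gamma * f)
    W = W0 * np.exp((1.0 - gamma) * f)
    h = np.ones(N, dtype=int)                 # filament lengths in monomers
    t, barrier0 = 0.0, 1
    tN0 = tq = 0.0
    for _ in range(steps):
        top = h.max()
        contact = np.flatnonzero(h == top)    # filaments touching the barrier
        free = np.flatnonzero(h < top)
        n_c = contact.size
        # A sole contact filament depolymerizes against the load (rate W);
        # otherwise contact filaments detach from the barrier at the free rate W0.
        r_cdep = (W if n_c == 1 else n_c * W0) if top > 1 else 0.0
        shrinkable = free[h[free] > 1]        # free filaments longer than one monomer
        rates = np.array([n_c * U,            # contact filament adds a monomer (pushes)
                          r_cdep,             # contact filament loses a monomer
                          free.size * U0,     # free filament adds a monomer
                          shrinkable.size * W0])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        tN0 += n_c * dt                        # time average of N0
        tq += dt if n_c == 1 else 0.0          # time average of delta_{N0=1}
        t += dt
        event = rng.choice(4, p=rates / total)
        if event == 0:
            h[rng.choice(contact)] += 1
        elif event == 1:
            h[rng.choice(contact)] -= 1
        elif event == 2:
            h[rng.choice(free)] += 1
        else:
            h[rng.choice(shrinkable)] -= 1
    v = d * (h.max() - barrier0) / t
    return v, tN0 / t, tq / t

if __name__ == "__main__":
    v, N0, q = simulate()
    print(f"v = {v:.3f} nm/s   N0 = {N0:.2f}   q = {q:.2f}")
```

Scanning the force f in such a simulation yields the force-velocity curve and the q(F) and N_0(F) data against which the mean-field predictions are compared below.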
Numerical validation of the mean-field approach

We have tested the validity of the mean-field approach using numerical simulations. We used the classical Gillespie algorithm [27] incorporating the Mersenne Twister random number generator. Runs were executed for N up to 5000. Up to 200 trial runs were used to derive averages and distributions. We validated the simulation results by comparing them with the particular cases N = 1 and N = 2, for which an exact solution is known (it is given in [6] for N = 1 and in the previous section for N = 2). By evaluating the parameter q from the simulations, we obtained very good agreement between the mean-field theoretical approach and the simulations for the force-velocity curve (shown in Fig. 2, bottom) and for the number of filaments N_0 in contact with the barrier (shown in Fig. 2, top). We find that the values of N_i as determined by theory do not deviate from the simulation values by more than one.

Condensation transition as a function of the applied force

At low forces, the barrier velocity is close to its maximum value, given by the free polymerization velocity. In this case, only one or a small number of filaments are in contact, and therefore q ≈ 1, which corresponds to a non-condensed or single-filament regime. The steady-state density profile of the filaments is broad, as shown in Fig. 2 (bottom, left inset), and the corresponding correlation length is large. With the parameter values corresponding to this figure, we have l ≈ 151 nm.

Conversely, at high forces, the filaments accumulate at the barrier. As a result q ≈ 0, and the density profile is an exponential, as shown in Fig. 2 (bottom, right inset), with a very short correlation length of the order of a monomer size. With the parameter values corresponding to this figure, we have l ≈ 4.1 nm. Since in this case the number of filaments in contact, N_0, is a finite fraction of N, we call this regime the condensed regime. In this high-force regime (typically near the stall force F = F_stall), the condition N U ≪ U_0 is obeyed. Since we also have q ≈ 0, Eq. 9 simplifies to Eq. 11. This equation can be used to predict the finite fraction of filaments in contact in the condensed regime. This condensed regime corresponds to the plateau in the curve of N_0 vs. F shown in Fig. 2 (top inset). In the conditions of this figure, Eq. 11 predicts a plateau at N_0 ≈ N/2 = 50, which is indeed observed, and as expected the plateau in N_0 (Fig. 2, top) occurs at the same force at which the velocity approaches zero (Fig. 2, bottom).

Theoretical stall force

Let us first discuss here the theoretical expression of the stall force, and then, in the next section, the practical way this limit is approached. The stall force is defined as the value of the force applied on the barrier for which the velocity given by Eq. 10 vanishes. For N = 1, the stall force is

F^(1)_stall = (k_B T / d) ln(U_0 / W_0).   (12)

For N = 2, using the results obtained in Appendix A for N_0 and q, we find that the stall force F^(2)_stall is exactly twice the stall force of a single filament, F^(1)_stall. In the general case of an arbitrary number of filaments N, we expect the stall force F^(N)_stall to be [15,16]

F^(N)_stall = N F^(1)_stall = N (k_B T / d) ln(U_0 / W_0).   (13)

This result can be derived from the following argument: near stalling conditions, the average density of filaments at contact, N_0/N, can be obtained from Eq.
11 above. This average density of filaments can be used as an approximation of the probability to have one filament in contact when N_0/N ≪ 1. Since q is the probability that there is a single filament in contact (in other words, there is one filament among N in contact and the remaining N - 1 are free), it follows that

q ≈ N_0 (1 - N_0/N)^(N-1),   (14)

which leads, using Eq. 11, to the explicit expression given in Eq. 15. We call this the binomial form for q. We note that Eq. 14 also means that

q ≈ N_0 exp(-N_0),   (16)

which corresponds to a Poisson statistics for the distribution of the number of filaments at contact. Now inserting the final expression for q of Eq. 15 into the stalling condition, namely the vanishing of the velocity given by Eq. 10, one obtains the theoretical stall force given in Eq. 13.

The theoretical expression of the stall force given by Eq. 13 has also been obtained in a recent study devoted to the stall force of a bundle of filaments [20]. This study is based on the model introduced in Refs. [14,15], which the authors modified to include lateral interactions between the filaments of the bundle. Using a theoretical argument based on the identification of relevant polymerization cycles, the authors of Ref. [20] confirm the expression of the stall force obtained before in [15], which is also our Eq. 13. More importantly, they show with this method that this expression has a universal character for models of this kind, hence in particular the independence of the stall force with respect to the load distribution factor γ. They also obtained force-velocity curves for various values of the lateral interaction and staggering distance, which, as we have checked, agree with the numerical results obtained in this paper when there is no lateral interaction and when the shifts are zero.

In Fig. 3, the value of q determined from the simulations is compared with the theoretical expressions given by Eq. 14 and Eq. 16 (both expressions give similar results). We note that the deviation between the simulation points and the theory increases as the force is lowered; this is due to the mean-field nature of the theory, which becomes invalid when the force is small, since the fluctuations are then large. For completeness, we also show in Fig. 4 the probability density function of the number of filaments at contact for various forces.

The approach to stalling

Let us now discuss more precisely how the velocity approaches zero. We find in our simulations that, for N larger than about 10, the velocity approaches zero at forces significantly lower than the stall force, as shown in Fig. 2 (bottom). We note that a similar effect has been obtained when analyzing the stall force of an ensemble of interacting molecular motors [28]. To quantify this effect, we therefore define an apparent stall force as the value of the force where the velocity drops to less than a small fraction α = 2.5 % of the value it has at zero force [26]. In the experimental situation, this bound could correspond, for instance, to the limit of resolution in the velocity measurement.

The value of the velocity at zero force corresponds to the maximum velocity. When F = 0, there is no coupling between the filaments, which behave as independent random walkers. The probability to have more than one walker at the leading position is zero in the long-time limit, which implies q = 1. Therefore, N_0 = 1 and the velocity at zero force equals the polymerization velocity of a single filament, V_0 = d (U_0 - W_0), which is mainly controlled by the monomer concentration. Now using the expression of the velocity at an arbitrary force given by Eq.
10, the expression of N_0 given in Eq. 9 and the parametrization of the rates of Eq. 2 for the particular case γ = 1, we find the expression of the apparent stall force given in Eq. 18. Since q ≪ 1 near stalling, we can write the more explicit expression given in Eq. 19. In Fig. 6, we show the apparent stall force given by Eq. 18 as a function of N, together with the theoretical stall force of Eq. 13.

Let us now show that filament condensation at the barrier and the drop in velocity occur simultaneously. Assuming for simplicity that γ = 1, N ≫ 1 and q ≈ 0 in the high-force regime, we can substitute Eq. 19 into Eq. 9. From this we see that, since α ≪ 1, the maximum number of filaments at the barrier is almost reached. If V_0 is the initial velocity and N^s_0 is the finite fraction of filaments at the barrier at the stall force, the condition that the velocity has dropped below αV_0 is equivalent to the condition that N_0 has reached N^s_0, which shows that filament condensation occurs at the value of the apparent stall force, a point which is confirmed by simulations. Indeed, in the case of Fig. 6 the apparent stall force is about 12.7 pN, and the condensation visible in Fig. 2 also occurs close to 12 pN.

Close to the stall force it is also possible to derive an analytic expression for the force-velocity relation by substituting into Eq. 10 the expressions of q given by Eq. 14 and Eq. 16. Assuming for simplicity γ = 1, and using Eq. 11, we obtain one expression with the binomial form and another with the Poissonian form. When these expressions are expanded close to the stall force, one obtains the same result in both cases. This indicates an exponential dependence of the velocity close to stalling, which is indeed present in the simulations, as shown in Fig. 5.

To summarize, we have shown in this section that the apparent stall force does not scale linearly with N as the theoretical stall force does, but rather as ln(N). The apparent stall force is the quantity of experimental interest; it is also near the apparent stall force that the condensation transition discussed in a previous section occurs (nothing special of that sort occurs near the theoretical stall force).

Related experimental work in connection with the model

In this section we discuss related experimental work. Although a precise comparison with the present model is not attempted, we hope that the discussion can be useful in identifying some relevant questions in this field. The force generation by parallel actin filaments growing out of an acrosome bundle has been measured in Ref.
[21]. The observation of a plateau in force measurements by optical tweezers is a good indication of the stalling regime, but the measured stall force is very small, comparable with that of a single filament, although many filaments are present (about a dozen). These results thus stand at odds with the theoretical predictions for the stall force obtained in Refs. [15,20] (and in the present paper). In the present paper, we have emphasized the fact that the approach to stalling is slow, which can lead to an underestimation of the true stall force. The resolution of the optical tweezers sets a limit on the detection of small velocities, which corresponds roughly to the criterion for the apparent stall force used in the previous section. However, with a dozen filaments, the apparent stall force should be significantly larger than that of a single filament. Another difficulty is that there is no indication in this experiment of the two regimes of low and high forces discussed in this paper. At this point, it may be important to say that the results of this experiment have not been reproduced; in fact, in a new experiment discussed below, in which the force generated by filaments growing outward from magnetic beads was measured, very different results have been obtained [22]. In view of all this, we think that the reason for these discrepancies may be found in effects which are not accounted for (such as buckling or filament cross-linking), or they may be attributed more simply to the fact that the two experiments have been done in very different biochemical conditions. Indeed, the authors of Ref. [21] used profilin to suppress spontaneous nucleation of actin filaments, while profilin was absent in [22]. The use of profilin in Ref. [21] introduced complications, since profilin also modifies the thermodynamics of the system by binding to actin monomers, and possibly interferes with ATP hydrolysis during polymerization.

The mechanical response of actin networks confined between two rigid flat surfaces has been probed using a surface force apparatus (SFA) in Ref. [29], and using an atomic force microscope (AFM) in [30]. Both experiments reported a load-history-dependent mechanical response, which presumably reflects a complex interplay between buckling and polymerization forces. This complex interplay makes it difficult to isolate the true contribution of polymerization forces. More recently, C. Brangbour et al. devised a new experimental setup in which actin is nucleated from magnetic beads covered by gelsolin [22]. A magnetic field is used to counteract the polymerization force, which makes it possible to measure force-velocity curves. As mentioned above, the results of Ref. [21] for the stall force of a single filament are not confirmed: on the contrary, the stall force which is obtained is of the order of 40 pN, which corresponds, according to Eq. 13, to about 25 active filaments. The general shape of these force-velocity curves is similar to the ones obtained in this work, but some deviations are present at low and high forces. These discrepancies suggest that our model may be too simple to fully explain this experiment, and that other aspects may be important. First, it would be necessary to go beyond the parallel organization of the filaments in order to better model the experimental geometry of Ref.
[22]. Secondly, it is probably important to account in the model for the possibility of nucleating new filaments from existing ones [31]. Thirdly, buckling forces could play an important role in the experiment. Some of these effects have been included in previous numerical simulations of branched actin networks [18,32], but they are typically difficult to study with analytical models of the kind presented here.

Conclusion

In this paper, we have provided a new theoretical framework to describe the dynamics of an ensemble of N parallel filaments with no lateral interactions, which exert a force against a movable barrier. The special cases N = 1 and N = 2 can be solved exactly, unlike the general case of arbitrary N, for which we have constructed a mean-field approach. We identify two regimes: a non-condensed regime at low force, in which filaments are spread out spatially, and a condensed regime at high force, in which filaments accumulate near the barrier. The transition occurs near the apparent stall force, where the velocity approaches zero. We find that for large N this regime where the velocity approaches zero occurs at forces significantly lower than the theoretical stall force, given by N times the stall force of one filament. In fact, the apparent stall force does not scale linearly with N as the theoretical stall force does; instead it scales logarithmically.

On the theory side, several extensions of our work are worth investigating. For instance, bundles can be formed experimentally by growing filaments in the presence of specific proteins which cross-link the filaments. To describe such a situation, it would be necessary to include lateral interactions. Another direction would be to explore the role of load sharing, as done in [26] for instance. Although the dynamics will be different, we still expect a condensation transition to be present in this case.

In the end, our model offers a very simplified view of the problem of force generation by actin filaments, but precisely for this reason we hope that it can be a useful starting point for more refined studies.

Figure 1. Representation of the filaments pushing on a barrier (the white vertical rectangle on the right, which exerts a force F on the filaments). The right panel corresponds to the case where only one filament is in contact with the barrier, while the left panel corresponds to the case where several filaments are in contact with the barrier. The on and off rates of monomers onto free filaments are U_0 and W_0. The on-rate on filaments in contact is U, and the off-rate is W when there is only one filament in contact and W_0 otherwise.

Table 1. Parameters characterizing an actin filament barbed end. W_0 is the free-filament depolymerization rate, k_0 is the rate constant entering the free-filament polymerization rate U_0 = k_0 C, where C is the concentration of free monomers, d is the monomer size and C_c is the critical concentration.
W_0 (s^-1): 1.4 | k_0 (µM^-1 s^-1): 11.6 | d (nm): 2.7 | C_c (µM): 0.14

Figure 2. Illustration of the condensation transition of actin filaments against a barrier. Top: Average barrier velocity vs.
force. Symbols represent simulation results; the dotted line represents mean-field predictions based on Eq. 10. Bottom: Average number of filaments in contact with the barrier. Symbols represent simulation results; the dotted line represents mean-field predictions based on Eq. 9. For both plots, the parameters are N = 100, γ = 1 and C = 0.24 µM. Inset, left: Density profile in the non-condensed regime (bars) as a function of the distance to the barrier, together with the mean-field theory prediction (line) from Eq. 7, for an applied force F = 2 pN, which is low with respect to the apparent stall force. Inset, right: Density profile in the condensed regime (bars) as a function of the distance to the barrier, together with the mean-field theory prediction (line) from Eq. 7, for an applied force F = 12 pN, which is close to the apparent stall force of ≈ 12.5 pN.

Figure 3. Comparison between theoretical and numerical estimates for the parameter q, which represents the probability that there is a single filament in contact. Symbols represent simulation results; the dotted line corresponds to Eq. 16 and the continuous line to Eq. 14 (both expressions are mean-field approximations valid in the high-force regime). The parameters are N = 10, γ = 1 and C = 0.24 µM.

Figure 4. Probability distributions of the number of filaments in contact with the barrier at various forces. The parameters are N = 100, γ = 1 and C = 0.24 µM.

Figure 5. Average barrier velocity on a logarithmic scale as a function of the force on a linear scale. Note that the velocity decreases to near zero exponentially when approaching the theoretical stall force, which is indicated by the arrow. For this value of the force, the numerical velocity is not strictly zero, but it is close to the uncertainty intrinsic to the simulation, which is here of the order of 10^-8 nm/s. The parameters are N = 10, γ = 1 and C = 1.2 µM.

Figure 6. Theoretical stall force F^(N)_stall (straight line, calculated from Eq. 13) and apparent stall force, both as computed from simulations (F^(N)_app, black symbols) and from the mean-field approximation given in Eq. 18 (dotted line), vs. the number of filaments N. The parameters are γ = 1 and C = 0.24 µM.
7,322
2011-01-06T00:00:00.000
[ "Physics", "Biology" ]
Modulation of surface meteorological parameters by extratropical planetary-scale Rossby waves

This study examines the link between upper-tropospheric planetary-scale Rossby waves and surface meteorological parameters based on observations made in association with the Ganges Valley Aerosol Experiment (GVAX) campaign at an extratropical site at the Aryabhatta Research Institute of Observational Sciences, Nainital (29.45° N, 79.5° E), during November-December 2011. The spectral analysis of the tropospheric wind field from radiosonde measurements indicates a predominant power at a period of around 8 days in the upper troposphere during the observational period. An analysis of the 200 hPa meridional wind (v200 hPa) anomalies from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis shows distinct Rossby-wave-like structures over this high-altitude site in the central Himalayan region. Furthermore, the spectral analysis of the global v200 hPa anomalies indicates that the Rossby waves are characterized by zonal wave number 6. The amplification of the Rossby wave packets over the site leads to persistent subtropical jet stream (STJ) patterns, which further affect the surface weather conditions. The propagating Rossby waves in the upper troposphere, along with the undulations in the STJ, create convergence and divergence regions in the mid-troposphere. Therefore, surface meteorological parameters such as relative humidity, wind speed, and temperature are synchronized with the phase of the propagating Rossby waves. Moreover, the present study has important implications for medium-range forecasting through the upper-level Rossby waves over the study region.

Introduction

The boreal extratropical winter climate is generally characterized by strong jets and large-amplitude quasi-stationary planetary-scale (Rossby) waves. The background flow dictates the propagation of Rossby waves, which are linked to the strong vorticity gradient around the tropopause in the extratropics (e.g., Hoskins and Ambrizzi, 1993; Branstator, 2002; Schwierz et al., 2004b). Theoretical and observational studies, along with idealized numerical experiments, have indicated that topographically and diabatically generated vortex anomalies at upper and lower levels can serve as triggers for Rossby waves propagating into the extratropics (Sardeshmukh and Hoskins, 1988; Hoskins and Karoly, 1981; Schwierz et al., 2004a; Niranjan Kumar and Ouarda, 2014). A significant role in the amplification of the waves is also played by diabatic processes, as demonstrated in previous case studies (Massacand et al., 2001). The most dominant periods of the stationary planetary-scale waves (Rossby normal modes), corresponding to their intrinsic periods, are 2, 5, 8.3, and 12.5 days (Forbes, 1995). Moreover, Rossby waves have been recognized as important features that can influence the predictability of midlatitude weather systems, which have a forecast capability of 1-2 weeks (Shapiro and Thorpe, 2004; Hoskins, 2006).

More recent work has examined the link between Rossby waves or Rossby wave breaking and high-impact weather events. For instance, Niranjan Kumar et al.
(2015) linked breaking Rossby waves to episodes of extreme precipitation over the Arabian Peninsula. It was found that the precursor waves to these breaking Rossby waves can be tracked up to 8 days in advance over the peninsular region. Similarly, precursor Rossby wave trains leading to heavy precipitation over the European Alps have been identified up to 8 days in advance, and as much as 10 days for a case of flooding in Germany (Grazzini and Van der Grijn, 2003; Martius et al., 2006; Grazzini, 2007). It is also worth mentioning here that Grazzini (2007) studied the performance of the European Centre for Medium-Range Weather Forecasts (ECMWF) system for these heavy-precipitation events and found that the forecasts had a higher than average skill on synoptic scales. The association between long-lived Rossby wave trains and/or Rossby wave breaking and intense European cyclones has been documented mainly in recent studies (e.g., Wirth and Eichhorn, 2014; Gómara et al., 2014).

It is well known from previous reports that Rossby waves remain coherent over many days under favorable atmospheric conditions. They also propagate over long distances and can contribute to teleconnecting remote regions of the atmosphere (Chang and Yu, 1999; Niranjan Kumar and Ouarda, 2014). For example, Chang and Yu (1999) have shown that during December-January-February in the Northern Hemisphere, Rossby wave packets are most coherent along a band that extends from northern Africa through southern Asia (maximum coherence) into the Pacific storm track. Moreover, knowledge of the evolution of coherent Rossby waves is crucial for weather forecasting, since they set the stage for weather systems to evolve. Furthermore, analyses of forecast errors of numerical weather prediction models have revealed a close link between the propagation of the error patterns and the propagation of upper-level Rossby waves (Davies and Didone, 2013; Grazzini, 2015). Therefore, the study of the characteristics of extratropical Rossby waves, such as their propagation, breaking, and organization, is imperative for understanding and forecasting regional and local weather.

A significant fraction of weather-related loss of life and property in the extratropical latitudes is associated with severe convection (Pielke and Klein, 2001; Fritsch and Carbone, 2004). Several types of extreme weather events (heat waves, droughts and heavy rainfall events) are considered by the IPCC (2012) to be becoming more frequent, more widespread, and/or more intense in most parts of the world during the 21st century. A large number of recent extreme weather events have occurred in many parts of the world (for more details, refer to Coumou and Rahmstorf, 2012). Furthermore, extreme events increase approximately in proportion to the ratio of the climate warming trend and short-term variability (Rahmstorf and Coumou, 2011). In several cases, these extreme weather events are connected within a synoptic-to-planetary-scale framework associated with upper-level Rossby waves (e.g., Petoukhov et al., 2013; Screen and Simmonds, 2014).
Recent reports have also concentrated on the extreme rainfall events that have resulted in several damaging floods in India (Goswami et al., 2006; Rajeevan et al., 2008; Guhathakurta et al., 2011). One such example of recent floods due to a heavy rainfall event occurred in the state of Uttarakhand in June 2013, affecting the lives of thousands of people (Joseph et al., 2015). Trend analysis of heavy rainfall events over the country indicates an increasing trend, especially in the northern parts of India (Sinha Ray and Srivastava, 2000). These devastating floods in Uttarakhand affected the tourism industry, which is the principal contributor to the state gross domestic product (GDP). In addition, extreme rainfall events were also recorded in the states of Himachal Pradesh and Jammu and Kashmir (Nibanupudi et al., 2015).

Thus, the state of Uttarakhand, situated in the Indian Himalayan region, is known to face disastrous climatic hazard events like floods and landslides. In order to improve prediction of high-impact weather in this region, it is also necessary to have knowledge of the global-to-regional influences on the evolution and predictability of high-impact weather. Furthermore, knowledge is still lacking on the characteristics of long waves that have a significant influence on the background flow and on the creation of flow patterns conducive to the development of heavy rainfall events. Therefore, the presence of synoptic features in the lower stratosphere, along with the complex topography, may have a significant influence on the evolution of extreme weather events over the region. However, to date, only limited studies exist describing the links between synoptic-scale tropospheric activity in the region and variability in the lower stratosphere.

Hence, the work presented here describes the observations at an extratropical site at the Aryabhatta Research Institute of Observational Sciences (ARIES), Nainital (29.45° N, 79.5° E; 1958 m a.m.s.l.), located in the state of Uttarakhand. Over this site, a strong link has been found between the surface meteorological parameters and propagating upper-level Rossby waves. The surface meteorological observations, along with radiosondes launched in association with the Ganges Valley Aerosol Experiment (GVAX) campaign during November-December 2011, offered a unique opportunity to better understand the link between the upper troposphere and surface weather. Sections 2, 3, and 4 describe, respectively, the data used in this study, the results, and their discussion. Section 5 summarizes and concludes the present work.
Data description

The data were acquired through measurements conducted at ARIES, Nainital, about 300 km northeast of New Delhi, India, during the GVAX campaign. GVAX was the result of a joint research collaboration between the Indian Space Research Organization (ISRO), the Indian Institute of Science (IISc), ARIES, and the US Department of Energy (DOE). The first Atmospheric Radiation Measurement Mobile Facility (AMF-1) was deployed at the ARIES Observatory, Nainital, during June 2011-March 2012. In this study, the analysis is based on radiosondes (Väisälä RS-92) launched every 6 h during November-December 2011. Pressure, humidity, wind, and temperature data were recorded every 2 s during ascent, giving a vertical resolution of roughly 10 m on average in the troposphere. We have also utilized the surface meteorological observations during the GVAX campaign. In situ sensors were used to measure the surface temperature, relative humidity (RH), and wind speed at a time interval of 1 min during the campaign period. For the present analysis, the 1 min data are rearranged to obtain daily averages. The detailed technical reports about the radiosonde and surface meteorological observation system (SMOS) can be found online (http://www.arm.gov/publications/handbooks).

To support the observations, we also make use of wind fields from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). MERRA is a reanalysis product generated by the National Aeronautics and Space Administration (NASA) Global Modeling and Assimilation Office (GMAO) using the Goddard Earth Observing System (GEOS) version 5.2.0 (Rienecker et al., 2011; http://gmao.gsfc.nasa.gov/research/merra/). MERRA has the advantage of incorporating information from a variety of recent in situ and satellite data streams, for example, observations from the Atmospheric Infrared Sounder (AIRS) and scatterometer-based wind retrievals. MERRA covers the period from 1979 to the present and continues to be updated with a latency on the order of weeks. The model has a native resolution of 72 layers in the vertical and 2/3° × 1/2° in the horizontal. In addition to the 6-hourly 3-D analysis at the default spatial resolution, MERRA also provides 3-hourly 3-D diagnostics at 1.25° × 1.25° resolution on 42 vertical levels, and the latter has been used in this study. In addition, Tropical Rainfall Measuring Mission (TRMM) 3B42 (version 7) daily rainfall intensities are also used over the study region during the GVAX campaign period of November-December 2011. This product incorporates different types of sensors, namely microwave and infrared (Huffman et al., 2007, and references therein). TRMM 3B42 products were obtained at a spatial resolution of 0.25° × 0.25° between 49.875° S and 49.875° N.

Results

Figure 1a and b portray the time-height cross section of horizontal winds from 1 November to 31 December 2011 based on 6-hourly radiosonde observations at the extratropical site of ARIES, Nainital. The zonal velocity is westerly through most of the troposphere and peaks at speeds in excess of 40 m s^-1 in the subtropical jet stream (STJ) at an altitude of 12 km (Fig. 1a). The STJ is typically centered around 30° latitude at an altitude of about 12 km and is strongest in the winter season. Hence, Fig. 1a clearly depicts the strong STJ during winter over the observational site. The meridional wind field is displayed in Fig.
1b. It is apparent that the meridional winds show alternating southerlies and northerlies in the upper troposphere. In the upper atmosphere, these anomalous patterns in the meridional winds, termed Rossby waves, become quite prominent and are associated with the jet stream at the top of the troposphere in the extratropical latitudes.

Many studies have focused on the importance of upper-level tropospheric jets as waveguides for the observed low-frequency waves (e.g., Hoskins and Ambrizzi, 1993; Branstator, 2002). According to the waveguide theory, the propagating Rossby waves are confined to a narrow belt such as the jet region, and any disturbances along the jet stream may either trigger or enhance wave responses downstream (Hoskins and Ambrizzi, 1993; Branstator, 2002). To reveal the properties of the waves, a Fourier spectral analysis (FSA) of the zonal and meridional winds at each height is performed. The FSA is a method by which a given time series is decomposed into several frequency components, and this decomposition gives us insight into the physical mechanisms underlying the data series (Jenkins and Watts, 1968). Figure 1c and d show the results from the FSA. The FSA indicates strong amplitudes at periods of around 8 days in the upper troposphere in both the zonal and meridional wind components. However, the meridional spectrum depicts larger amplitudes relative to the zonal spectrum in the STJ region. This could be due to the fact that the eastward-moving jet, comprising the ridges and troughs associated with Rossby waves, has a greater meridional component compared to the zonal component. Nevertheless, the predominant power in the horizontal winds indicates that these fluctuations are linked with upper-tropospheric Rossby waves with periods of around 8 days. Similar wave periods have also been noted in previous reports (Forbes, 1995). It is also observed from Fig. 1c and d that the zonal and meridional spectra peak at additional periods greater than 10 days. However, in this study we mainly focus on the spectral band of 6-10 days.

The vertical propagation characteristics of the Rossby waves are shown in Fig. 2. The height profiles of the amplitude and phase of the selected harmonic component (around 8 days) are calculated by using the FSA method. The equations for the estimation of the magnitude and phase are described in Smith (1997). The amplitude of the Rossby waves shows its maximum in the upper troposphere and decreases at lower altitudes in the troposphere, which further supports the result shown in Fig. 1b. However, it is interesting to note the phase propagation of the Rossby waves as depicted in Fig. 2. The near-constant phase in the upper troposphere around 12 km indicates the source region of the Rossby wave activity. The notable feature is that the zonal velocity shows upward phase propagation of the Rossby waves within the troposphere. In fact, the group velocity is orthogonal to the phase velocity, with upward phase propagation corresponding to downward energy propagation. Figure 2 thus suggests that wave energy flows downward into the troposphere from a source above in the upper troposphere. However, it is also true that the downward energy propagation occurs with a strong loss of amplitude in the troposphere, as shown in Fig. 2.
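For readers who want to reproduce this kind of spectral analysis, the sketch below shows a bare-bones Fourier spectral analysis of a 6-hourly time series; the synthetic input and variable names are our own, and in the actual study the analysis is applied level by level to the radiosonde winds.

import numpy as np

def fourier_spectrum(x, dt_hours=6.0):
    """Return periods (days) and power of the Fourier spectrum of a series
    sampled every dt_hours. The mean is removed before transforming."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.rfft(x))**2 / len(x)
    freq = np.fft.rfftfreq(len(x), d=dt_hours / 24.0)   # cycles per day
    periods = np.full_like(freq, np.inf)
    periods[1:] = 1.0 / freq[1:]
    return periods, power

# Synthetic example: a 61-day record (6-hourly) with an 8-day oscillation.
t_days = np.arange(0, 61, 0.25)
v = 5.0 * np.sin(2 * np.pi * t_days / 8.0) \
    + np.random.default_rng(1).normal(0, 2, t_days.size)
periods, power = fourier_spectrum(v)
peak_period = periods[1:][np.argmax(power[1:])]   # close to 8 days for this input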
Due to the increase in mean atmospheric density with decreasing altitude, the amplitude of the waves decreases as they propagate downward in order to conserve wave energy. Nevertheless, it is interesting to see whether there is any effect of the upper-tropospheric Rossby waves on the surface meteorological parameters. This is because Rossby waves are long waves that play an important role in the formation of regions of divergence and convergence in the upper troposphere, further affecting the surface weather.

The surface meteorological observations during the GVAX campaign are shown in Fig. 3. Figure 3 (left column) shows the time series of RH (a), pressure (b), temperature (c), and wind speed (d) from 1 November to 31 December 2011. RH, in particular, shows strong modulations with a period of around 8 days. RH is an important parameter that reflects the amount of moisture available in the air, the formation of clouds, and rainfall. Therefore, the time series of rainfall acquired from the TRMM data averaged over a latitude-longitude grid box (28-32° N, 78-82° E) is also overlaid along with RH in Fig. 3a. It is interesting to see that the rainfall time series oscillates with periods of between 6 and 10 days and closely follows the RH time series (Fig. 3a). This is more clearly seen in December 2011 than in November 2011, even though the percentage of RH is high. Nevertheless, the presence of moisture is not the only factor that determines rainfall, as instability conditions associated with strong divergence and convergence at upper and lower levels, respectively, are also required. This will be further discussed in this section.

The time series of surface pressure (Fig. 3b) also indicates strong fluctuations in the 6-10-day period band. Furthermore, the time series of surface temperature is shown in Fig. 3c. It is known that RH also depends on air temperature. For instance, the combination of Fig. 3a and c indicates that an increase in temperature is reflected as a reduction in RH and vice versa, so the interrelationship between temperature and RH is apparent.

Furthermore, confirmation regarding the modulations noticed in the time series of surface parameters is obtained through the FSA described previously (Fig. 3, right column). For instance, Fig. 3e shows the Fourier spectrum of RH. The horizontal dashed line indicates the statistical 90 % confidence level of the calculated spectral power. The RH spectrum shows significant power for a period of around 8 days; it is the only period which exceeds the 90 % confidence level. The significance of the largest peak in the spectrum is computed based on the methodology described in Press et al. (1994). It is interesting to note here that both the upper-tropospheric Rossby waves and the RH oscillate with the same periodicity. Also, the spectra of other surface parameters such as pressure (Fig. 3f), temperature (Fig. 3g), and wind speed (Fig. 3h) indicate strong power at 6- to 10-day periods. However, the wind speed spectrum is slightly shifted towards higher frequencies. This may be due to the Doppler shift effect in the presence of background surface winds. The spectra of temperature and wind speed remain below the statistical 90 % confidence level (not shown in the plots), yet they also indicate that there is an effect of the upper-tropospheric Rossby waves on the surface meteorological fields.
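The 90 % confidence level for the largest spectral peak can be estimated in several ways; one simple option, in the spirit of the white-noise false-alarm argument described by Press et al., is sketched below. This is our reading of such a test, not a transcription of the authors' code, and it is applied here to the power array from the previous sketch.

import numpy as np

def peak_false_alarm_probability(power):
    """False-alarm probability of the largest periodogram peak under a
    Gaussian white-noise null: P = 1 - (1 - exp(-z))**M, where z is the
    largest ordinate normalized by the mean power and M is the number of
    independent frequencies."""
    p = np.asarray(power[1:], dtype=float)      # drop the zero-frequency term
    z = p.max() / p.mean()
    M = p.size
    return 1.0 - (1.0 - np.exp(-z))**M

# A peak is taken as significant at the 90 % level when the probability of
# obtaining it by chance under the white-noise null is below 0.1.
significant = peak_false_alarm_probability(power) < 0.1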
In order to have a closer look at the physical mechanisms through which propagating Rossby waves in the upper troposphere can influence surface weather, the MERRA global reanalysis data are analyzed. For example, Fig. 4 shows the time-pressure cross section of RH overlaid with divergence contours over Nainital. The divergence F is computed as F = ∂u/∂x + ∂v/∂y, where u and v are the filtered zonal and meridional wind amplitudes, respectively, in the 6-10-day period band. Figure 4a shows interesting features of a series of divergence-convergence patterns with time in the upper troposphere (above 400 hPa) in December 2011. Conversely, one can also note the convergence-divergence patterns in the lower troposphere below the 500 hPa pressure level. Strong upper-level divergence and convergence occur when deep troughs and ridges exist in the flow aloft. This will be discussed further in this section. Figure 4a shows that the strong upper-tropospheric divergence corresponds to lower-tropospheric convergence during 5-11 December 2011. The convergence in the lower levels results in an increase in RH in the troposphere below 200 hPa, while strong convergence in the upper troposphere during 12-15 December 2011 results in divergence below 500 hPa. Hence, there is less moisture available over the observational site. Similarly to Fig. 4a, Fig. 4b shows the time-pressure cross section of RH overlaid with divergence contours in November 2011. It may also be noted from Fig. 4b that the upper-tropospheric divergence leads to convergence at lower levels, especially in the first half of November 2011. However, the strength of the divergence at upper levels is comparatively less than the divergence values in Fig. 4a. This further supports the results in Fig. 3a, where the rainfall in November 2011 is less than in December 2011.

The vertical section of the circulation seen in Fig. 4 indicates that the anomalous ascending motion in the troposphere appearing over the site is accompanied by anomalous upper-tropospheric divergence and lower-tropospheric convergence, and vice versa. It is apparent that the convergence-divergence pattern seen in the upper troposphere (Fig. 4) also influences the surface meteorological parameters (Fig. 3). As an example, the anomalous subsidence in the lower troposphere due to strong convergence in the upper troposphere results in lower RH and rainfall from 12 to 15 December 2011, which is evident from Fig. 3a. Likewise, the strong divergence in the upper troposphere results in increased RH and rainfall from 8 to 11 December 2011 (Fig. 3a). Hence, Figs. 3 and 4 signify that the upper-tropospheric divergence-convergence occurs in conjunction with the anomalous propagation of Rossby waves and further modulates the surface parameters.

The location of the Rossby wave and the associated jet stream can explain the formation of convergence and divergence regions over the observational site. Hence, at this juncture it is interesting to see the spatial pattern of the Rossby wave activity along with the latitudinal position of the STJ. To show this, we utilize the MERRA meridional wind anomalies at 200 hPa (hereafter abbreviated as v200 hPa) along with the upper-tropospheric zonal velocity at the 200 hPa pressure level (u200 hPa). Figure 5a and b show the v200 hPa anomalies and u200 hPa zonal velocities, respectively, on 8 December 2011, whereas Fig.
5c and d highlight these parameters for 12 December 2011. The v200 hPa anomalies are obtained by applying a band-pass filter between 6 and 10 days. The geographical location of ARIES (indicated by the star) along with the position of the STJ are also shown in Fig. 5a and c. The location of the jet stream is obtained by taking the maximum wind velocity within the latitude band 20-45° N for all longitudes. Note that the STJ does not follow a fixed latitudinal course but instead meanders. In particular, the upper-tropospheric signature of a synoptic Rossby wave is evident as major lateral undulations of the jet stream (Fig. 5).

The poleward curving of the jet stream over the observational site on 8 December 2011 is clear from Fig. 5a and b. Air moving poleward in the upper atmosphere undergoes divergence. This is also supported by Fig. 4, since a strong divergence exists in the upper troposphere over the observational site on 8 December 2011. The upper-air divergence is compensated for by convergence in the lower troposphere (Fig. 4). By contrast, Fig. 5c and d show the jet stream swinging towards the equator over the observational site on 12 December 2011. We can observe that as the jet stream enters a trough, it narrows and air converges into it. A strong convergence is also evident in Fig. 4 on 12 December 2011 at higher levels and is compensated for by divergence in the lower troposphere. Hence, the undulations in the STJ connected with the upper-tropospheric Rossby waves are responsible for the observed modulations seen in the surface meteorological parameters over the observational site of Manora Peak during the GVAX campaign in November-December 2011.

Discussion

The two key links observed in this study between the upper air and the surface are the Rossby waves and the subtropical jet. The Rossby waves and the STJ commonly work in tandem, providing vertical motions and influencing the daily weather. Hence, changes to the airflow patterns around the extratropical latitudes of the Northern Hemisphere seem to influence prolonged spells of extreme weather. For instance, the dynamical forcing associated with propagating upper-tropospheric Rossby waves from the west Pacific played a dominant role in initiating dry-spell conditions over the Great Plains region of the United States (Lyon and Dole, 1995; Chen and Newman, 1998). Furthermore, Schubert et al. (2011) showed that stationary Rossby waves account for more than 30 % (60 %) of the monthly mean precipitation (surface temperature) over many regions of extratropical land areas and, at the same time, that they are major players in the development of short-term climate extremes. In addition, Schubert et al.
(2011) stressed that current general circulation models do not simulate and predict the development of such Rossby waves. More recent studies have also reported that month-long periods of extreme weather are associated with anomalous jet stream circulation patterns characterized by amplified atmospheric planetary waves that meander around the globe (e.g., Petoukhov et al., 2013; Screen and Simmonds, 2014; Coumou et al., 2014). In particular, it has been found that regional weather is strongly influenced by persistent longitudinal planetary-scale waves with zonal wave numbers 6, 7, or 8 (Petoukhov et al., 2013; Coumou et al., 2014). Under certain conditions, these waves become trapped by midlatitude waveguides and are amplified by a quasi-resonant response to orographic and land-sea thermal forcing (Petoukhov et al., 2013). The quasi-resonant conditions associated with planetary-scale waves, however, require the formation of pertinent waveguides in the zonally averaged flow, and this formation process may involve highly nonlinear dynamics (Palmer, 2013). Nevertheless, the observations in this study were made during wintertime, when a stronger subtropical jet stream (Fig. 1a) acts as a waveguide for the planetary-scale waves.

Therefore, we analyzed the extratropical meridional velocity from the MERRA reanalysis data to characterize the high-amplitude Rossby wave patterns during the observational period. Even though the amplified Rossby waves with a period of around 8 days were already noted from Fig. 1, the zonal wave number of the Rossby waves is now estimated from the longitudinal distribution of the v200 hPa anomalies over the globe in the extratropical latitudes. Figure 6 shows the power spectrum as a function of zonal wave number estimated based on the FSA using the v200 hPa anomalies shown in Fig. 5a and c. It is apparent from Fig. 6 that both spectra show strong power near zonal wave number 6, further supporting earlier studies. For example, Petoukhov et al. (2013) found a strong contribution of quasi-stationary waves with zonal wave numbers 6, 7 and 8 to several recent severe regional weather extremes. Also, Coumou et al. (2014) demonstrated that high-amplitude quasi-stationary Rossby waves with zonal wave numbers 6, 7, and 8 resulted in persistent weather conditions at the surface and hence in a midlatitude synchronization of extreme weather, while Teng et al. (2013) showed that heat waves over the United States (US) are affected by planetary-scale wave number 5. Furthermore, Screen and Simmonds (2014) also demonstrated the link between amplified Rossby waves and surface temperature as well as precipitation extremes in the midlatitudes for the period 1979-2012. From the above studies, it is apparent that slow-propagating Rossby waves influence the surface weather. Moreover, slow wave propagation prolongs certain weather conditions and can therefore lead to extremes on timescales of weeks. For instance, Fig. 4 shows that the divergence and convergence persist for about 1 week over the observational site, which is further linked with the slow-propagating Rossby waves shown in Fig. 5a and c. Though the propagating Rossby waves on a sub-monthly scale observed in this study do not lead to extreme surface weather conditions, they nevertheless modulate the RH and rainfall patterns and the other surface meteorological parameters shown in Fig. 3 considerably.
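The wavenumber decomposition used for Fig. 6 amounts to a Fourier transform along a latitude circle. A minimal sketch is given below; it assumes the v200 hPa anomalies have already been band-pass filtered and averaged over the 28-50° N latitude band, and the grid spacing and array names are illustrative only.

import numpy as np

def zonal_wavenumber_spectrum(v_anom):
    """Power spectrum versus zonal wave number from values of a field
    sampled at equally spaced longitudes around a full latitude circle."""
    v_anom = np.asarray(v_anom, dtype=float) - np.mean(v_anom)
    power = np.abs(np.fft.rfft(v_anom))**2 / len(v_anom)
    wavenumbers = np.arange(power.size)          # k = 0, 1, 2, ... cycles per circle
    return wavenumbers, power

# Illustration with a synthetic wave-6 pattern on a 1.25-degree longitude grid.
lons = np.deg2rad(np.arange(0, 360, 1.25))
v_anom = 12.0 * np.cos(6 * lons) \
    + np.random.default_rng(2).normal(0, 3, lons.size)
k, power = zonal_wavenumber_spectrum(v_anom)
dominant_k = k[1:][np.argmax(power[1:])]         # close to 6 for this example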
While this study shows important implications of Rossby waves for surface weather on timescales of weeks, most of the previous studies discussed above focus on timescales of over 1 month. The position of the jet stream, along with the characteristics of the planetary-scale Rossby waves, varies substantially from day to day; hence, analysis over longer timescales shows distinct monthly and seasonal patterns. However, more recent studies have investigated the relationship between sub-monthly predictability and the characteristics of Rossby waves, such as their temporal duration, spatial extension, and area of genesis. For instance, using a 12,000-year integration of an atmospheric general circulation model, Teng et al. (2013) demonstrated that heat waves over the USA were preceded by 15-20 days by a pattern of planetary-scale waves. Furthermore, Grazzini (2015) reported that the predictive skill of medium-range forecasts increases with the presence of long-period Rossby waves. It is noted that medium-range forecast skill scores are above average when Rossby waves last for a duration of at least 8 days in the initial conditions. In contrast, poor medium-range forecast skill scores tend to be associated with shorter Rossby wave periods (Grazzini, 2015). In this context, the present study shows that the Rossby waves last for at least 8 days in each phase over the study region (see Fig. 4). Hence, there should be a fair chance of predictability on timescales of weeks and above.

Summary and concluding remarks

In this study, 6-hourly radiosonde observations as well as surface meteorological parameters were analyzed during the GVAX campaign in November-December 2011 at the extratropical site of Manora Peak, located in the central Himalayan region. It was observed that the upper-level wind fields were characterized by anomalous high-amplitude Rossby waves with a period of around 8 days. Furthermore, using the global MERRA reanalysis data, it was found that the quasi-stationary Rossby waves are characterized by zonal wave number 6. The vertical phase propagation of the Rossby waves indicates a downward injection of energy flux from the upper troposphere, but with a drastic loss in amplitude due to strong density gradients below the tropopause. However, the propagating Rossby waves exert a considerable influence on the surface meteorological parameters. A substantial modulation by the Rossby waves is seen in the time series of surface RH and rainfall anomalies. The time series of other surface parameters also fluctuate according to the phase of the upper-tropospheric Rossby waves. We further demonstrated the link between the Rossby waves and surface weather by analyzing the MERRA wind field data. The propagating Rossby waves in the upper troposphere, along with the undulations in the jet stream, create convergence and divergence regions in the mid-troposphere. Moreover, the convergence-divergence couplet modulates the surface meteorological parameters during the observational period.
The characteristics of the planetary-scale waves observed in this study are consistent with those of previous studies. However, the present study further investigates the evolution of the upper-tropospheric circulation anomalies associated with the wet and dry conditions near the surface at the observational site during winter. The winter seasonal anomaly is observed to be associated with several rapidly intensifying and decaying, large-amplitude, anomalous cyclonic and anticyclonic circulations. Furthermore, these anomalous circulations in the upper troposphere are linked with the propagating Rossby waves circumnavigating the globe. Hence, the study further implies that the hydrological extremes over the region during the winter can be studied through the succession of events rather than as a single seasonal event. In addition, it can be concluded that improved process understanding and better coordinated modeling and observational studies will be needed to advance medium-range forecasts over the study region.

Acknowledgements. The reanalysis data sets used in this effort were acquired as part of the activities of NASA's Science Mission Directorate and are archived and distributed by the Goddard Earth Sciences (GES) Data and Information Services Center (DISC) (http://mirador.gsfc.nasa.gov/). We are also thankful to the Global Modeling and Assimilation Office (GMAO) and the GES DISC for the dissemination of MERRA data. D. V. Phanikumar thanks the Director of ARIES for providing the necessary support. The topical editor, V. Kotroni, thanks R. Kramer and one anonymous referee for their help in evaluating this paper.

Figure 1. Zonal (a) and meridional (b) wind velocities measured by the radiosondes launched at 6-hourly intervals from the extratropical site at ARIES, Nainital (29.45° N, 79.5° E), between 1 November and 31 December 2011. The corresponding Fourier spectral amplitudes (m^2 s^-2) of the zonal and meridional perturbations are shown in (c) and (d), respectively. The contours in (c) and (d) are significant at a 90 % confidence level.

Figure 2. Height profiles of the amplitude (m s^-1) and phase (deg) of the Rossby waves (period ~ 8 days) in the zonal winds observed by radiosonde measurements between 1 November and 31 December 2011.

Figure 3. Left column: time series of surface meteorological parameters, namely (a) RH, (b) pressure, (c) temperature, and (d) wind speed at ARIES, Nainital, from 1 November to 31 December 2011. The rainfall measured by the TRMM during the observational period is also overlaid in (a) (dashed line). Right column: the corresponding power spectra estimated from the Fourier analysis of the (e) RH, (f) pressure, (g) temperature, and (h) wind speed perturbations.

Figure 5. Meridional wind anomalies (a) at the 200 hPa pressure level (v200 hPa), overlaid with the position of the subtropical jet stream (thick solid black line), and zonal wind (b) at the 200 hPa pressure level on 8 December 2011, obtained from the MERRA reanalysis. (c) and (d) Same as (a) and (b) except for 12 December 2011. The location of ARIES, Nainital (29.45° N, 79.5° E), is also indicated by the star in (a) and (c).

Figure 6. Power spectra of the v200 hPa anomalies on 8 and 12 December 2011. The spectra are obtained from the Fourier analysis of the longitudinal distribution of the v200 hPa anomalies over the globe, averaged over latitudes between 28 and 50° N. Both spectra show strong power near zonal wave number 6.
7,329
2016-01-25T00:00:00.000
[ "Environmental Science", "Physics" ]
Cancer Stem Cells and Signaling Pathways in Colorectal Cancer

Colorectal cancer (CRC) is the third most common cancer in males, the second in females, and the second leading cause of cancer-related death worldwide. Despite recent advances in chemotherapy and targeted therapy for CRC, the prognosis for patients with advanced cancer has remained poor, due to drug resistance, metastasis and recurrence. A small fraction of cells possess tumor propagation abilities; these are termed "cancer stem cells" (CSCs). A subset of colorectal cancer stem cells may hold a key to controlling cancer. The cancer stem cell (CSC) model suggests that tumors are hierarchically organized and that only CSCs possess cancer-promoting potential. The killing of CSCs is thought to be a critical component of effective antitumor therapies. A number of signaling pathways, most notably Wingless-related (Wnt), transforming growth factor-beta (TGF-β), Notch and Hedgehog signaling, as well as other mechanisms, have been found to be associated with CSCs in CRC. They play important roles in maintaining the growth and functional integrity of CSCs. Many new molecules are now being studied to block these pathways; some of these molecules block self-renewal and induce apoptosis in CSCs. The design of CSC-targeted interventions is therefore a rational strategy to reduce local recurrence and metastasis. This review aims to summarize current knowledge on CSCs and the signaling pathways relevant to CRC, which may lead to more effective therapeutic strategies for CRC.

INTRODUCTION

Colorectal cancer (CRC) is one of the most commonly diagnosed and lethal cancers worldwide. 1,2 It is the third most frequent cancer in males and the second in females, and it is the second leading cause of cancer-related death worldwide. Indonesia, with its population of 250 million, has age-standardized incidence rates for colorectal cancer per 100,000 population of 15.2 for males and 10.2 for females, an estimated 63,500 cases per year, and a burden almost similar to that of other countries with increasing populations. 2,3,4 Most CRCs were located in the rectum (74.6%) rather than in the colon (25.4%). 5 Colorectal carcinogenesis results from a series of genetic/epigenetic alterations and interactions with microenvironmental and germ-line factors that transform the normal colonic mucosa into an aberrant phenotype. 1,6,7 There are two models of carcinogenesis: the stochastic model and the cancer stem cell (CSC) model. The CSC model suggests that tumors are hierarchically organized and that only CSCs possess cancer-promoting potential. Besides all the advances in targeted therapy, many patients still fail to survive because they develop primary and acquired resistance. 8 The cause lies not only in tumor heterogeneity, but also in the fact that the tumor grows in a complex ecosystem that can influence the tumor's main driver pathways for survival. 8,9,10 Genetic diversity, the tumor microenvironment and epigenetics come together and influence the concept of maintenance of the stem cell state. This revolutionary idea changed the historical view: tumors may harbour stem cells, and through these active properties the stem cells may influence carcinogenesis and patient outcome as never appreciated before. 8

Carcinogenesis of Colorectal Cancer (CRC)

The stochastic model of tumorigenesis suggests that any kind of cell is capable of initiating and promoting cancer development.
The CSC model suggest that tumors are hierarchically organized and only a small fraction of cells (CSCs) possess tumor propagation abilities or cancer-promoting potential, and various molecular pathways, such as Wingless/Int (Wnt), Notch and Hedgehog, as well as the complex crosstalk network between microenvironment and CSCs, are involved in CRC. 1,11 The CSC model modifies the classic Fearon and Vogelstein model, which is characterized by step-by-step genetic modifications of the adenoma to carcinoma sequence, by placing the normal stem cell (SC) as the primary candidate for being the cell of origin, by underlying the crucial importance of microenvironmental signals, and by explaining tumor heterogeneity within the context of a clonally evolved CSC model. 1,12 Large Intestine and Stem Cell The inner luminal lining of the large intestine is a single layer of epithelial columnar cells folded into finger-like invaginations, which are embedded in the submucosal connective tissue to form the functional unit of the intestine, called the crypt of Lieberkuhn. Normal human colon consists of millions of crypts, each containing about 2000 cells. [12][13][14][15][16] The crypt base columnar cells (CBCCs) are regarded as stem cells in normal colon crypt. It is now recognized that colorectal cancer stem cells (CCSCs) derive from normal colonic crypt stem cells, located at the bottom area of the normal crypt, and differentiate into a variety of crypt cells under normal circumstances. Current evidence suggest that CCSCs are a special subgroup of cells in colorectal cancer with the ability to initiate differentiation towards malignant cells and exhibit self-renewal and metastasis potential. These cells differentiate into mature cells, and recruite cancer cells in the mature cancer tissue during CRC carcinogenesis. [12][13][14][15] Overall, 3 main epithelial cell lineages comprises a crypt: the columnar cells or colonocytes, the mucin-secreting cells or goblet cells, and the endocrine cells. Turnover of these cell lineages is a constant process, occurring every 2-7 days under normal circumstances and increasing following tissue damage. 12,17 This complex process is regulated by adult stem cells (ASCs) located within the crypt unit. The Cancer Stem Cell Model in Colorectal Cancer (CRC) The silencing of key genes can foster increased CRC-related pathway signaling (such as Wnt pathway), resulting in genomic instability and mutations in the downstream pathway genes, such as APC or β-catenin, and further activate these signaling pathways to foster colon tumorigenesis. [18][19] In the CSC model, precursor cells are a type of partially differentiated stem cell which has the capacity to differentiate into one cell type, and therefore are also called unipotent stem cells. Epigenetic changes, such as aberrant methylation, may result in silencing of genes p16, SFRPs, GATA-4/-5 and APC in stem/precursor cells of adult cell-renewal systems and may lock these cells into stem-like states that foster abnormal cell clonal expansion, and the stem/precursor cells are transformed into preinvasive cancer stem cells. 20 At this stage, preinvasive cancer stem cells turn into cancer stem cells that will ultimately become cancer cells. Epigenetic and genetic alterations play crucial roles in CRC carcinogenesis, while epigenetic alterations may be a predominant factor during early malignant transformation of colonic stem cells in the stem cell model. The current chemotherapies generally aim at mature cancer cells, not the CRC CSCs. 
Although these treatments can reduce the size of cancer tissue, they cannot completely kill CSCs. CSCs have higher proliferative potential and stronger resistance to chemotherapy and radiotherapy and differentiate into mature cancer cells when therapy is withdrawn, resulting in cancer recurrence and metastasis. Therefore, development of therapy targeting CSCs has a therapeutic potential to achieve better treatment to radically suppress cancer growth and metastasis. 21 CSCs are characterized by self-renewal, multipotency, limitless proliferation potential, angiogenic, and immune evasion features. These cells are considered highly malignant, fundamental for the growth of neoplasia, for recurrence, and for metastasis. Also they are considered resistant to chemotherapy, radiotherapy and target therapeutics. CSCs in the colorectal cancer, becoming a potential target for the treatment of the disease. 8,[22][23][24][25][26] CSCs have been isolated from many solid tumors in humans using the combination of cell surface markers, including CD44, CD24, ESA 18 among others. Several biomarkers of colorectal CSCs are as follows: CD133+/ CD44+/ ALDH1+, EpCAM+/ CD44+/CD24+, Lgr5+/ GPR49+, and CD133+/CD26+. 8,22,27 These CSC play a predominant role in the initial phase of tumorigenesis. These facts suggest that inhibition of CSCs may be a therapeutic target for cancer. 8 The killing of cancer stem cells is thought to be a critical component of effective antitumor therapies. Signaling Pathways Some pathways, including the wingless related (Wnt), transforming growth factor-beta (TGF-β), Notch and Hedgehog signaling pathways and other mechanisms have been found to be associated with CSCs in many cancer. 27,28,29 The alterations in microenvironment may also be responsible for the tumor formation by dominance in growth promoting signals over the growth inhibiting signals. Therapeutic options follow the basic characteristic features or related features of the CSCs; however need to elucidate further for careful clinical applications. 30 The Wnt pathway plays an essential role in the growth and maintenance of CSCs. This pathway is regulated at the level of β-catenin, which is degraded by adenomatous polyposis coli (APC). Mutations in the APC gene are found in most colorectal tumors. As a result, β-catenin is accumulated in the nucleus, where it activates target genes with important functions in colorectal cancer development. [31][32][33] Wnt signaling pathway plays a pivotal role in the regulation of epithelial stem cell self renewal. In contrast, dysregulation of this signaling has been implicated in many epithelial cancers, including colon carcinogenesis. 34,35,36 TGF-β signaling pathway is one of the most commonly altered pathways in human cancers. This pathway regulates cell proliferation, differentiation, migration, apoptosis, stem cell maintenance and function TGFβ superfamily ligands bind to a type II serine/threonine kinase receptor, which recruits and phosphorylates type I receptor. 31,37,38 The TGF-β pathway acts as a tumor suppressor pathway in healthy tissues but as a promoter in colorectal cancers. 39 Notch signaling is active in colon cancer initiating cells (CC-ICs) and is essential for the intrinsic maintenance of CC-ICs self-renewal and the repression of secretory cell lineage differentiation gene. 40 Notch signaling is an evolutionarily conserved pathway in multicellular organisms, regulates cell-fate determination during development and in stem cells. 
It mediates juxtacrine signaling among adjacent cells. Interaction between Notch and its ligands initiates a signaling cascade that regulates differentiation, proliferation, and apoptosis. 41 Hedgehog signaling, which is active in both colon cancer epithelial cells and, strikingly, CD133(+) cancer stem cells, promotes colon cancer growth, stem cell self renewal and metastatic behavior in advanced cancers. 42 The hedgehog signaling is named after the polypeptide ligand, an intercellular signaling molecule called Hedgehog (Hh) found in Drosophila. 43 The proliferation, migration, and differentiation of target cells are regulated by Hh signaling in a spatial, temporal, and concentration dependent manner. In mammals, three Hedgehog homologues are present, of which Sonic hedgehog (Shh) is the best studied. 43 Many new molecules are now being developed and tested in clinical trials, to block these pathways. Some of these new small molecules block the self-renewal and induction of apoptosis in CSCs. They act inhibiting the Wnt/β-catenin pathway, the Notch pathway and the hedgehog pathway. Inhibition of the STAT3 pathway inhibits cell proliferation in vitro and reduces tumor growth in vivo. This pathway is critical for the self-regeneration and survival of CSCs in various neoplasms. The STAT3 pathway is connected to/βcatenin pathway activity, which is also very important in the early stage of carcinogenesis and progression of disease in many cancers. 8,28,29 The Wnt/β-catenin pathway is mostly dysregulated in colorectal cancer and epidermal cancer; the hedgehog pathway is dysregulated in colorectal cancer, gastric cancer, pancreatic cancer, basal cell carcinoma and medulloblastoma; the Notch pathway dysregulated in colorectal cancer, pancreatic cancer, breast cancer and leukemia, and the JAK/STAT3 pathway in colorectal cancer, gastric cancer, breast cancer, and glioblastoma. 8,22,27 THERAPEUTIC OPTIONS The CSC-targeted interventions is a rational target, which will enhance responsiveness to traditional therapeutic strategies and reduce local recurrence and metastasis. The problem is how to identify these subclones which express dysregulation of these crucial pathways? Science has advanced and identified subpopulations, which are eventually responsive to the blockage of these new molecules. 1,45,46,47 It has become clearer that a tumor does not have a single genome, but multiple genomes, which belong to different sub-clones. These different sub-clones will contribute to intra-tumoral heterogeneity. Nevertheless, these different sub-clones don't all behave in the same way: some are active and maintain their capacity of auto-renewal and are pluripotent, others remain dormant in a quiescent form and others are in a post-mitotic condition and run into apoptosis. 8 The new concept that one or more of these clones may harbour CSC, redefines the driver clone "the harmful cancer clone" that attributes the growth and survival potential. These cells maintain the embryological potential to maintain its primary capacity to stimulate their own oncogenes and inhibit the tumor suppressor genes, favouring carcinogenesis. These clones are the hierarchy of tumor survival, and it should be the main aim in personalized medicine in the near future. The future of treatment of CRC lies in research on CSCs, signaling pathways. If these CSCs and signaling pathways better understood, CSC targeting via markers and targeting these aberrant signaling pathways are important offers a new strategy for cancer therapy. 
8,45,46,47 CONCLUSION Stem cells may become cancer stem cells through a series of epigenetic and genetic alterations. CSCs possess cancer-promoting potential, and various molecular signaling pathways, as well as the complex crosstalk network between CSCs and the microenvironment, are involved in CRC. The design of CSC-targeted interventions and of agents targeting dysregulated signaling pathways in CSCs will enhance responsiveness to therapeutic strategies and reduce local recurrence and metastasis.
2,973.6
2018-04-01T00:00:00.000
[ "Medicine", "Biology" ]
Modeling and Performance Analysis of Channel Assembling Based on Ps-rc Strategy with Priority Queues in CRNs Based on two types of priority queues, this paper proposes a polling scheduling strategy with reserved channel (Ps-rc strategy) for predefined priority services in cognitive radio networks (CRNs). Channel assembling (CA) technology and spectrum adaptation (SA) technology are adopted to dynamically adjust the assembled channels of secondary users (SUs) to improve the performance of the secondary network. Specifically, the SUs in CRNs are divided into two queues with different priorities; based on polling scheduling, a part of the idle channel is reserved for high-priority queue during the polling stage of the low-priority queue. The purpose is to increase the service quality (QoS) of high priority on the basis of providing fair scheduling. Furthermore, the CA-based channel access process of the proposed strategy is presented and modeled by continuous time Markov chain (CTMC). Then, the process of resource flow between users is mapped on CTMC, and the transition conditions and parameter sets of channel assembling covering all user activities of the system are derived. Finally, the system performance of the proposed CA-based Ps-rc scheduling strategy is simulated and evaluated, including network capacity, spectrum utilization, blocking probability, and forced termination probability. Numerical results show that the proposed strategy can improve the QoS of the predefined high-priority service without causing excessive starvation problem of low-priority service. Introduction At present, the number of the communication users is growing exponentially [1], but the operation of the traditional wireless communication system has the problem of low utilization due to the static allocation of the radio spectrum. With the cognitive radio (CR) technology, secondary users (SUs) are able to transmit information over the unused spectrum of primary users (PUs) to enhance spectrum utilization [2]. Relying on the ability of spectrum sensing, CRNs can realize dynamic spectrum allocation (DSA) and resource sharing [3], which provide flexible resource allocation for SUs. Among the existing DSA technologies, the application of channel assembling (CA) and spectrum adaptation (SA) can effectively improve the QoS of the secondary system. By adopting the CA technology, the service rate of SUs can be increased through assembling one channel to multiple channels. Meanwhile, because the absolute priority of PUs, the service of SUs may be interrupted. Through combining the technology of SA, the interrupted SUs can flexibly adjust the assembled channels according to the activities of PUs and other SUs. Therefore, the resource scheduling strategy adapted to application requirements is the key to improve the service quality (QoS) of the secondary system. This paper considers the scheduling requirements of the predefined priority services in special scenarios, such as battlefield and medical rescue, and proposes a polling scheduling strategy with reserved channel (Ps-rc) for multitype of data services in CRNs. Based on the technologies of CA and SA, a CTMC is developed to model the resource flow process, then the system performance of the CA-based Ps-rc strategy is evaluated. The main contributions are as follows: (1) A novelty scheduling strategy: combining polling scheduling and channel reservation, part of idle channel in the system is reserved for high-priority SUs in the polling process of low-priority SUs. 
Thus, a Ps-rc strategy with two types of priority queues is proposed to increase the possibility of high-priority SUs accessing the channel. This strategy provides a new scheduling idea for the predefined priority services in CRNs (2) Dynamic channel access process with channel assembling: by adopting the technologies of channel assembling and spectrum adaptation, we present the dynamic channel access process by analyzing the activities of PU arrivals, PU departures, SU arrivals, and SU departures in the system (3) A continuous time Markov chain: based on the dynamic channel access process, a CTMC model is developed and all possible destination states, user activities, transition rates, and transition conditions starting from a general state are obtained. Therefore, the resource flow of the secondary network is mapped on the Markov chain; the performance analysis of SUs is transformed into a mathematical problem (4) The conditions and implementation methods of channel assembling are given: the change dynamics of the users in the secondary system are divided and presented by the graphical method visually. Then, the dimension reduction is achieved by the marking method, and the analytic expressions of system performance metrics are given. MATLAB is used to evaluate and analyze the performance of the system Related Work CR can make full use of the idle spectrum and satisfy the requirements of data services especially in scenarios such as natural disasters and emergency treatment [4][5][6][7]. Previous studies [8][9][10] have proven that adopting a resource scheduling strategy that is suitable for application scenarios can help improve spectrum utilization. Jiao et al. pointed out that using the channel assembling (CA) technology and spectrum adaptive (SA) technology in secondary system can help for enhancing the QoS [10]. For CRNs containing multiple types of data services, a scheduling strategy based on service classification can achieve more efficient data transmission. References [11][12][13][14][15] investigated the resource scheduling schemes based on service classification. A channel assembling strategy based on priority queues was proposed by using channel aggregation and spectrum adaptive technologies [11]. However, strict priority scheduling maintains the queues with decreasing priorities. Low-priority queue can only start their services when the services of high-priority queue are all completed, which may cause the starvation problem of low-priority services due to long-term lack of service. Thus, [12] used the channel bonding technology with starvation mitigation and adopted a priority-based nonpreemptive M/G/1 queuing model to set up the spectrum handoff of SUs. Reference [15] divided the SUs into five priorities and arranged into two queues, then a round robin priority (RRP) scheduling was proposed to minimize the starvation problem of lowpriority users by executing short suspension of the highpriority queue. But the RRP scheduling uses a combination of dynamic and static spectrum allocation, which has the disadvantages of high system overhead and inconvenient hardware implementation. Since the arrival of the users is not constrained by time slot in CRNs, continuous time Markov chain (CTMC) is used to model the resource flow process to avoid the problem of time synchronization. CTMC modeling provides a theoretical basis for resource allocation-based performance analysis of secondary networks. 
References [10, 11] adopted CA and SA and established CTMC models to analyze the performance of heterogeneous SUs in CRNs. Reference [11] proved that the establishment of priority queues can effectively reduce the blocking probability and the forced termination probability of the high-priority queue. Reference [16] indicated that, when some SU traffic is allowed to transmit over less than one full channel, channel fragmentation technology allows noninteger channels to be used to satisfy the QoS of SUs and thereby enhance system utilization. Reference [17] proposed a flow-adaptive leased channel adjustment algorithm, and a CTMC was developed to conduct theoretical analyses. Polling scheduling is widely used in communication networks by serving each queue in turn [18, 19]. However, when a certain queue carries high traffic, the system performance is reduced because the other queues are not served for a long time. Reference [20] proposed a special priority reservation method, in which part of the transmission rate is reserved for high-priority services to improve the QoS. The above studies have inspired our research. This paper proposes a polling scheduling strategy with reserved channels (Ps-rc) with two types of priority queues. Based on the technologies of CA and SA, a CTMC model is developed to depict the resource flow process. The differences between our study and other studies are shown in Table 1. Moreover, the results of the above-mentioned research inform the ideas developed in this paper; thus, we summarize the research results of the related references in Table 2.

3. Channel Assembling Based on the Ps-rc Strategy

Priority-based scheduling is necessary to improve the reliability of important information during transmission. This paper proposes a polling scheduling strategy with reserved channels (Ps-rc) for predefined priority services in special scenarios, which combines channel assembling technology and spectrum adaptation technology. This section also presents the dynamic channel access process based on the proposed Ps-rc strategy.

3.1. System Model and Assumptions. A centralized cognitive radio network consists of two types of radios, primary users (PUs) and secondary users (SUs), which are controlled by the base station. Suppose there are M (M ∈ Z*) PUs in the system sharing the spectrum bandwidth; each PU can access only one channel. PUs have absolute channel access priority over SUs. SUs can observe the behavior of PUs through spectrum sensing and access the channels that are not occupied by PUs. Suppose that SUs have perfect sensing ability and that the sensing time is short enough that PUs will not arrive at a channel within this period. Once a PU arrival is detected, the ongoing SU will release the channel and perform spectrum adaptation. The adaptation time is also short, so that the transmission of the SU is not affected. This paper considers the scheduling of the SU base station in the uplink direction of data transmission. A buffer is set for SUs at each terminal, and the departure of an SU from the buffer is treated as the beginning of its channel service. Through the packet classifier, users in the secondary system are divided into high priority (SU h ) and low priority (SU w ) and enter the high-priority queue (L h ) and the low-priority queue (L w ), respectively. Suppose the two queues are isomorphic. The scheduling of SUs in the queues obeys first come, first served (FCFS).

3.2. Ps-rc Strategy.
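A minimal, illustrative Python sketch of the bookkeeping implied by this system model is given below, assuming a simple object with M shared channels and two finite FCFS buffers; an arriving SU is classified as high or low priority and blocked when its queue is full. All names and the data layout are hypothetical and only meant to make the model concrete.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SecondarySystem:
    """Bookkeeping for the system model above (illustrative names, not from the paper)."""
    M: int                      # total licensed channels shared with PUs
    n_h: int                    # buffer capacity of the high-priority queue L_h
    n_w: int                    # buffer capacity of the low-priority queue L_w
    busy_channels: int = 0      # channels currently used by PUs and ongoing SUs
    L_h: deque = field(default_factory=deque)   # FCFS queue of waiting SU_h
    L_w: deque = field(default_factory=deque)   # FCFS queue of waiting SU_w

    @property
    def n_free(self) -> int:
        return self.M - self.busy_channels

    def admit(self, su_id: str, high_priority: bool) -> bool:
        """Packet classifier: enqueue an arriving SU, or block it if its queue is full."""
        queue, cap = (self.L_h, self.n_h) if high_priority else (self.L_w, self.n_w)
        if len(queue) >= cap:
            return False            # blocked: no space left in the buffer
        queue.append(su_id)
        return True

system = SecondarySystem(M=6, n_h=1, n_w=2)
print(system.admit("su_h_1", True), system.n_free)   # True 6
```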
As shown in Figure 1, the Ps-rc strategy consists of three units: a priority assignment unit, an information scheduling unit, and a data transmission unit. Priority assignment unit: through the packet classifier, SU h (SU w ) enters L h (L w ) and waits for scheduling. Information scheduling unit: to ensure fairness between the two queues (L h and L w ), polling scheduling is adopted. Meanwhile, to improve the QoS of SU h , a part of the idle channels is reserved for the high-priority queue during the polling period of SU w . The reserved channels are allocated to the SU h that enter the system during the polling period of SU w . Therefore, the secondary system state is divided into the polling stage, in which both SU h and SU w are scheduled, and the priority support stage, in which only SU h is scheduled. Each stage is assumed to have a fixed length of time, and the time length set for the priority support stage will not cause excessive delay in the transmission of SU w . Data transmission unit: once a PU arrives at a channel occupied by an SU, the transmission of that SU is interrupted. The interrupted SU exits the channel at once and adjusts its number of aggregated channels based on spectrum adaptation. A feedback loop is adopted, so the interrupted SUs go back to the end of their original queue to improve the possibility of being served. If the corresponding queue is full, the service of the SU is forced to terminate. Besides, if there is no space left when an SU first arrives at the queue, the service of the SU is blocked. Channel assembling is adopted for SUs to enhance service efficiency; the service rate of SUs can be increased by assembling one channel into multiple channels.

[Table 2. Research results of the related references: [10] the performance of the secondary network is superior when dynamic resource allocation is adopted; 2014 [11] introducing queues for secondary users can significantly improve secondary system performance; 2016 [14] differential scheduling can meet the demand of emergency data transmission; 2016 [16] the performance of the network varies according to different access schemes; 2017 [12, 13] and 2018 [15] the starvation problem of the low-priority queue should be taken into account when differential scheduling is adopted.]

Accordingly, SU h can start service by aggregating h channels and enhance its service rate by increasing the number of aggregated channels up to the upper bound v (h, v ∈ Z*, and h ≤ v). SU w can start service by aggregating w channels, and its service rate can be improved by increasing the aggregated channels up to the upper bound u (w, u ∈ Z*, and w ≤ u). We set h < w to ensure that the high-priority services are started first. The implementation of the proposed Ps-rc strategy is shown in Algorithm 1. As shown there, the total number of currently available idle channels, N free , is determined at the beginning of each phase. In the priority support stage, the SU h in L h are served first. If L h is empty, the idle channels are assembled for the ongoing SUs according to the following Rule 1. Rule 1: the ongoing SU h with the minimum number of aggregated channels (denoted SU h -min) assembles the idle channels until all ongoing SU h have been assembled up to the upper bound v. If there are still idle channels, the remaining channels are assembled to the ongoing SU w with the minimum number of aggregated channels (denoted SU w -min) until all ongoing SU w reach the upper bound u.
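The following is a small Python sketch, under simplifying assumptions, of the two allocation steps just described: the polling-stage split that reserves ⌊ηN free⌋ idle channels for the high-priority queue before serving the SRL, and Rule 1, which hands leftover idle channels to the ongoing user with the fewest aggregated channels (SU h first, then SU w) up to the bounds v and u. It is not the paper's Algorithm 1 itself; the function names and data structures are invented for illustration.

```python
import math

def polling_stage_allocation(n_free, eta, w, srl_len):
    """Split idle channels at the start of a polling stage (sketch of the reservation step).

    Returns (reserved_for_high, channels_given_to_srl, served_low_priority_users).
    """
    reserved = math.floor(eta * n_free)          # kept back for SU_h arriving during this stage
    usable = n_free - reserved
    served = min(srl_len, usable // w)           # each SU_w on the SRL needs w channels to start
    return reserved, served * w, served

def rule1_assemble(idle, ongoing_high, ongoing_low, v, u):
    """Rule 1: give idle channels to the ongoing SU_h with the fewest channels first (cap v),
    then to the ongoing SU_w with the fewest channels (cap u). Lists hold per-user channel counts."""
    for users, cap in ((ongoing_high, v), (ongoing_low, u)):
        while idle > 0 and users and min(users) < cap:
            i = users.index(min(users))
            users[i] += 1
            idle -= 1
    return idle                                   # channels that could not be assembled

print(polling_stage_allocation(n_free=6, eta=0.5, w=2, srl_len=3))   # (3, 2, 1)
print(rule1_assemble(3, [1, 1], [2], v=4, u=4))                      # 0 channels left over
```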
In the polling stage, the data packets in L w are marked, and a service request list (SRL) for SU w is generated. Suppose that this process is short enough and none is reached within this period. If SRL is empty, this polling stage will be terminated. If SRL is not empty, a part of the channel will be reserved for SU h in L h according to the allocation factor, ηð0 < η < 1Þ, then the remaining channels will serve the SU w in SRL. Here, η is determined by the service classification requirements of different scenarios. We assume that the setting of η will not cause the SU w in L w to be out of service for a long time. This process is done on lines 10~15 of Algorithm 1, where j SRL is the number of SU w on SRL. Please note that the channel allocation of SU w in L w within the polling stage is only for the SU w marked on SRL at the beginning of this stage. Therefore, the newly arrived SU w in this polling stage needs to wait for at least 13: Then w•j SRL channels give service to SU w in SRL 14: Else j SRL − bηN free /wcSU w s waiting in the queue & bηN free c channels give service to bηN free /wcSU w s 15: End if 16: Else N free channels go back to (A). 17: End if (polling stage is over. The marked SU w s release the channel after completing the service.) Algorithm 1: Polling scheduling strategy with reserved channels. 4 Wireless Communications and Mobile Computing one service cycle. After this polling, the marked SU w on SRL will be cleared and remarked in the next polling stage. But the service will not be interrupted immediately when the polling time ends. If PU does not occupy the aggregated channels, the channels will be released when the ongoing service is complete. Therefore, there are still SU w running in the priority support stage. The Ps-rc strategy can be applied by the predefined highpriority and low-priority services in special environments to achieve traffic classification scheduling of different transmission requirements. For example, in emergency rescue scenarios, high-priority services can be defined as elastic or real-time services related to medical information according to message correlation, so as to improve the security and reliability of vital signal transmission. 3.3. Dynamic Channel Access Process. Dynamic channel access process is based on the Ps-rc strategy. In this section, the arrival and departure process of uses in CRNs is introduced, which contains four events including PU arrivals, PU departures, SU arrivals, and SU departures. Event A: PU Arrivals. When PU arrives at the system, if N free > 0, it will access to the idle channel. If N free = 0, the PU will interrupt the service of a SU h or a SU w , and the interrupted SU will exit the channel and perform spectrum adaptation. If the interrupted SU performs spectrum adaptation successfully, then the SU will continue its service, which includes two cases: (1) The remaining number of the interrupted SU can continue its service. (2) An ongoing SU with maximum number of channel aggregation donates a channel to the interrupted SU, and the remaining channels of the donor can also support the service. Furthermore, if SU h and SU w have the same number of channels, SU w will donate the channel first. If the adaptation is not successful, it means that all of the ongoing SU cannot donate the channel for the interrupted SU. The interrupted SU will release the remaining channels and return to the queue according to the feedback loop. 
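As a rough illustration of the spectrum-adaptation step in Event A, the sketch below checks whether a preempted SU can continue with its remaining channels and, if not, looks for a donor among the ongoing SUs, choosing the one holding the most channels and letting a low-priority SU w donate first on ties, provided the donor keeps at least its own lower bound. The dictionary representation and function name are assumptions made for brevity, not the paper's implementation.

```python
def spectrum_adaptation(interrupted, ongoing):
    """Event A sketch: a PU preempts one channel of the interrupted SU.

    interrupted : dict {'channels': int, 'min': int}  (min = its lower bound, h or w)
    ongoing     : list of dicts {'channels': int, 'min': int, 'prio': 'h' or 'w'} for other active SUs
    Returns True if the interrupted SU can keep transmitting, False if it must re-queue.
    """
    interrupted["channels"] -= 1                      # channel taken by the arriving PU
    if interrupted["channels"] >= interrupted["min"]:
        return True                                   # remaining channels still support the service

    # Look for a donor: the ongoing SU holding the most channels that can spare one
    # while keeping its own lower bound; on ties, a low-priority SU_w donates first.
    donors = [su for su in ongoing if su["channels"] - 1 >= su["min"]]
    if not donors:
        return False                                  # adaptation failed: back to the queue or forced termination
    donor = max(donors, key=lambda su: (su["channels"], su["prio"] == "w"))
    donor["channels"] -= 1
    interrupted["channels"] += 1
    return interrupted["channels"] >= interrupted["min"]

# Example: an SU_h holding its minimum of 1 channel is preempted while the only other
# ongoing user is an SU_w already at its own lower bound of 2, so no one can donate.
print(spectrum_adaptation({"channels": 1, "min": 1},
                          [{"channels": 2, "min": 2, "prio": "w"}]))   # False -> SU_h returns to L_h
```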
In the worst case, if the queue is full, the service of the interrupted SU will be forced to terminate. All assembled channels of the terminated SU will be released. Then, if there are enough released channels, the services of SU in the queue will be started. The channel assembly follows Algorithm 1. Since h < w, this can usually occur when SU w is forced to terminate and SU h is waiting in L h . If there is no SU using the idle channels in the queue, these channels will be allocated to the ongoing SUs obeying Rule 1. Event B: PU Departures. The departure of the PU releases a channel. If the departure occurs in the priority support stage, according to FCFS rules, only when L h is not empty and there are h − 1 idle channels in the system, the SU h in L h can use the idle channel. If there is no SU h in L h , use the idle channel; it will be assembled to the ongoing SU obeying Rule 1. It needs to determine whether SRL of this polling stage is empty if the departure occurs in the polling stage. There are two cases: (1) SRL is empty. In this case, there is no waiting packet in L w or bηN free c ≥ w ⋅ j SRL (in Algorithm 1); the assembling of the idle channel is the same as that in the priority support stage. (2) SRL is not empty, then bηN free c < w ⋅ j SRL . The idle channel is preferentially assembled to SU w on SRL who can start the service when there are w − 1 idle channels in the system. If the current number of the channels is not enough to start the service of SU w but can start SU h , the SU h in L h will be started. If there are no SUs in the queue, we use the idle channel; the channel will be allocated to the ongoing SU obeying Rule 1. Event C: SU Arrivals. Whether in the polling stage or not, the service of SU h can be started when there exists h idle channels upon it arrivals. If there are less than h idle channels, the ongoing SU with maximum number of aggregated channels will donate to the newly arrived SU h . After donation, the ongoing SU should be able to satisfy its own service. If the service of the new SU h still cannot be started, the ongoing SU with the second maximum number will donate the channel, etc. In particular, for the ongoing SUs with the same number of aggregated channels, the low-priority SUs (SU w ) will donate the channel first. If the total number of the channels provided by all of the ongoing SUs cannot start the service, the new SU h will go back to L h . However, if L h is full, the newly arrived SU h will be blocked. If a new SU w arrives during the priority support stage, it will wait in L w . When the system starts a new polling stage, the newly arrived SU w will appear on the new SRL. But it will be blocked if L w is full. If it arrives during the polling stage, the new SU w needs to wait at least one polling cycle (i.e., the SU w that reaches the system during the polling stage cannot appear on the latest SRL). Accordingly, the new SU w will be blocked if L w is full. Event D: SU Departures. With SU departures, all aggregated channels are released. Consequently, we suppose the SU with m aggregated channels ðSU h ðmÞÞ releases m channels due to the departure, and the SU with n channels ðSU w ðnÞÞ releases n channels. If the SU leaves in the priority support stage, the released channels will first schedule to the SU h in L h according to Algorithm 1. Otherwise, these channels will be scheduled to the ongoing SU obeying Rule 1. 
When the departure occurs in the polling stage, if the SRL is empty, channel access is the same as in the priority support stage. If the SRL is not empty, idle channels are assigned to the marked SU w on the SRL until this polling stage ends or the services on this SRL are completed. Because h < w, it is easier to start the service of SU h than that of SU w . If the number of idle channels is not enough to start an SU w but can start an SU h , the SU h in L h will be started. If no SU in the queues uses the idle channels, the channels are scheduled to the ongoing SUs obeying Rule 1.

4. CTMC Analysis and QoS Measures

In this section, a continuous time Markov chain (CTMC) is used to avoid the problem of time synchronization between primary and secondary users. Based on the dynamic channel access process, a CTMC is developed to model the resource flows in the secondary network. Let a general state of the system be

x = (l h , … , l v , j w , … , j u , l pu , l hq , l wq , s),

where l i (h ≤ i ≤ v) is the number of SU h with i aggregated channels, j k (w ≤ k ≤ u) is the number of SU w with k channels, l pu is the number of PUs in the system, and l hq and l wq are the current queue lengths of L h and L w , respectively. s is an indicator, where s = 1 means the secondary system is in the polling stage and s = 0 means it is in the priority support stage. The set of feasible states of the system can be written as

S = { x : b(x) ≤ M, 0 ≤ l hq ≤ n h , 0 ≤ l wq ≤ n w , ∑_{k=1}^{u−w} k·j w+k < w if l wq > 0, ∑_{i=1}^{v−h} i·l h+i < h if l hq > 0, s ∈ {0, 1} }.

Here, n h and n w are the total capacities of L h and L w . The condition ∑_{k=1}^{u−w} k·j w+k < w if l wq > 0 means that if L w is not empty, the maximum total number of channels that all ongoing SU w can donate is always less than the lower bound of SU w . Similarly, the condition for l hq > 0 means that if L h is not empty, the maximum total number of channels that all ongoing SU h can donate is always less than the lower bound of SU h . b(x) is the total number of utilized channels in state x, which can be described by

b(x) = ∑_{i=h}^{v} i·l i + ∑_{k=w}^{u} k·j k + l pu .

Furthermore, M − b(x) is the number of idle channels, which equals the N free defined in Algorithm 1. To analyze the transition rates from a general state x to other states, we assume that the arrivals of PU and SU services obey Poisson distributions with arrival rates λ p for PUs, λ h for SU h , and λ w for SU w , while the service times of PU and SU services are exponentially distributed with departure rates μ p for PUs, μ h for SU h , and μ w for SU w . However, the scheduling with feedback means that the external arrivals of SU services no longer strictly obey a Poisson distribution. To simplify the study, it is assumed that the feedback mechanism has little effect on the SU arrival process, i.e., all exogenous SU arrivals obey Poisson distributions. Accordingly, the CTMC model can be established based on the four events of PU arrivals, PU departures, SU arrivals, and SU departures, and the state transitions from a general state x can be developed.

Event A: PU Arrivals. The state transitions from a general state x upon a PU arrival are derived. Indexes such as the destination state, user activity, and transition rate are given in Figure 2, while the transition conditions are listed in Figure 3. In Figure 3, the arrival behavior of PUs is not constrained by the stage of the secondary system, so the transition condition is s = 0 or s = 1. The interrupted SU (SU h , for example) returns to the queue if the queue is not full, such as under the condition l hq < n h in state t 4 ~ state t 10 .
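To make the state definition concrete, here is a short Python sketch (with invented names) that stores a state x = (l h , …, l v , j w , …, j u , l pu , l hq , l wq , s), computes b(x), and reproduces the idle-channel count M − b(x). The parameter values h = 1, w = 2, v = u = 4, M = 6 and the example state are taken from the numerical section later in the paper.

```python
from dataclasses import dataclass
from typing import Tuple

H, V, W, U, M = 1, 4, 2, 4, 6   # lower/upper channel bounds for SU_h and SU_w, total channels

@dataclass(frozen=True)
class State:
    l: Tuple[int, ...]   # l[i] = number of ongoing SU_h holding (H + i) channels, i = 0..V-H
    j: Tuple[int, ...]   # j[k] = number of ongoing SU_w holding (W + k) channels, k = 0..U-W
    l_pu: int            # number of active PUs (one channel each)
    l_hq: int            # queue length of L_h
    l_wq: int            # queue length of L_w
    s: int               # 1 = polling stage, 0 = priority support stage

def busy_channels(x: State) -> int:
    """b(x): channels used by ongoing SU_h, ongoing SU_w and PUs."""
    return (sum((H + i) * n for i, n in enumerate(x.l))
            + sum((W + k) * n for k, n in enumerate(x.j))
            + x.l_pu)

# The example state (3 0 0 0 0 0 0 2 0 2 0) used later in the paper:
x = State(l=(3, 0, 0, 0), j=(0, 0, 0), l_pu=2, l_hq=0, l_wq=2, s=0)
print(busy_channels(x), M - busy_channels(x))   # 5 busy channels, 1 idle
```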
Otherwise, the SU service is forced to terminate, such as l hq = n h in state t 19~s tate t 25 . The arrival of PU may interrupt the service of any SU, such as that in state t 4~s tate t 10 and state t 19~s tate t 25 (Figure 2), there may exist ongoing SU w ðwÞ in the system when SU h ðhÞ is interrupted. In Figure 3, the condition that the idle channels are released by the interrupted SU h ðhÞ or the forcibly terminated SU h ðhÞ can be assembled by the ongoing SU w ðwÞ is ∀k ðw < k ≤ uÞ, j k = 0, which means there is no SU w with more than w aggregated channels in the system. The condition ∀ i ðh < i ≤ vÞ, l i = 0, in state t 4 and state t 19 means that there is no SU in the system that can perform adaptation, while ∀ i ðh = i = vÞ, l i > 0, means the SUs in the system cannot perform spectrum adaptation. The conditions in state t 11 and state t 26 have similar meanings. Due to the assumption that h < w in state t 18 and state t 33 , if SU h exists in L h , i.e., l hq ≥ 1, the idle channels released by the interrupted SU w or the forcibly terminated SU w can be used by SU h in L h . Event B: PU Departures. State transition from a general state x upon a PU departure is derived. The destination state, user activity, and transition rate are shown in Figure 4, and the conditions are listed in Figure 5. When the departure of PU occurs during priority support stage, the ongoing SU h can use the idle channel released by PU in state t 3 which is based on that the idle channel is not occupied by SU h in L h . There are two possible cases: (1) No packet exists in L h , i.e., if h = 1, then l hq = 0:2. (2) There are packets in L h , but the total number of the remaining channels is not enough to start the service of SU h in L h , i.e., if 1 < h ≤ v, then l hq ≥ 0 and M − bðxÞ < h − 1. Furthermore, in state t 5 , the ongoing SU w can use the idle channel released by PU which is based on the SU h in L h , and the ongoingSU h do not use the idle channel, which also has two cases: (1) Ongoing SU h cannot assemble more channels, i.e., if l m > 0, When the departure occurs during the polling stage, if SRL is empty, the transition conditions are the same as those in the priority support stage. Otherwise, if SRL is not empty, SU h in L h can use the idle channel in state t 2 , which is based on that the sum of the idle channel released by PU and the remaining idle channels in the system is not enough to start the service of SU w on SRL but can start the service of SU h , i.e., h ≤ M − bðxÞ + 1 < w. The ongoing SU h can use the idle channel in state t 4 only if no SU in the queue uses the channel. The ongoing SU w can use the idle channel in state t 6 only if no SU in the queue uses the idle channel and the ongoingSU h do not use the channel. The condition that the service of SU h can be started upon its arrival in state t 1~s tate t 6 is that the sum of the idle channels in the system and the channels that can be donated Figures 8 and 9. The total number of the idle channels used by the ongoing SUs should not be greater than the number of the channels released by the leaving SUs, such as the condition m ≥ ∑ v−1 i=h ðv − iÞ l i − ðv − mÞ in state t 4 . Besides, in state t 10 , state t 11 , state t 21 , and state t 22 , the departure of SU occurs in the polling stage, and the condition that SU w in L w can use the idle channel is SRL ∉ ∅. In state t 12~s tate t 20 , since n ≥ w, if SU w ðnÞ leaves during the polling stage, there must be SRL = ∅. 
Above all, by integrating all the possible destination states, activities, transition rates, and transition conditions from Event A to Event D, the steady-state probability of the system, π(x), can be calculated from the global balance equations and the normalization condition:

πQ = 0, ∑_{x∈S} π(x) = 1,

where π is the steady-state probability vector and Q is the transition rate matrix generated by q ij and q ii . Here, q ij (i, j ∈ S, and i ≠ j) is the transition rate from any reachable state t i to another reachable state t j , which is the sum of the possible transition rates considering all events A~D, and S is the feasible state set of the system (Equation (2)). The diagonal elements q ii of the transition rate matrix Q are given by q ii = −∑_{j∈S, j≠i} q ij . Then, the performance of the secondary system can be measured as follows:

(1) Network capacity (packets/unit time) is the service completion rate, i.e., the average number of services completed per unit time. The capacity of the SU h service is denoted C h and that of the SU w service C w .

(2) Spectrum utilization is determined by the ratio of the average number of utilized channels to the total number of channels, U = (1/M) ∑_{x∈S} π(x)·b(x), where b(x) is the number of channels in use in state x and M is the total number of channels.

(3) Blocking probability is the probability that a newly arrived SU cannot be served because all available channels in the system are busy. (i) Blocking probability of SU h : a newly arrived SU h is blocked if the following conditions are met: (1) the sum of the idle channels available in the system and the channels that can be donated by the other ongoing SUs does not satisfy the lower bound of channel assembly of SU h , and (2) L h has no space left to hold the SU h . Thus, the blocking probability of SU h is P h b = ∑_{x∈A h} π(x), where A h ⊂ S is the set of states in which both conditions hold. (ii) Blocking probability of SU w : a new SU w is directly put into L w whether it arrives during the polling stage or the priority support stage, and its service is blocked if there is no space left in L w . The blocking probability of SU w is P w b = ∑_{x∈A w} π(x), where the set A w is the trigger condition for SU w to be blocked: A w = {x ∈ S : l wq = n w }.

(4) Forced termination probability refers to the probability that an SU service is forced to terminate because of the absolute channel access priority of PUs. Forced termination of an SU occurs only upon PU arrivals in Event A and requires the following conditions: (1) the PU preempts an SU h with h channels or an SU w with w channels, (2) there are no remaining channels in the system, and (3) the other ongoing SUs cannot perform spectrum adaptation to share a channel with the interrupted SU.
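The computation described here, solving the global balance and normalization equations for π and then summing π(x) over a set of blocking states, can be sketched in a few lines of NumPy. The 3-state generator below is a toy stand-in chosen only to keep the example self-contained; it is not the paper's 443-state transition rate matrix, and the choice of "blocking" states is arbitrary.

```python
import numpy as np

def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a CTMC generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])     # transposed balance equations plus normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0                           # normalization condition
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state generator (rows sum to zero); state 2 plays the role of a "blocking" state.
Q = np.array([[-1.0, 0.7, 0.3],
              [ 0.5, -0.9, 0.4],
              [ 0.2, 0.6, -0.8]])
pi = steady_state(Q)
blocking_states = [2]
print("pi =", np.round(pi, 4), " P_block =", round(pi[blocking_states].sum(), 4))
```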
Forced termination probability can be determined by the ratio of the average forced termination probability to the arrival rate of SUs who are not blocked (iii) Forced termination probability of SU h : the forced termination of SU h occurs in state t 19~s tate t 25 of Event A, the probability, P h f , can be expressed as where Λ h is the arrival rate of the unblocked SU h : Λ h = λ h ð1 − P h b Þ:B h is the set of states of S U h who are forced to terminate: 10 Wireless Communications and Mobile Computing (iv) Forced termination probability of SU w : the forced termination of SU w occurs in state t 26 state t 33 of Event A, the probability, P w f , can be expressed as where Λ w is the arrival rate of the unblocked SU w : Λ w = λ w ð1 − P w b Þ:B w is the set of states of SU w who are forced to terminate: Numerical Simulation and Analysis In this section, four study cases are conducted to evaluate the secondary system performance based on the proposed Ps-rc strategy and the channel access process. Performance analysis is carried out in a CRN with 6 channels ðM = 6Þ sharing the spectrum bandwidth, i.e., there are 6 PUs in the secondary network. The arrival rate and departure rate of PUs are set as λ p = 1, μ p = 0:5, respectively. Only one secondary user is allowed to arrive at once in the secondary system; the arrival rate and departure rate of SU h and SU w are set as λ h = 1, λ w =2, μ h = 1, μ w = 1, respectively. Dynamic parameters ðh, v, w, u, n h , n w Þ in the spectrum access process are set as h = 1, w = 2, v = u = 4, n h = 1, n w = 2: The allocation factor η = 0:5: Based on the above setting, the general state x of Equation (1) can be specified: where l i ði = 1, 2, 3, 4Þ is SU h with i aggregated channels; j k ðk = 2, 3, 4Þ is SU w with k channels; the superscript of each position indicates the upper limit of the number of users at the corresponding position, such as l 2 3 means that there are at most 3 SU h s in the system simultaneously who assemble 2 channels; and l wq 2 represents that there are at most 2 SU w s waiting in L w . Therefore, the transfer events can be specific. Due to high dimension, the transition rate matrix Q generated by x needs to be downgraded. A random general state ð3 0 0 0 0 0 0 2 0 2 0Þ is taken as an example to show the degradation process. Obviously, the current secondary system is in priority support stage in state ð3 0 0 0 0 0 0 2 0 2 0Þ: And there are 7 flows, among which 3 are ongoing SU h with one aggregated channel, 2 PUs are in the system, and each PU occupies one channel. Also, there are 2 SU w waiting in L w . Marking method to achieve the dimension reduction: the possible transition of state ð3 0 0 0 0 0 0 2 0 2 0Þ is triggered by all user activities related to this state in Event A~Event D. Denote the state ð3 0 0 0 0 0 0 2 0 2 0Þ as 7 − * , where the first position, 7, is the total number of flows in this state and the second position, * , is the counting in order (the maximum value is the total number of states with 7 flows in the system). When there arrives a new flow in this state, it may be the arrival of PU or any SU (Event A or Event C may be triggered), then there will be 8 flows in the next state. When there is a flow leaving from this state, it may be the departure of PU or any SU h (Event B or Event D may be triggered), then there will be 6 flows in the next state. According to one-step transition, state 7 − * can only . ,lv,jw+1, . . ,lv,jw, . 
The transition rate matrix Q is a block matrix, whose number of layers is theoretically generated by the product of the superscript (Equation (14)); the size is 5184 × 5184: However, due to the system settings, it is not always reachable between any two states. In our numerical simulation process, the size of matrix Q was simplified to 443 × 443: But the specific form of Q is still hard to show here. Therefore, we take a random state i, i = ð3 0 0 0 0 0 0 2 0 2 sÞ to show the way of obtaining the elements in matrix Q (s is the secondary system stage). Figure 10 shows the transition from state i ð3 0 0 0 0 0 0 2 0 2 sÞ to all of the reachable states j. According to Figure 10, the transition rate of the i-th row (i.e., starting from the state ð3 0 0 0 0 0 0 2 0 2 sÞ) in matrix Q to all reachable states j (i.e., j-th column) can be obtained. The other specific states can be done in the same manner. Then, the transition rate matrix Q can be developed. Figure 9: The transition conditions of the system from the general state x to all reachable states with SU departures. 13 Wireless Communications and Mobile Computing the network capacity of SUs increases with the arrival rate, but the trend is slowing down. This is because as the arrival rate of SUs increase, the channels in the system are fully utilized for saturation. For SU h , the network capacity increased by 8.48% at the maximum difference position as the buffer capacity of L h increased from n h = 0 to n h = 2: For SU w , the network capacity increased by 9.3% at the maximum difference position as the buffer capacity of L w increased from n w = 0 to n w = 4: It shows that expanding the buffer capacity can increase network capacity. By allowing more flows to wait in the queue, the number of the flows stored in the system also increases, which makes it possible for more SUs to access the channel. Therefore, increasing the buffer capacity of SUs can effectively increase the network capacity. Comparing Figures 11 and 12, we can see that the network capacity of SU h is always higher than that of SU w , which is due to the proposed Ps-rc strategy paying more attention to high-priority services. In order to alleviate the starvation problem of low-priority SUs, we set a larger buffer capacity for SU w : Therefore, it can be seen that as the arrival 14 Wireless Communications and Mobile Computing rate increases, the increase of network capacity of SU w becomes greater. It is reflected in the slope change of SU w which is greater than that of SU h . Case 2: Spectrum Utilization. Figure 13 shows the change of the arrival rate of SUs with spectrum utilization. Let the x-axis be the arrival rate of SU w (it is the same for SU h ). Similar to network capacity, the change of spectrum utilization increases first and then becomes stable. As the arrival rate increases, more channels are utilized. The available channels in the system gradually reach saturation. The increase of buffer capacity increases the possibility of SUs being served, thus effectively increasing the spectrum utilization. Figures 14 and 15 show the relationship between the blocking probability of SU and the arrival rate of PUs. Figure 14 shows the change of blocking probability of SU h and SU w under different buffer capacities. When the arrival rate of PUs increases gradually, more channels are occupied by PUs because of the absolute channel access priority, resulting in a linear increase in the blocking probability of SUs. 
Since we set the polling loop for the low-priority queue, the newly arrived SU w cannot be served directly. Therefore, the blocking probability of SU w is higher than that of SU h : By comparing the blocking probability of SUs with different buffer capacities, it can be seen that the larger the buffer capacity, the more services of flows that can be stored, then the lower the blocking probability. Figure 15 shows the comparison of the blocking probability of SU h with the proposed Ps-rc strategy and without using the proposed strategy when the buffer capacity is n h = 1 and n w = 2: After using the proposed Ps-rc strategy, the blocking probability of SU h is reduced by 33.3% at the maximum difference position, λ p = 0:6: The reason is that the polling mechanism is adopted in L w , and then, a part of the channels are reserved for SU h in the polling stage. In fact, the service of SU w is delayed in exchange for the service of SU h : It is obvious that the adoption of the Wireless Communications and Mobile Computing Ps-rc strategy can significantly reduce the blocking probability. Therefore, the Ps-rc strategy proposed in this paper can meet the demands of improving the priority of important information in CRNs. However, when the arrival rate of PUs increases, the blocking probability of SU h still increases significantly, because SUs have a lower priority than PUs in CRNs. Figures 16 and 17 show the relationship between the forced termination probability of SUs and the arrival rate of PUs. Figure 16 shows the changes of the forced termination probability of SU h and SU w under different buffer capacities. Figure 17 shows the comparison of the forced termination probability of SU h with the Ps-rc strategy and without using the strategy when the buffer capacity is fixed ðn h = 1, n w = 2Þ. Case 4: Forced Termination Probability. From Figure 16, we can see that the forced termination probability of SU h under the Ps-rc strategy is always lower than that of SU w . As the arrival rate of PUs increases, the number of channels available to SUs decrease, then the number of SUs being served is decreased. When the queue of SUs in the buffer is full, the services of the interrupted SUs are forced to terminate. Therefore, the forced termination probability of SUs increases significantly. It can be seen from Figure 16 that setting a larger buffer capacity for SU w can effectively reduce the forced termination probability of SU w . It can be seen from Figure 17 that the proposed Ps-rc strategy can reduce the forced termination probability of S U h . After using the proposed Ps-rc strategy, the forced termination probability of SU h is reduced by 40% at the maximum difference position, λ p = 0:8: In particular, when the arrival rate of PUs is low, the proposed strategy has more channel access opportunities for SU h . Therefore, the proposed Ps-rc strategy can guarantee the QoS of the predefined high-priority SUs in CRNs. Conclusions A Ps-rc strategy with two types of priority queues was proposed in this paper to satisfy the classification demands of the predefined priority services in CRNs. The realization process was presented by algorithm. Combining with the technologies of channel assembling and spectrum adaptation, the dynamic channel access process based on the proposed strategy was depicted. The process was triggered by four events including PU arrivals, PU departures, SU arrivals, and SU departures. 
Furthermore, a continuous time Markov chain (CTMC) was developed, and the resource flow processes triggered by user activities were mapped on the state of CTMC. Then, all possible destination states starting from a general system state were obtained; the corresponding user activities, transition rates, and conditions were also derived. By dimension reduction, the steady state of the secondary system was further obtained, thereby obtaining the performance metrics including network capacity, spectrum utilization, blocking probability, and forced termination probability. Finally, four study cases were carried out to evaluate the performance of secondary system. Numerical experiments show that the proposed Ps-rc strategy can effectively reduce the blocking probability and forced termination probability of high-priority SUs. Results prove that the proposed Ps-rc strategy can improve the service quality of high-priority services without causing the problem of excessive starvation of low-priority services. 16 Wireless Communications and Mobile Computing Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
10,774.4
2022-02-08T00:00:00.000
[ "Computer Science", "Engineering" ]
2 Deficit ( Limited ) Irrigation – A Method for Higher Water Profitability Increasing world population and limitation of water and soil resources make the control of resource usage essential. Policymaking for the future must be based on a more profitable use of water and soil and it is necessary to consider economical, political and social aspects in order to reach a better condition in water and soil resources. Agricultural management, macro and micro policy should be based on sustainable use of limited water and soil resources. In some cases expanding farmlands needs vast investment while some times it is not possible. Plant production per given amount of water should be basis for organizing possibilities and invests to increase water profitability (Fereres and Soriano, 2007; Blum, 2009). The necessity of planning to increase the water use efficiency is inevitable from world population growth and water amount. Introduction Increasing world population and limitation of water and soil resources make the control of resource usage essential.Policymaking for the future must be based on a more profitable use of water and soil and it is necessary to consider economical, political and social aspects in order to reach a better condition in water and soil resources.Agricultural management, macro and micro policy should be based on sustainable use of limited water and soil resources.In some cases expanding farmlands needs vast investment while some times it is not possible.Plant production per given amount of water should be basis for organizing possibilities and invests to increase water profitability (Fereres and Soriano, 2007;Blum, 2009).The necessity of planning to increase the water use efficiency is inevitable from world population growth and water amount.Development pressure irrigation system, crop production based on crop rotation, plant nutrition and pest control are all for better use of water and soil resources. Undoubtedly future water management should be based on more production per given amount of water.Deficit or limited irrigation is one of the irrigation methods which has been designed for more efficient use of water in some crops (English, 1990).Environmental conditions, type of crop and available possibilities have particular importance in water management regarding deficit irrigation (English, 1990). In this method a plant won't encounter moisture deficiency during growth and development under normal condition, in other words, plant absorbs water requirements for metabolic functions easily.However, when a drought stress happens to a plant either in all its or at least in one of its growth stages, it won't be able to do metabolic functions due to water limitation or unbalanced water situation. 
Drought stress is described by its intensity and duration which have interaction with plant growth stage (Samarah and Al-Issa, 2006;Farooq et al., 2009).For example even a medium drought stress at anthesis time of wheat or barley causes more reductive effect on yield than a drought stress during grain filling (English and Nakamura, 1989;Martyniak, 2008;Katerij Irrigation Systems and Practices in Challenging Environments 20 et al 2009; Maleki farahani, 2009).Effect of severe short stress is more than a medium long stress, because under medium stress the plant is able to reduce bad effects of stress by stimulating some metabolic and morphologic mechanisms.Therefore it can be said that environmental stress including drought stress at any plant growth stage which has more contribution to the yield has determinant effects on yield reduction. Deficit irrigation Deficit irrigation is a water management method in which water will be saved with accepting little yield reduction without any severe damage to the plant (English 1990).Medium stress may be a delay in irrigation for a few days or reduced water consumption in each irrigation, but plant shouldn't encounter severe drought stress at any mentioned situation. The principal attitude in deficit irrigation methods are using saved water for expanding farmlands, saving water for using in critical growth stage or using for cultivating of cash crops like summer plants. Crop production response to given water Generally yield increases sharply per given water unit in production curve.After a sharp incline in yield, there is a fairly increase until it reaches maximum yield and after that yield will be constant with more given water.The zone for applying deficit irrigation is when yield increases slowly with each given water unit.Selection of exact point for water amount in deficit irrigation depends on following factors: 1. Type of crop 2. Possibilities for farmland expansion 3. Energy usage per area unit for farmland preparation 4. Costs of sowing, cultivation operations and harvesting Methods for application of deficit irrigation Selecting the methods depends on available possibilities and soil texture.Considering soil conditions, deficit irrigation is possible in two ways: In soils with light texture (sandy soil), soil doesn't have high water holding capacity, thus in such a situation irrigation periods may be constant or its frequency increases, however, in deficit irrigation the water amount reduces compared to normal irrigation in each irrigation (English, 1990). 
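To illustrate the production-curve reasoning above, the following Python sketch uses an assumed saturating yield function (made-up parameters, not calibrated crop data) to show how the marginal yield per unit of applied water declines as irrigation increases, which is the zone where deficit irrigation is normally considered.

```python
import numpy as np

def yield_response(water_mm, y_max=10.0, k=0.008):
    """Illustrative saturating production function (t/ha) versus applied water (mm).
    y_max and k are made-up shape parameters, not measured crop data."""
    return y_max * (1.0 - np.exp(-k * water_mm))

water = np.arange(0, 801, 100)                     # applied water, mm
y = yield_response(water)
marginal = np.gradient(y, water)                   # extra yield per extra mm of water
productivity = np.divide(y, water, out=np.zeros_like(y), where=water > 0)   # yield per mm applied

for w_, y_, m_, p_ in zip(water, y, marginal, productivity):
    print(f"{w_:4d} mm  yield {y_:5.2f}  marginal {m_:6.4f}  productivity {p_:6.4f}")
# The marginal yield per mm falls steadily as applied water increases; deficit irrigation
# targets the flat part of the curve, where a small yield loss saves a large share of water.
```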
Accordingly, an experiment was conducted by Jorat et al. (2011) on two forage sorghum cultivars. The irrigation treatments, assigned to the main plots, consisted of IR70: irrigation after 70 mm of accumulated evaporation from a Class A evaporation pan (control), IR100: irrigation after 100 mm of accumulated evaporation, and IR130: irrigation after 130 mm of accumulated evaporation. Sowing densities of 15, 20 and 25 plants per square meter and two sorghum varieties (Speedfeed and Pegah) were allocated as a factorial arrangement to the subplots. The results indicated that the highest forage yield was produced by the Speedfeed variety at the control (IR70), medium water stress (IR100) and severe water stress (IR130) treatments with a density of 25 plants per square meter. Plant height followed an increasing trend as sowing density increased and decreased as water stress became more severe. The stem and leaf dry matter followed the same trend as forage yield in response to water stress and sowing density. The leaf/stem ratio increased as sowing density increased.

Also, in another study on chickpea, deficit irrigation was induced by reducing the volume of water in each consecutive irrigation. In this study, conducted by Chaichi et al. (2004), five chickpea accessions were treated with different irrigation-gradient systems during the generative growth stage. The irrigation-gradient treatments were 5, 10, 15 and 20 percent reductions of water supply compared to control (moisture kept at field capacity throughout the experimental period) applied at two-week intervals. Irrigation treatments started at the commencement of flowering and finished when plants reached physiological maturity. The volume of irrigation water applied at every-other-day intervals was determined from the soil texture and the soil moisture curve based on a preliminary experiment, and was 300 ml. The irrigation treatments were: 1: Control: soil moisture kept at the field capacity level (±5%) throughout the experimental period by applying 300 ml of water every other day; 2: Irrigation with 5% reduction of water supply compared to control in each two-week interval from flowering commencement to physiological maturity; 3: Irrigation with 10% reduction of water supply compared to control in each two-week interval; 4: Irrigation with 15% reduction of water supply compared to control in each two-week interval; 5: Irrigation with 20% reduction of water supply compared to control in each two-week interval, all from flowering commencement to physiological maturity. Chickpea accessions were sown on March 6, 2001 outside the greenhouse and were irrigated normally until the commencement of flowering. On May 10, 2001 the pots were transferred to a controlled greenhouse and the irrigation treatments were applied. Temperature and humidity were kept constant (temperature 23 ± 2 °C and humidity 65 ± 5%).
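The irrigation-gradient schedule just described can be sketched numerically. The chapter states a 300 ml control applied every other day, with 5-20% reductions per two-week interval from flowering to maturity; whether the reduction compounds from one interval to the next is my assumption, and the flowering-to-maturity span is likewise assumed, so treat the printed volumes as illustrative only.

```python
# Hedged sketch of the chickpea irrigation-gradient treatments. The 300 ml
# control and the 5-20% per-interval reductions come from the text; the
# compounding of the reduction across intervals and the number of intervals
# are assumptions for illustration.
CONTROL_ML = 300          # every-other-day irrigation volume at field capacity
INTERVAL_DAYS = 14        # length of one gradient step
N_INTERVALS = 5           # assumed flowering-to-maturity span (~10 weeks)

def schedule(step_reduction):
    """Per-irrigation volume (ml) in each successive two-week interval,
    assuming the percentage reduction is applied cumulatively."""
    return [round(CONTROL_ML * (1 - step_reduction) ** i, 1)
            for i in range(1, N_INTERVALS + 1)]

for pct in (0.05, 0.10, 0.15, 0.20):
    print(f"{int(pct*100):2d}% gradient:", schedule(pct), "ml per irrigation")
```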
Seed production per plant was significantly (P < 0.05) affected by both chickpea genotype and the interaction of irrigation system × chickpea genotype. Based on mean seed production per plant, the chickpea genotypes could be classified into three categories: high yielding accessions (4488 and 4283), medium yielding accessions (5132 and 4348) and a low yielding accession (5436). The medium and low yielding accessions produced 18 and 45 percent less seed yield per plant than the high yielding ones, respectively. At irrigation gradients of 5 and 10% there was 39% less seed production, and at irrigation gradients of 15 and 20% there was a 54% reduction compared to control. The non-significant difference in seed production between the 15 and 20% irrigation systems indicates that the chickpea accessions have a relative tolerance to drought stress and can produce an acceptable minimum yield under unfavorable moisture conditions. Accession No. 4283 was the best seed producer under the control treatment; however, it showed severe sensitivity to water stress, especially under the 20% irrigation system, where it produced the least seed among the chickpea genotypes. Accession No. 4488 not only had the highest mean seed production (over all irrigation systems) among all chickpea accessions, it also had fairly stable seed production under all irrigation systems. By producing bigger seeds with a lower number per pod, and more pods per plant, accession No. 4488 was the best seed producer among the genotypes. Its lower number of branches and smaller leaf area ultimately reduced its evapotranspiration under stressed conditions. Accession No. 4488 was followed by No. 5132, which despite lower mean seed production had better stability across all irrigation systems. This genotype followed the same vegetative and generative growth pattern as accession No. 4488.

Accessions No. 4283 and 4488 produced the most biomass and seed yield (respectively) averaged over all irrigation treatments. Accession No. 4283 showed a severe reaction to the irrigation gradient compared to the other accessions, while accession No. 4488 was more stable in biomass and seed production across all irrigation gradients.

In heavy-textured soils (clay soils) with high water holding capacity, irrigation intervals should be scheduled so that they are lengthened while the plant still does not encounter severe drought stress. In heavy soils, deficit irrigation is also possible by reducing the water amount at each irrigation if the irrigation intervals are kept constant. In both methods, water consumption has to be less than under normal conditions per unit of farm area. Several factors influence the efficiency of deficit irrigation, including land leveling when surface irrigation is applied and the possibility of conveying water in a short time so that it can be distributed uniformly in the field.
In a study performed by Heidari Zooleh et al. (2011) on foxtail millet, alternate irrigation systems with different intervals were used in a pot experiment. The treatments consisted of different irrigation methods and intervals. There were three irrigation intervals: I1: control, irrigated every 2 days; I2: mild water stress, irrigated every 3 days; I3: severe water stress, irrigated every 4 days. There were three methods of water application: conventional irrigation (M1), in which the whole root system was relatively evenly wetted and dried; fixed irrigation (M2), in which water was always applied to one part of the root system during the whole experimental period; and alternate irrigation (M3), in which watering alternated between the two halves of the root system of the same pot, with the watered and dried halves exchanged at each irrigation interval. Irrigation intervals were determined according to factors such as greenhouse temperature and humidity. At each irrigation event, enough water was allowed to be absorbed by the soil in each pot, and any excess water was allowed to drain. The pots were weighed before and after each irrigation event to determine the water consumption of the plant in each pot. They found that I1 had the highest dry forage yield; I2 did not differ significantly from I1, but I3 showed a significant reduction in dry forage yield compared with I1. For example, under conventional irrigation, I2 and I3 had dry biomass reductions of 5% and 34% compared with I1, respectively. Less water was used by M2I3 and M3I3 compared with M1I3, but dry forage yields were not affected. In addition, less water was used by M2I2 and M3I2 compared with M1I2, but dry forage yields were not affected. The most important point is that M2I2 significantly reduced dry forage yield compared with M3I1, while M3I2 did not show a significant reduction compared with M1I1, M2I1 and M3I1. These results suggest that alternate irrigation of the root system is the best among the irrigation methods tested. Also, there was a significant difference between M2I3 and M1I1 in terms of WUE, while the differences among the other treatments were not significant; M2I3 had a WUE increase of 40% compared with M1I1. There was a positive and significant correlation between WUE and the leaf-to-stem ratio. By increasing the irrigation interval, water consumption was reduced, as was evident for I2 under fixed and alternate irrigation. The reductions in water consumption, but not in biomass, with fixed and alternate irrigation compared with the conventional irrigation method suggest that these two methods can be used for saving soil water. This is especially so for alternate irrigation under mild water stress (M3I2), which did not reduce forage dry weight compared with M3I1. Under an irrigation interval of 3 days, fixed and alternate irrigation used 29% and 20% less water than conventional irrigation, respectively. There was a positive and significant correlation between water consumption and fresh forage yield, dry forage yield, plant height, leaf area, leaf dry weight, leaf relative water content (sampling stages 1 and 2), root dry weight, root volume, root surface area and root length, while there was a negative and significant correlation between water consumption and the leaf-to-stem ratio and specific leaf weight (SLW). Overall, their results showed that fresh and dry forage
yields were reduced by increasing the irrigation interval. Under conventional irrigation, irrigation intervals of 3 and 4 days gave dry biomass reductions of 5% and 34%, respectively, compared with the 2-day interval. Under irrigation intervals of 3 and 4 days, less water was used by the alternate and fixed irrigation methods than by conventional irrigation, but plant growth in terms of dry biomass, plant height, leaf-to-stem ratio, specific leaf weight, leaf area, root dry weight, root volume, root surface area and root length was not affected. Under the 3-day irrigation interval, fixed and alternate irrigation used 29% and 20% less water than conventional irrigation, respectively. Water stress increased specific leaf weight but reduced leaf area, leaf dry weight and leaf relative water content. Root growth was less sensitive to water stress than shoot growth. Under mild water stress, alternate irrigation performed better than fixed irrigation relative to all irrigation methods without water stress, so the authors suggested using alternate irrigation under mild water stress to achieve acceptable yield along with efficient use of water. In another study, deficit irrigation was applied to pearl millet (Pennisetum americanum L.) by reducing the water amount at each irrigation and the number of irrigations (Rostamza et al., 2011). The irrigation treatments were 40%, 60%, 80% and 100% depletion of available soil water (I40, I60, I80 and I100, respectively). The results indicated that water stress affected total dry matter (TDM), leaf area index (LAI), water use efficiency (WUE) and nitrogen utilization efficiency (NUE). The highest TDM of 21.45 t/ha was observed at I40. Furthermore, NUE and LAI were higher at I40. WUE increased as water depletion increased and reached a maximum of 3.44 kg DM m⁻³ under severe stress. Regarding forage quality, TDN% reached its highest value of 54.7% in the non-stressed water treatment, whereas CP% increased with soil water depletion and with higher N fertilizer application. The highest profit was observed when more water and N fertilizer were applied. They concluded that pearl millet can be cultivated in semi-arid areas with acceptable forage yield while saving irrigation water compared to traditional practices and reducing the nitrogen supply.

Suitable crops for water management under deficit irrigation

Crop selection has special importance in this method. As a general rule, crops whose fresh yields are consumed are not suitable for deficit irrigation; summer crops including sugarbeet, potato and some forage crops and vegetables are therefore not suitable. Small grains, including wheat, barley and triticale, and drought-tolerant oilseeds, especially safflower and canola, are important crops for which deficit irrigation is feasible, and among industrial crops cotton can be indicated (English, 1990). However, it is necessary to ensure that deficit irrigation does not induce drought stress, especially at the pod setting stage.

Environmental conditions and deficit irrigation

Identification of environmental conditions is of great importance for applying deficit irrigation; some of these conditions are listed in the following. Soil: soil texture and structure, along with topography, have a determinant role in applying deficit irrigation. In relatively light soils, applying deficit irrigation is not as easy as in heavy soils. Likewise, in soils without enough organic matter this method is not applicable, due to low water holding capacity.
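The depletion-based regimes used for pearl millet above (I40 to I100), together with the role of soil water holding capacity just noted, can be made concrete with a short sketch. The field-capacity and wilting-point water contents and the root-zone depth are illustrative assumptions, not values from the studies cited; the sketch only shows how the allowed depletion fraction translates into an irrigation trigger.

```python
# A minimal sketch of scheduling irrigation from "percent depletion of
# available soil water". Soil water contents and root-zone depth are assumed
# placeholder values, not data from the cited experiments.
THETA_FC = 0.32      # volumetric water content at field capacity (m3/m3), assumed
THETA_WP = 0.17      # volumetric water content at permanent wilting point, assumed
ROOT_DEPTH_MM = 600  # effective root-zone depth (mm), assumed

available_water_mm = (THETA_FC - THETA_WP) * ROOT_DEPTH_MM   # total plant-available water

def irrigation_trigger(depletion_fraction):
    """Soil-water deficit (mm) at which irrigation is applied for a given
    allowed depletion of available soil water (e.g. 0.4 for the I40 regime)."""
    return depletion_fraction * available_water_mm

for label, frac in [("I40", 0.4), ("I60", 0.6), ("I80", 0.8), ("I100", 1.0)]:
    print(f"{label}: irrigate after {irrigation_trigger(frac):.0f} mm of the "
          f"{available_water_mm:.0f} mm of available water has been depleted")
```

A light sandy soil would have a much smaller available-water store for the same root depth, which is why, as noted above, deficit irrigation is harder to manage safely in light-textured soils.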
Pressurized irrigation equipment becomes the most important factor when the farmland is not leveled. In saline soils, where water deficiency intensifies the osmotic potential, the selection of the irrigation method and the type of crop have special importance.

Weather conditions

Drought stress is intensified by warm weather. Maleki Farahani et al. (2010b) found that under deficit irrigation the barley 1000-seed weight decreased by 12%, whereas in a year with a fairly higher temperature during grain filling it decreased by 35%; thus applying deficit irrigation is more successful in autumn-winter crops than in summer crops. Sanjani et al. (2008) found that the yields of cowpea and sorghum decreased by about 50% in an additive intercropping system of grain sorghum and cowpea under limited irrigation. The limited irrigation (moisture stress) treatments consisted of IR1: normal weekly irrigation (control); IR2: moderate moisture stress during vegetative and generative growth; IR3: moderate moisture stress during vegetative growth and severe stress during generative growth; IR4: severe moisture stress during vegetative growth and moderate stress during generative growth. Also, Soltani et al. (2007) evaluated 11 new corn hybrids under deficit irrigation by applying different amounts of water, with irrigation after 70, 100 and 130 mm of evaporation from a Class A evaporation pan. Their findings revealed that all hybrids produced significantly less yield after medium or severe water stress, as the average yield over the 11 hybrids was 7.5, 5.4 and 4.9 t/ha in the 70, 100 and 130 mm treatments, respectively. However, corn seed inoculation with phosphate-solubilizing microorganisms (arbuscular mycorrhiza and Pseudomonas fluorescens) showed satisfying results when applied along with the above three irrigation levels (70, 100 and 130 mm) (Ehteshami et al., 2007). They stated that phosphate-solubilizing microorganisms can interact positively in promoting plant growth as well as P uptake in corn plants, leading to improved plant tolerance under deficit irrigation systems. Summer farming will be successful if the temperature does not rise above the optimum required by the plant. In tropical climates, the upward movement of salts caused by soil water evaporation may intensify salinity and drought stress after deficit irrigation is applied. As a general recommendation, this method is more successful in autumn-winter crops than in summer crops because salts are washed downward, evapotranspiration is lower and precipitation is higher.

Crop growth stage

Success in applying deficit irrigation is highly dependent on avoiding the coincidence of sensitive growth stages with drought stress (Kirda, 2000). Plant growth and development stages in which important yield components are determined should not encounter drought stress. For example, spikelet differentiation and anthesis have an important role in wheat yield; therefore, for wheat cultivation, deficit irrigation should be arranged so as to avoid drought stress in both of these stages (English and Nakamura, 1989; Ghodsi et al., 2005; Ghodsi et al., 2007). Irrigation frequency and irrigation timing should be regulated based on the crop growth stages and their sensitivity to drought stress. For example, it is suggested to perform two light irrigations during grain filling of wheat, even if they do not produce optimal moisture conditions.
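Several of the experiments described above schedule irrigation from accumulated Class A pan evaporation (irrigating after 70, 100 or 130 mm). The short sketch below illustrates that bookkeeping; the daily evaporation series and the 60-day window are synthetic assumptions, so the printed schedules are purely illustrative.

```python
# Sketch of cumulative pan-evaporation scheduling: irrigate when the running
# sum of Class A pan evaporation reaches a threshold, then reset the sum.
# The daily evaporation values are synthetic.
import random

random.seed(1)
daily_evap = [random.uniform(5.0, 9.0) for _ in range(60)]  # mm/day, assumed

def irrigation_days(threshold_mm, evap_series):
    """Return the days on which irrigation is triggered for a given
    accumulated-evaporation threshold."""
    days, running = [], 0.0
    for day, e in enumerate(evap_series, start=1):
        running += e
        if running >= threshold_mm:
            days.append(day)
            running = 0.0
    return days

for thr in (70, 100, 130):
    d = irrigation_days(thr, daily_evap)
    print(f"IR{thr}: {len(d)} irrigations in 60 days, on days {d}")
```

Raising the threshold from 70 to 130 mm reduces the number of irrigations in the same period, which is exactly how the IR100 and IR130 treatments impose progressively stronger water stress.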
There is a need to determine the detrimental effects of water stress on crops when limited irrigation is applied at different growth stages. Several experiments regarding deficit irrigation have been performed on crops such as wheat, turnip and sorghum. Ghodsi et al. (2007) performed a field experiment on different bread wheat varieties to find the growth stages most critical to water stress. They conducted the experiment at the Torogh Agricultural Research Station (Mashhad, Iran) in the 2000/01 and 2001/02 cropping seasons, using a split-plot design based on a randomized complete block design with 3 replications. Main plots were assigned to 7 levels of water stress: D1, full irrigation; D2, cessation of watering from the one-leaf stage to floral initiation; and, in the other treatments, cessation of watering under a rain shelter: D3, one-leaf stage to floral initiation; D4, floral initiation to early stem elongation; D5, early stem elongation to emergence of the flag leaf; D6, emergence of the flag leaf to anthesis; D7, anthesis to late grain filling (soft dough). Sub-plots were assigned to four bread wheat cultivars: Roshan, Ghods, Marvdasht and Chamran. Results of the combined analysis of variance showed that biological yield, grain yield, yield components, harvest index and other traits were significantly affected by the water stress treatments. Under the D5, D6 and D7 treatments, grain yield decreased by 36.7, 22.8 and 45.6% compared to D1, respectively. There were also significant differences between genotypes for yield and yield components. Significant correlation coefficients were found between grain yield and the number of spikes per m², the number of grains per spike, harvest index, spike weight at anthesis and seed set percentage. Under water stress conditions, grain yield was most affected by the number of grains per unit area. The results showed that the susceptibility of the developmental stages of bread wheat to water stress differed. Exposure to water stress at any developmental stage led to a decrease in yield. The grain filling (D7) and stem elongation (D5) stages were the most critical under water stress conditions. The effect of water stress in the early pre-anthesis (D6) and tillering (D3) stages was also considerable. The results of this study illustrated that imposing moisture stress during critical growth stages (commencement of stem elongation, anthesis and grain filling) would significantly decrease grain yield, whereas imposing moisture stress during the initial growth stages would not have such a significant effect on grain yield. Furthermore, wheat cultivars reacted differently to the different moisture stress treatments. The Chamran cultivar had a higher grain yield and was more tolerant to moisture stress during critical growth stages. On the other hand, it was demonstrated that application of the milder moisture stress treatments (D3 and D4) relatively increased water use efficiency (WUE), whereas the severe moisture stress treatments (D5, D6 and D7) decreased WUE. Genetic differences also played a significant role in the variation in WUE among cultivars. The Roshan and Chamran cultivars exhibited the lowest and the highest WUE, respectively. It was also shown that the moisture stress treatments differed in radiation use efficiency (RUE): the D1, D2 and D3 treatments showed the highest RUE, while the lowest RUE belonged to the D5 and D6 treatments.
In another study, conducted by Keshavarz Afshar et al. (2011), the response of forage turnip to water deficit was evaluated. A field trial was conducted at the Research Farm of the College of Agriculture, University of Tehran, in Karaj, Iran (35°56′ N, 50°58′ E), during 2009. The climate of this site is arid to semi-arid, with annual average climate parameters as follows: air temperature 13.5 °C, soil temperature 14.5 °C, and rainfall of 262 mm per year. The soil texture of the experimental field was clay loam (33% sand, 36% silt and 31% clay) with pH = 8.2 and EC = 3.41 dS/m. The organic carbon content of the surface soil layer (0-15 cm) was 1.02%. The soil had no salinity or drainage problems, and the water table was more than 7 m deep. Turnip seeds were planted on March 3rd, 2009. Plant-to-plant spacing was 10 cm and plant rows were 70 cm apart. The sowing depth was 2 cm. The crop was harvested on June 15th, 2009. After elimination of border effects, an area of one square meter was hand harvested in each plot. After harvest, the fresh yields of roots and leaves were measured, and samples were dried in an oven at 70 °C to constant weight to determine dry matter content. Three replicated samples of each treatment were taken for forage quality analysis.

Their results showed that the highest tuber yield of 930.8 kg/ha was produced under the no-water-stress treatment (IRN), while the lowest yield of 307 kg/ha was produced in the control (IR0). The most efficient irrigation regime with regard to tuber production was IR1, giving 59% more tuber dry matter than the control. As the severity of water stress decreased further, at IR2 and IR3, the efficiency of the extra water application followed a decreasing trend.

Under the most severe water stress condition (IR0), the 100% FCh treatment demonstrated the best performance in tuber biomass production (almost fivefold more than control). Under favorable moisture conditions (IRN), application of the integrated fertilizer (50% FCh + FBi) produced the highest tuber yield, which was 18% more than control. At the other irrigation levels, no significant difference between these two treatments, 100% FCh and 50% FCh + FBi, was observed. As the severity of water stress increased, total biomass followed a decreasing trend. The highest biomass production of 3640 kg/ha was achieved under the IRN irrigation regime, nearly fivefold more than the control (IR0). The highest efficiency of biomass production per unit of water was achieved in IR1, in which, with only one irrigation at sowing time, biomass production reached 2091 kg/ha (a 100% increase compared to IR0). In IR2, with an extra irrigation at the tuber formation stage, the added biomass was only 472 kg/ha more than in IR1, showing a much lower efficiency of biomass production per unit of water applied.

The interaction effect of irrigation regimes and P fertilizers on the total biomass yield of turnip was significant (p < 0.01). In the IR0 treatment, application of 100% FCh and 50% FCh + FBi increased biomass yield compared to control. Except for IR0, in the other irrigation regimes the application of the FBi treatment had no significant effect on the biomass production of turnip.
The effects of irrigation regimes and P fertilizers on the tuber protein yield of turnip were significant (p < 0.01). Water stress caused a significant decrease in crude protein yield. The highest crude protein yield (129.4 kg/ha) was obtained with IRN, while the lowest yield (48.6 kg/ha) was obtained with IR0 (nearly a threefold difference). With one irrigation at sowing time (IR1), the crude protein yield increased markedly (52% increase compared to the control). However, the extra irrigation at the tuber formation stage (IR2) and the third irrigation at the stem elongation stage (IR3) were less efficient in increasing the protein yield of turnip tuber.

As the severity of water stress decreased, the digestibility of tuber dry matter followed an increasing trend. The lowest DMD (62.9%) was obtained with IR0, and the highest values (66.9% and 68.5%) were achieved with IR3 and IRN, respectively. Application of phosphorus chemical fertilizer (100% FCh) had a positive effect on the dry matter digestibility of turnip tuber, increasing it by more than 10 percent compared to control; the other fertilizers had no significant effect on this trait.

With decreasing severity of water stress, the ADF percentage of turnip tuber followed a decreasing trend. The highest tuber ADF was observed in IR0 (30%) and the lowest in IRN (23.4%).

The interaction effect of irrigation regimes and phosphorus fertilizers on the ADF percentage of turnip tuber was significant (p < 0.01). Under the most severe water stress condition (IR0), application of the sole biofertilizer (FBi) and the integrated fertilizer (50% FCh + FBi) increased tuber ADF compared to control. However, in the other irrigation regimes, application of 100% FCh and 50% FCh + FBi resulted in lower ADF percentages than control. Overall, in all irrigation regimes, the chemical P fertilizer had the strongest effect in decreasing the ADF of turnip tuber. Likewise, as the severity of water stress decreased, the tuber ME followed an increasing trend; the ME was 8.7 MJ/kg dry matter in IR0 and 9.6 MJ/kg dry matter in IRN.

Finally, they concluded that turnip tuber yield was adversely affected by water stress and that the crop is very sensitive to water stress at the germination, establishment and early growth stages.
To identify the growth stages most sensitive to water deficit, the following study was performed by Khalili et al. (2006) on the grain sorghum variety Kimia. The experiment was initiated at the Research Farm of the College of Agriculture, University of Tehran, located in Karaj, Iran, during summer 2004. The main plots were allocated to five different irrigation regimes that imposed drought stress on sorghum (soil moisture approached the wilting point before the next irrigation) at different vegetative and generative growth stages. The irrigation regimes comprised: 1) Full irrigation (IR1) (control): the plots in this treatment were irrigated at weekly intervals up to the end of the growing period. 2) Moderate drought stress in both vegetative and generative stages (IR2): the plots allocated to this treatment were irrigated weekly until the plants were well established at the 6- to 8-leaf growth stage, and then irrigation was ceased until the 10- to 12-leaf stage, at which the plots received irrigation. Irrigation was then ceased again until the early flowering stage (5 to 10% flowering), at which the plants received another irrigation. The next irrigation was applied when the plants were in the early milky grain stage, and thereafter no irrigation was applied until the plants reached physiological maturity. 3) Moderate drought stress in the vegetative stage (after the 6-8 leaf stage) and severe drought stress in the generative stage (IR3): the irrigation treatment was identical to IR2 up to the early flowering stage, and then no irrigation was applied until the plants reached physiological maturity. 4) Severe drought stress in the vegetative stage and moderate stress in the generative stage (IR4): in the vegetative growth stage the irrigation treatment was similar to IR2, except that no irrigation was applied at the 10- to 12-leaf growth stage; in the generative part of plant growth, however, the irrigation treatment followed exactly the same pattern as IR2. 5) Severe drought stress in both vegetative and generative growth stages (IR5): the irrigation treatment followed the same pattern as IR4 in the vegetative stage and as IR3 in the generative stage.

The statistical analysis of the data showed a significant difference (p < 0.01) in grain yield production among the different irrigation regimes. The highest grain yield of 5871 kg/ha was obtained from the control plots, while the lowest grain yield of 500 kg/ha (more than a tenfold reduction) was produced under severe drought stress in both the vegetative and generative growth stages. As the drought stress in the generative stage of the plant increased, grain yield followed a decreasing trend. In the severe drought stress regime in the generative stage (IR3), reductions in kernel weight and one-thousand-kernel weight could account for the grain yield decrease. This shows the importance of water availability in the generative stage of plant growth (especially the grain filling stage). The severe reduction of grain yield in the irrigation regimes IR2, IR3 and IR5 indicated the plant's sensitivity to drought stress at different phenological stages. Grain production decreased by over 50% in these treatments compared to the control, whereas in the IR4 treatment this reduction was only about 30%.
The results of this experiment indicate the importance of irrigation at the early flowering and milky grain stages of plant growth, which could produce not only a proper grain yield but also significant water savings compared to control (full irrigation). The number of irrigations in the IR4 treatment was reduced by 50% (from 18 to 9) compared to the control, which is very important in dry areas from ecological and economic points of view. The statistical evaluations showed a significant positive correlation of kernel weight, kernel length, one-thousand-kernel weight, biological yield and harvest index with grain yield. Drought stress, especially in the generative growth stages, caused a severe decrease in grain yield, which could be due to decreases in one-thousand-kernel weight and kernel length and, consequently, in the number of grains per kernel. The lower number of grains per kernel may also be due to disordered pollination and, ultimately, a decrease in the number of fertilized flowers. By applying regular irrigation to sorghum from germination to the plant establishment stage (7-8 leaves) and then limited irrigations only at the 10-12 leaf, early flowering and milky grain stages, the number of irrigations is decreased from 18 to 9. Despite the 30% grain yield reduction in this system, it is still beneficial from ecological and economic points of view for arid environments. Thus, Khalili et al. (2008) suggested that, by imposing severe moisture stress in the vegetative stage while providing the minimum water requirements in the generative growth stages of grain sorghum, the water use efficiency of the plant will be improved and a reasonable grain yield is achievable.

Economical aspects of deficit irrigation

Benefits: The beneficial effects of deficit irrigation can be evaluated from different economic and social standpoints. Research has indicated that, for roughly equal energy use under normal or deficit irrigation, the amount of production per given unit of water is usually higher under deficit irrigation than under normal irrigation. With water saving and the possibility of farmland expansion, the efficiency of equipment use increases, so labor and machinery are used more efficiently (English and Raja, 1996). Farmer income can also increase by cultivating high-demand vegetables and summer crops with the water saved through deficit irrigation. Furthermore, applying deficit irrigation by reducing the number of irrigations has been shown to enhance the quality of the subsequently produced seeds: seeds produced under deficit irrigation germinated earlier and had a greater germination percentage under drought and salinity stress, induced either by polyethylene glycol or by NaCl, compared to seeds produced under normal irrigation (Maleki Farahani et al., 2010b). Moreover, grain nutritional quality was enhanced after implementing deficit irrigation (Maleki Farahani et al., 2011): deficit irrigation increased barley N content by 12%, and Zn and Mn by 27% and 7%, respectively, compared to control, and a 4% increase was also observed in P concentration, an important element for seed germination. At the macro level, the increase in agricultural production and in the efficiency of labor and machinery resulting from the application of deficit irrigation can be counted as benefits.
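The economic argument above (more production per unit of water, with the saved water used to expand the irrigated area; English and Raja, 1996) can be illustrated with a toy calculation. All prices, costs, yields and water amounts below are hypothetical and serve only to show how total farm income can rise under deficit irrigation even though yield per hectare falls.

```python
# Illustrative water-limited farm budget (all numbers are hypothetical).
# When water, not land, limits production, reducing water per hectare frees
# water to irrigate additional area, which can raise total income.
WATER_AVAILABLE = 500_000      # m3 of water available to the farm per season
PRICE = 200                    # income per tonne of crop
FIXED_COST_PER_HA = 900        # land preparation, sowing, weed control, harvest

def farm_income(water_per_ha, yield_per_ha):
    area = WATER_AVAILABLE / water_per_ha            # ha that can be irrigated
    return area, area * (yield_per_ha * PRICE - FIXED_COST_PER_HA)

full_area, full_income = farm_income(water_per_ha=10_000, yield_per_ha=7.0)
deficit_area, deficit_income = farm_income(water_per_ha=7_000, yield_per_ha=6.5)

print(f"Full irrigation   : {full_area:5.1f} ha, income {full_income:10.0f}")
print(f"Deficit irrigation: {deficit_area:5.1f} ha, income {deficit_income:10.0f}")
```

With these assumed figures the deficit strategy irrigates roughly 40% more land and earns a higher total income, which is the mechanism behind the benefits listed above; with different prices or steeper yield losses the comparison can of course reverse, which is why planning is emphasized.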
Disadvantages: Lack of knowledge about the sensitive plant growth stages and insufficient planning for water use and distribution can not only negate the benefits of deficit irrigation but also cause damage to farmers. Drought stress at any critical growth stage will cause irrecoverable damage to the crop (English, 1990). Deficit irrigation is not the same as complementary irrigation. In complementary irrigation, which is usually practised in dryland farming systems, one or two irrigations are applied at critical growth stages in which rain does not occur. In deficit irrigation, by contrast, the farmer's approach should be based on a relative reduction of water within an irrigated farming system. If the timing and amount of water in this method are not determined properly, the crop will suffer irrecoverable damage. Proper planning is therefore emphasized in this method to prevent probable damage.

The role of policymakers in the development of deficit irrigation

The development and recommendation of new methods will not have favorable results if they are not based on evaluation and planning. At first sight, deficit irrigation will not be welcomed by farmers because of the relative reduction in yield. In agricultural farming systems managed with deficit irrigation, the net income is lower than under normal irrigation because the expenses for land preparation and weed control are equal in both systems. Generally, subsidies and farmer support are therefore unavoidable in any policy for deficit irrigation. Subsidization may be indirect, such as providing inputs like chemicals to the farmers who manage their farms with the deficit irrigation method; moreover, water can be made available at a lower price to these farmers to compensate for the yield reduction. In years in which water shortage may occur because of low precipitation, the development of deficit irrigation is a priority. Repeated application of deficit irrigation over a long period may become established in the farming culture. The media play a key role in explaining deficit irrigation so that it is accepted by farmers. Planning for the better use of water resources is inevitable.

Conclusion

Deficit irrigation methods are irrigation methods that increase the yield obtained per given unit of water (water productivity). Besides higher water productivity, crop quality can be improved, with greater tolerance to drought and salt stress as well as better nutritional quality. These methods perform better on large land areas and in years with lower precipitation, when water is limited. In general, deficit irrigation can be applied either with a fixed irrigation frequency and a reduced amount of water at each irrigation, or with a reduced irrigation frequency and a fixed amount of water at each irrigation. In both approaches the basic principle is to reduce water usage compared to normal irrigation, while ensuring that none of the critical plant growth stages encounters drought stress. Soil texture, weather conditions, the type and growth stage of the plant and the available resources play an important role in selecting and applying a deficit irrigation method. Governmental support through subsidies can play an important role in the development of deficit irrigation.

Table 1. Irrigation schedule and volume of irrigation water for chickpea accessions in 2001
8,531.6
2012-03-28T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Environmental Science" ]
Cancer Treatment With the Ketogenic Diet: A Systematic Review and Meta-analysis of Animal Studies

Background: The ketogenic diet (KD) has been reported to play an important role in the development of cancer by an abundance of pre-clinical experiments; however, their conclusions have been controversial. We therefore aimed to perform a systematic review and meta-analysis of animal studies evaluating the effects of KD on cancer. Methods: Relevant studies were collected by searching PubMed, Embase, and Web of Science. Outcome measures comprised tumor weight, tumor volume, and survival time. Meta-analysis was performed using the random-effects model according to heterogeneity. Results: The search resulted in 1,254 references, of which 38 were included in the review and 17 in the meta-analysis. Pooled results indicated that KD supplementation significantly prolonged survival time [standardized mean difference (SMD) = 1.76, 95% CI (0.58, 2.94), p = 0.003], and reduced tumor weight [SMD = −2.459, 95% CI (−4.188, −0.730), p = 0.027] and tumor volume [SMD = −0.759, 95% CI (−1.349, −0.168), p = 0.012]. Meta-regression and subgroup analysis results suggested that KD supplementation at a ratio of 4:1 was associated with remarkable prolongation of survival time in animals, for a limited set of tumor types. Conclusion: In summary, the pre-clinical evidence points toward an overall anti-tumor effect of the KD in the animal studies currently available, which cover a limited range of tumor types.

INTRODUCTION

Cancer is one of the major health problems worldwide and is gravely harmful to human health (1). Recently, it has been found that tumor metabolic reprogramming is a central feature of tumors (2). The Warburg effect, as the core of tumor metabolic reprogramming, indicates that tumor cells tend to undergo aerobic glycolysis to metabolize glucose (3). Thus, reducing the glucose supply and selectively cutting off the energy source of tumor cells could inhibit tumor growth (4). The ketogenic diet (KD), characterized as a high-fat, low-carbohydrate, adequate-protein diet, can meet such a demand. Therefore, ketogenic therapy for cancer has emerged and become an area of wide discussion in tumor research in recent years. A great number of pre-clinical studies have suggested that KD is a potent anticancer therapy when used separately or as an adjuvant (5). It has been reported that KD not only slowed tumor growth and delayed the initiation of tumor development, but also prolonged survival time (6,7). In addition, some studies have demonstrated that KD could increase the sensitivity of tumor cells to classic chemotherapy and radiotherapy when used in combination (8)(9)(10). Furthermore, KD has been reported to enhance the efficacy of targeted therapy with PI3K inhibitors and to overcome drug resistance in several tumor models (11), as well as to reduce metastatic potential (12,13). On the contrary, pro-tumor effects or severe side effects have been found in certain cancer models. For instance, such effects have been described in a rat model of tuberous sclerosis complex when investigating the long-term effects of KD treatment on kidney cancer (14), while another study observed that tumor growth significantly increased with KD supplementation in a mouse model of BRAF V600E-positive melanoma (15). Therefore, it remains controversial whether KD has anti-tumor effects in pre-clinical studies.
To date, clinical evidence from randomized controlled clinical trials is still lacking, and the available evidence comes mostly from case reports and pilot/feasibility studies. To better understand the anti-tumor effects of KD and to pave the way for further prospective clinical studies, we performed a systematic review and meta-analysis of the currently available data on animal tumor models treated with KD alone or in combination with classic therapy and/or caloric restriction.

Literature Search

A comprehensive, computerized literature search was performed in PubMed, Embase, and Web of Science up to April 2020 using the following key words: "ketogenic", "caloric restriction", paired with the following: "glioma", "glioblastoma", "tumor", "cancer", "neuroblastoma", "carcinoma" (see Supplementary Table 1). References of the identified publications were then reviewed to further identify potentially relevant articles.

Study Selection and Inclusion Criteria

Studies were included in our article if the following criteria were met: (1) published as full-length articles in English; (2) reported as animal studies; (3) the exposure of interest was KD alone or in combination with classic therapy and/or caloric restriction; and (4) reported data on at least one of the following: survival time, tumor volume, or tumor weight. The following additional exclusion criteria were used for full-text screening: (1) full text not available, (2) double publication, (3) conference abstracts, (4) reviews, (5) editorials, and (6) comments.

Data Extraction and Quality Assessment

Literature search, data extraction, and quality assessment were completed independently by two authors (J.L. and H.Y.Z.) according to the inclusion criteria. In cases of disagreement between the authors, consensus was reached. The following information was extracted: the first author's name, year of publication, tumor type, animal species, cell strain, the ketogenic ratio, the composition of the KD, whether the KD was accompanied by caloric restriction, study groups, number of animals per group, survival time, tumor weight, tumor volume, the levels of glucose and β-hydroxybutyrate, the changes in body weight, and the conclusions. Outcome measures, including tumor weight, tumor volume, and survival time, were included in the meta-analysis. The mean value, standard deviation (SD), and number of animals per group were extracted. For studies with multiple intervention groups [e.g., KD and KD + chemotherapy (CT)], the shared control group was split into 2 or more groups of smaller sample size to overcome unit-of-analysis errors, and these multiple comparisons were included in the meta-analysis according to the instructions of the Cochrane Handbook.

Data Synthesis and Statistical Analysis

Given that various measurements were applied in the included studies, the pooled effects are presented as standardized mean differences (SMD) with 95% confidence intervals (CI). Cochran's Q-test was performed to assess inter-study heterogeneity, and significant heterogeneity was considered when the p-value was < 0.10. The I² statistic was also examined, and an I² value > 50% indicated significant heterogeneity among the studies. A random-effects or fixed-effects model was used according to the heterogeneity. To explore the potential causes of heterogeneity, meta-regression analysis and pre-defined subgroup analyses were performed. Furthermore, potential publication bias was assessed using the Egger regression asymmetry test and funnel plots.
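As a concrete illustration of the pooling described in this section, the sketch below computes a DerSimonian-Laird random-effects estimate of the SMD together with Cochran's Q and I². The per-study SMDs and standard errors are invented for illustration only and are not the values extracted in this review.

```python
# Minimal random-effects (DerSimonian-Laird) pooling of standardized mean
# differences, with Cochran's Q and I^2 for heterogeneity. Study values are
# synthetic placeholders.
import math

smd = [1.2, 2.5, 0.8, 3.1, 1.6]           # per-study standardized mean differences
se  = [0.40, 0.55, 0.35, 0.70, 0.45]       # their standard errors

w_fixed = [1 / s**2 for s in se]                        # inverse-variance weights
pooled_fixed = sum(w * d for w, d in zip(w_fixed, smd)) / sum(w_fixed)

# Cochran's Q and I^2
q = sum(w * (d - pooled_fixed) ** 2 for w, d in zip(w_fixed, smd))
df = len(smd) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Between-study variance tau^2 (DerSimonian-Laird), then random-effects weights
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)
w_rand = [1 / (s**2 + tau2) for s in se]
pooled_rand = sum(w * d for w, d in zip(w_rand, smd)) / sum(w_rand)
se_rand = math.sqrt(1 / sum(w_rand))

print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")
print(f"Random-effects SMD = {pooled_rand:.2f} "
      f"[{pooled_rand - 1.96*se_rand:.2f}, {pooled_rand + 1.96*se_rand:.2f}]")
```

When I² is high, as it is for the survival-time outcome in this review, the random-effects weights are flatter than the fixed-effects ones and the confidence interval of the pooled SMD widens accordingly.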
All meta-analyses and statistical analyses were performed using the Stata software (version 12.0; Stata Corporation, College Station, TX, USA).

Description of the Included Studies

The comprehensive search strategy on the effects of KD on tumors resulted in 1,254 records. After removal of duplicates, 673 studies remained. After title and abstract screening, the full texts of 110 studies were screened. Ultimately, 38 studies were included in our systematic review, of which 17 studies were included in the meta-analysis (Figure 1). The characteristics of all included studies are described in Table 1.

Effects of KD on Survival Time in Animal Models

There were a total of 12 studies investigating the effects of KD supplementation on survival time (Table 2). Significant heterogeneity was found among these studies (I² = 91.3%, p = 0.000). Pooled analysis of the overall effects suggested that KD supplementation significantly prolonged survival time [SMD = 1.76, 95% CI (0.58, 2.94), p = 0.003].

Meta-Regression Analysis and Subgroup Analysis

In view of the statistical heterogeneity across the included studies, meta-regression analysis was performed including several pre-defined covariates to explore the potential sources of heterogeneity. The results indicated that the KD ratio was positively related to the effect size [regression coefficient = 1.69, 95% CI (0.86, 2.52), p = 0.02]. Furthermore, animal number was not a significant modifier of the effects of KD supplementation on survival time (p = 0.655). Additionally, a pre-defined subgroup analysis was conducted to observe the influence of study characteristics on the effects of KD supplementation on survival time. First, 3 subgroups were obtained according to KD ratio. As shown in Figure 3, even though all 3 subgroups showed significantly prolonged survival time, the effects of KD ratios of 4 [SMD 2.64 (1.36, 3.93), n = 5] seemed to be larger than those of 3 [SMD 1.06 (0.42, 1.70), n = 3]. In addition, heterogeneity decreased markedly in the subgroup with a KD ratio of 6 (I² = 0.0%), while high heterogeneity was still observed in the subgroups with ratios of 4 (I² = 79.5%) and 3 (I² = 56.6%). Specifically, KD supplementation at a ratio of 4 seemed to be associated with a more remarkable prolongation of survival time in animals (p = 0.001).

Effects of KD on Tumor Weight and Tumor Volume

A total of 6 articles (6,28,29,31,32,37), including 7 studies, reported the effects of KD supplementation on tumor weight in animal models. The overall effects were estimated using a random-effects model because significant heterogeneity was found (I² = 90%, p = 0.000). Significant heterogeneity also existed for tumor volume (6,39,40); meta-regression and subgroup analyses were not performed for these outcomes because of the limited number of studies included.

Publication Bias

Publication bias was assessed for the outcome of overall survival time, since this outcome was analyzed in the highest number of studies. The Egger regression asymmetry test of the 12 studies suggested no significant publication bias for survival time [p = 0.569, 95% CI (−4.23, 7.27), Figure 6].

DISCUSSION

In this meta-analysis, we summarized evidence from 17 published animal studies that investigated the anti-tumor effects of KD supplementation. Consistent with a previous meta-analysis, which reported that unrestricted KD delayed tumor growth in mice (46), our results showed that KD alone or in combination with caloric restriction significantly reduced tumor weight and volume as well as prolonged survival time.
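The Egger regression asymmetry test used above for the survival-time outcome can also be sketched in a few lines: each study's standard normal deviate (SMD divided by its SE) is regressed on its precision (1/SE), and an intercept far from zero suggests funnel-plot asymmetry. The study values below are the same invented numbers as in the pooling sketch, not data from this review.

```python
# Minimal Egger regression asymmetry test on synthetic study values.
import math

smd = [1.2, 2.5, 0.8, 3.1, 1.6]
se  = [0.40, 0.55, 0.35, 0.70, 0.45]

y = [d / s for d, s in zip(smd, se)]      # standard normal deviates
x = [1 / s for s in se]                   # precisions

n = len(y)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_intercept = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
t_stat = intercept / se_intercept

print(f"Egger intercept = {intercept:.2f} (SE {se_intercept:.2f}), t = {t_stat:.2f}")
print("An intercept far from zero would indicate small-study (publication) bias.")
```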
Results of our meta-regression and subgroup analyses suggested that KD supplementation at a ratio of 4 seemed to be associated with remarkable prolongation of survival time in animals, for a limited set of tumor types. The traditional KD consists of a 4:1 ratio of fat to carbohydrate plus protein, with 90, 8, and 2% of calories coming from fat, protein, and carbohydrate, respectively. Alternatives to the traditional KD include the medium-chain triglyceride (MCT)-based KD, the Atkins diet, and a low glycemic index diet (47). In order to enhance the anti-tumor effects of KD, several studies have either increased the proportion of fat, or supplemented the KD with MCTs, omega-3 fatty acids or ketone esters (6,13,30,42,44). For example, Aminzadeh-Gohari et al. found that a KD (8:1) with a fat content of 25% MCTs and 75% long-chain triglycerides (LCTs) produced a stronger anti-tumor effect than one containing only LCTs (42). The reason may be that MCTs are more rapidly absorbed into the bloodstream and oxidized for energy because of their ability to passively diffuse through membranes (48). In addition, MCTs have the unique ability to promote ketone body synthesis in the liver (49). Tisdale et al.'s study indicated that a high-fat KD (2:1) produced a significant reduction in tumor size compared with a normal diet and a low-fat KD (1:1) (28). These results demonstrate that it is important to optimize KD compositions to suppress tumor growth. Overall, KDs have revealed potential anti-tumor effects, which are correlated with restricted glucose and the induction of ketone bodies (e.g., β-hydroxybutyrate) (50). Ketone bodies are suitable energy replacements for normal cells with functional mitochondria, but unsuitable for tumor cells, as tumor cell mitochondrial functions are dysregulated (51). Indeed, most animal tumor models report a decrease in glucose and an increase in ketone bodies (Table 1). On the other hand, KDs are known to have an appetite-suppressing effect, which may contribute to body weight loss (52), while some studies report no significant effect on, or even an increase in, body weight. This discrepancy may be caused by the animal species and growth stage, or by the composition of the KD. Caloric restriction (CR) has been reported to prevent tumorigenesis by decreasing metabolic rate and oxidative damage (53). Morscher et al. found that the growth of neuroblastoma xenografts was significantly reduced by KD (2:1) when combined with CR (40). Another study indicated anti-tumor and antiangiogenic effects in experimental mouse and human brain tumors at a 4:1 KD ratio (16). It has been reported that tumor growth is more strongly correlated with circulating glucose levels than with circulating ketone body levels (51). The reduction in glucose levels following CR largely accounts for why tumors grow minimally on either a restricted KD or restricted high-carbohydrate standard diets. Although CR has shown good anti-tumor effects and the potential to sensitize cancer cells to chemotherapy, it has been considered contraindicated in a range of cancer patients, particularly those with cachexia (5). Thus, more attention is required on optimizing KD compositions to enhance the anti-tumor effects. The efficacy of KD may also be influenced by cancer type or even subtype, genetic background, and tumor-associated syndromes.
A KD with a ratio of 4:1 did not slow the growth of spontaneous medulloblastoma tumors or allograft flank tumors (43), while it was reported to be anti-tumor in other cancer models, including glioblastoma (7) and colon cancer (32). Meanwhile, one study indicated that the anti-neuroblastoma effects of KD were considerably attenuated in SK-N-BE(2) neuroblastoma xenografts, which carry MYCN amplification, a TP53 mutation (p.C135F), and chromosome 1p loss of heterozygosity, compared to SH-SY5Y xenografts, which are TP53 wild-type and non-MYCN amplified (42). Another report indicated that mice bearing renal cell carcinoma xenografts with signs of Stauffer's syndrome experienced dramatic weight loss and liver dysfunction when treated with KD (38). Additionally, Maurer et al. found that a KD did not alter tumor growth or extend the life of mice given an orthotopic injection of LNT-229 glioma cells compared to mice maintained on a standard diet (45). This is in contrast to the study using a rodent KD (17). This discrepancy may be related, in part, to the cell line and/or model system used. Therefore, it is necessary to evaluate the effects of KD in pre-clinical studies for every specific type of tumor before its application to cancer patients. Furthermore, genetic alterations, tumor-associated syndromes, and the anti-tumor mechanisms of KD should also be considered. To date, human data on KD and cancer come mostly from single case reports (54,55) or pilot/feasibility studies (56)(57)(58), which have mostly focused on the safety and tolerability of KD. Only 3 randomized controlled trials are available. Two of them involved ovarian and endometrial cancer and mainly focused on safety, adherence, and mental and physical function (59,60). The third trial evaluated the safety, tolerability, and beneficial effects of KD on body composition, blood parameters, and survival in breast cancer (61), and suggested that chemotherapy combined with KD can improve biochemical parameters, body composition, and overall survival with no substantial side effects in breast cancer patients. Thus, more randomized controlled trials are still needed to explore the benefits of adjuvant KD in specific cancers. Several potential limitations should be addressed in the present meta-analysis. First, we did not have complete access to every full-text paper, resulting in a small number of studies being included in this meta-analysis; results of some of the estimations, such as those for the effects of KD supplementation on tumor weight and tumor volume, should therefore be interpreted with caution. Second, despite the attempts to explore potential causative factors, high heterogeneity was found among the studies. Third, the number of studies included in the subgroup analysis was relatively small.

CONCLUSION

In summary, the pre-clinical evidence points toward an overall anti-tumor effect of the KD in the animal studies currently available, which cover a limited range of tumor types. The efficacy of KD on tumors is influenced by many factors, including cancer type or even subtype, genetic background, cell line and/or model system, the composition of the KD, and tumor-associated syndromes. Therefore, more pre-clinical studies should be performed to elaborate the anti-tumor effect of KD in the future.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS ZD conceived of the study idea. JL and HZ conducted the literature review and performed the data extraction. JL drafted the manuscript. All authors were involved in consensus agreements concerning data discrepancies, involved in revising the article for important intellectual content, interpreting the data, and approved the final version to be published. FUNDING This work was financially supported by the Natural Science Foundation of Hubei Province (No. 2020CFB500).
3,743.6
2021-06-09T00:00:00.000
[ "Medicine", "Biology" ]
Gamma-induced background in the KATRIN main spectrometer

Abstract The KATRIN experiment aims to measure the effective electron antineutrino mass $m_{\overline{\nu}_e}$ with a sensitivity of 0.2 eV/c$^2$ using a gaseous tritium source combined with the MAC-E filter technique. A low background rate is crucial to achieving the proposed sensitivity, and dedicated measurements have been performed to study possible sources of background electrons. In this work, we test the hypothesis that gamma radiation from external radioactive sources significantly increases the rate of background events created in the main spectrometer (MS) and observed in the focal-plane detector. Using detailed simulations of the gamma flux in the experimental hall, combined with a series of experimental tests that artificially increased or decreased the local gamma flux to the MS, we set an upper limit of 0.006 count/s (90% C.L.) from this mechanism. Our results indicate the effectiveness of the electrostatic and magnetic shielding used to block secondary electrons emitted from the inner surface of the MS.

Introduction

The Karlsruhe Tritium Neutrino (KATRIN) experiment has been designed to reach a sensitivity of 0.2 eV/c$^2$ (90% C.L.) on the effective electron antineutrino mass $m_{\overline{\nu}_e}$ [1]. As the successor to the Mainz [2] and Troitsk [3] experiments, KATRIN will precisely measure the energy of electrons produced from tritium β-decay and determine $m_{\overline{\nu}_e}$ by fitting the shape of the β-spectrum near the endpoint energy (18.6 keV). Reaching the sensitivity goal requires a thorough understanding and mitigation of background sources along the entire beamline of the experiment. Background electrons produced in the main spectrometer (MS), the high-resolution MAC-E filter [4-6] that analyzes the energy of the β-particles, are of particular importance. Several sources of background in the MS have already been analyzed in detail, including cosmic-ray muons [7] and radon decays [8]. X-rays and gammas can also contribute to the background and have been previously studied with other MAC-E filter spectrometers. The Mainz spectrometer [9] and the KATRIN pre-spectrometer [10] were both irradiated using an X-ray tube with a peak energy of $E_{\text{X-ray}}$ = 70 keV. These tests showed the effectiveness of the electrostatic and magnetic shielding measures in place, which suppressed the background contribution caused by the X-ray tube by up to a factor of 50.
However, the total background was still strongly elevated while the irradiation took place, indicating that gamma radiation can potentially make a large contribution to the background rate of MAC-E filter spectrometers. In this paper, the effect of environmental gamma radiation on the MS background rate is investigated through a combination of simulation and measurement data, and the effectiveness of the shielding in the MS is demonstrated. The experimental setup of the KATRIN experiment is described in Sect. 2, focusing on the MS and the focal-plane detector (FPD) system. Sect. 3 discusses the background generation due to environmental gammas and gives details about their simulation in the spectrometer hall. In Sect. 4, the simulation results are compared to background measurements under conditions of gamma enhancement and suppression. The contribution of environmental gamma radiation to the KATRIN background rate is derived in Sect. 5, and some final remarks on the MS background are provided in Sect. 6.

Experimental apparatus

The KATRIN experiment is located at the Karlsruhe Institute of Technology (KIT), Campus North, near Karlsruhe, Germany. The approximately 70-m long beamline is shown in Fig. 1. The windowless gaseous tritium source has been designed to produce more than $10^{11}$ β-particles per second [11]. Superconducting magnets adiabatically guide emitted electrons through the differential [12] and cryogenic [13,14] pumping sections, where the tritium flow must be reduced by 14 orders of magnitude [1]. Two MAC-E filters are available to analyze the electron energy. The smaller pre-spectrometer can be used as a pre-filter to reflect low-energy β-particles back toward the source and only allow the highest-energy electrons, close to the endpoint energy, to enter the MS. The precise energy filtering is performed in the MS, which operates with an energy resolution of about 1 eV near the tritium endpoint [1]. Electrons that pass through the MS are measured with the FPD system. Due to their importance in studying the gamma-induced background, the MS and FPD system are discussed in detail in the following subsections.

Main spectrometer

The largest component of the KATRIN beamline is the MS (see Fig. 1). The stainless steel vessel has an inner diameter of 9.8 m at its center and a total length of 23.3 m [1]. The walls of the MS vary in thickness between 32 mm for the central cylindrical region and 25 mm for the conical sections [1]. During standard KATRIN operation, the retarding voltage $U_0$ applied to the MS hull is varied systematically around −18.6 kV in order to analyze electrons with energies close to the tritium endpoint energy. To prevent scattering on residual gas, the MS is designed to operate under ultra-high vacuum conditions, with a pressure close to $10^{-11}$ mbar [15].

Fig. 1 The KATRIN beamline. From left to right: the rear section (RS), the windowless gaseous tritium source (WGTS), the differential pumping section (DPS), the cryogenic pumping section (CPS), the pre-spectrometer (PS), the main spectrometer (MS) with surrounding air-coils, and the focal-plane detector (FPD) system.

Superconducting solenoids placed at the entrance and exit of the MS generate the magnetic flux tube that is responsible for adiabatically guiding electrons through the vessel [16].
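The energy filtering just described can be summarized in a single condition: only the longitudinal kinetic energy that remains after the adiabatic reduction of the transverse motion has to exceed the retarding energy $qU_0$ at the analyzing plane. The sketch below is a simplified, non-relativistic illustration of that condition; the field values and starting angles are illustrative choices, not KATRIN design parameters.

```python
# Simplified, non-relativistic MAC-E filter transmission condition: the
# transverse energy scales with the magnetic field along the adiabatic path,
# and the remaining longitudinal energy must exceed the retarding energy qU0.
# Field values and angles below are illustrative assumptions.
import math

def transmitted(e_kin_ev, theta_deg, qU0_ev, b_start, b_ana):
    """True if an electron with kinetic energy e_kin_ev (eV) and pitch angle
    theta_deg in field b_start (T) passes the retarding potential qU0_ev (V)
    at the analyzing plane, where the field is b_ana (T)."""
    e_perp_start = e_kin_ev * math.sin(math.radians(theta_deg)) ** 2
    e_perp_ana = e_perp_start * (b_ana / b_start)      # adiabatic invariance
    e_parallel_ana = e_kin_ev - e_perp_ana
    return e_parallel_ana > qU0_ev

for theta in (0, 30, 60, 85):
    ok = transmitted(e_kin_ev=18_601, theta_deg=theta, qU0_ev=18_600,
                     b_start=3.6, b_ana=3e-4)
    print(f"theta = {theta:2d} deg -> transmitted: {ok}")
```

With these assumed fields, an electron only 1 eV above the retarding energy is transmitted at small pitch angles but rejected at large ones, which is the eV-scale filtering behavior described in the text.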
Air-coils surrounding the MS are used to fine-tune the magnetic field inside the vessel and compensate for the Earth's magnetic field [17,18]. By manipulating the magnitude and polarity of the air-coil currents, it is possible to create nonstandard magnetic field settings inside the vessel (see Sect. 4). The magnetic field inside the MS acts as a passive shield against charged particles emitted from the vessel surface; the particles are deflected by the Lorentz force back toward the walls of the vessel, or they follow magnetic field lines that do not reach the detector. Additional shielding is provided by a two-layer wire electrode system installed near the inner surface of the MS [19]. The inner electrodes (IE) can be placed at an offset potential ∆U IE relative to the vessel (i.e. U IE = U 0 + ∆U IE ). If ∆U IE is set to a negative value, low-energy electrons emitted from the vessel walls will be repelled by the wires back toward the surface. Focal-plane detector The FPD [20] is a monolithic, segmented silicon PIN diode that detects particles emerging from the MS. These particles pass through an ultra-high vacuum system mated to the exit of the MS. Charged particles can be further accelerated using a post-acceleration electrode (PAE), with voltage U PAE , immediately preceding the detector wafer; this acceleration helps distinguish signal electrons from background originating within the FPD system. A superconducting solenoid focuses charged particles onto the wafer, which has a sensitive area with a diameter of 90 mm [20]. The dartboardsegmentation pattern of the wafer into 148 equal-area pixels provides sensitivity to the transverse spatial distribution of the particles. The wafer is divided into 12 concentric rings, each with 12 pixels, and a central bullseye with four pixels. The PIN diode is biased with a voltage U bias ; signals pass through preamplifiers (located in vacuum) before readout. Energy and timing for each event are analyzed online in the data-acquisition system using a cascaded pair of trapezoidal filters. Energy calibration is provided by gammas from an 241 Am calibration source. The electron response is characterized using a photo-electron source with adjustable energy [20]. Each source can be inserted into the line of sight of the FPD for a periodic, dedicated calibration run. An energy resolution of 1.52 keV (full width at half-maximum, FWHM) and timing resolution of 246 ns (FWHM) have been achieved with the system using 18.6 keV electrons and a 6.4 µs shaping time [20]. Environmental gamma radiation in the spectrometer hall During standard KATRIN operation, β-particles that are detected by the FPD must overcome the retarding potential applied to the MS. This potential reaches its largest value near the middle of the vessel, approximately equidistant from both ends of the MS. Because the value of the retarding potential will be scanned close to the endpoint energy, βparticles traveling through the middle region of the MS will have low kinetic energies, below about 30 eV [1]. Because of the relatively poor energy resolution of the FPD compared to the MS, any low-energy secondary electrons produced in this region cannot be energetically distinguished from the signal β-particles. When passing through the MS steel, environmental gamma radiation can produce secondary electrons. Of particular concern is the generation of "true-secondary" electrons, which have energies below 50 eV [21]. 
The inner surface of the MS provides a large area for electron emission: 690 m 2 for the steel hull and 532 m 2 for the wire electrode system (although the effective surface area for emission from the latter is reduced due to the two-layer structure of the IE) [15]. Due to imperfections in the magnetic and electrostatic shielding, secondary electrons emitted from these surfaces may have a small probability to enter the sensitive magnetic flux tube that connects to the FPD. Table 1 Specific activities (in units of Bq/kg) determined from radioassay measurements for materials in the KATRIN spectrometer hall (columns: Isotope, Wall concrete, Floor concrete, MS steel). The activity of the wall concrete was only measured for the basement walls; for the upper walls, the same distribution of activities was assumed. The steel activities were measured at the Oroville Low Background Facility [22]. As discussed in Sect. 1, the validity of this background-generating mechanism has been confirmed with other MAC-E-filter spectrometers. Because of the increased size of the spectrometer, there is the potential for a significant background contribution from gamma-induced surface electrons in the MS. Radioactivity in the spectrometer hall The MS was constructed using low-radioactivity materials in order to limit background production from environmental gammas. The central cylindrical portion of the MS vessel was built from 32 mm-thick sheet metal, composed of type 316LN stainless steel. A sample of this metal, from the same batch used for the MS vessel, was measured for radioisotopes; the results of this study are given in Table 1. However, the primary source of gammas in the KATRIN spectrometer hall is not the vessel itself but rather the intrinsic radioactivity of the concrete used to construct the walls and floors. The walls of the spectrometer hall are made of standard concrete. To reduce the effect of ambient gamma radiation, the floors were built using a low-activity concrete due to their close proximity to the MS. During construction of the hall, samples from every load of low-activity concrete were monitored with a NaI detector to ensure the activity fell within specifications. Additionally, the gamma spectra of several samples of the concrete were measured at KIT using a shielded HPGe detector. The concrete activities shown in Table 1 were derived from these measurements. 40 K was found to be the largest contributor to the activity in the concrete, followed by the 238 U and 232 Th decay chains. Simulation of the gamma flux To better understand the background due to environmental gamma radiation, simulations of the gamma flux in the MS were performed with the GEANT4 simulation toolkit [23][24][25], version 10.4.p02. A simplified reproduction of the KATRIN spectrometer hall was implemented; the geometry included the steel MS vessel and the concrete walls and floors (Fig. 2). Fig. 2 The GEANT4 simulation geometry, consisting of the steel MS (red), the concrete walls (gray) and floors (green), and, optionally, water tanks (blue) and basin water (purple). The yellow sphere indicates the position of the optional 60 Co source. The coordinate axes for the experiment are also shown, where the z-direction (x-direction) corresponds to due north (west). For orientation, the FPD is located north of the spectrometer. Radioactive isotopes were uniformly created in the walls, floors, and steel with the relative production rates determined from the radioassay measurements given in Table 1. 
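The normalization of such a simulation follows directly from the measured specific activities and the mass of each source volume. The sketch below only illustrates that bookkeeping; the specific activity, concrete density, and wall volume used here are assumed placeholder numbers, not the Table 1 activities or the actual hall geometry.

```python
# Illustrative only: all three inputs are assumed values, not measured ones.
specific_activity_k40 = 200.0   # Bq/kg of 40K in concrete (assumed)
concrete_density = 2300.0       # kg/m^3, typical structural concrete
wall_volume = 500.0             # m^3 of wall concrete (assumed)

wall_mass = concrete_density * wall_volume            # kg
decays_per_second = specific_activity_k40 * wall_mass # decays/s feeding the simulation
print(f"40K decays in the walls: {decays_per_second:.2e} /s")
```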
The effect of 222 Rn in the air was also included, assuming an activity of 49 ± 15 Bq/m 3 , which is the average indoor radon level in Germany [26]. The decays of the isotopes and any subsequent daughters were handled by the GEANT4 Radioactive Decay Module, and secular equilibrium was assumed. This module has recently been updated by the GEANT4 collaboration to include newer versions of the Evaluated Nuclear Structure Data File (ENSDF) datasets [27] and to better ensure energy conservation for decays [25]. Available physics processes were set by the Shielding EMZ physics list, which uses the most accurate electromagnetic physics models and is well-suited for shielding simulations [28]. The gamma fluxes determined from simulation are listed in Table 2 for each geometrical component. As expected, the concrete walls are the primary contributor to the gamma flux inside the MS, followed by the concrete floors. The gammas originating from the steel vessel and the air, combined together, make up less than 4 % of the rate inside the MS. Fig. 3 The energy spectrum measured by a HPGe detector in the spectrometer hall (grey), and the simulated spectrum for energy deposited in a germanium crystal by gammas originating from a 0.6 m by 0.6 m region of the concrete wall (red). The simulated spectrum has been normalized to the measured spectrum, with a normalization factor of 5.7. Prominent lines from 208 Tl ( 232 Th decay chain) and 40 K are labeled for reference. Gamma spectra were measured at various locations in the spectrometer hall using a HPGe detector. One of these spectra, collected for the detector facing the western wall in the basement of the hall, is shown in Fig. 3. The dominant contributor to the gamma spectrum is the 1461 keV line from the decay of 40 K [29]. To compare with the measurement, a very simple germanium detector was implemented in the GEANT4 simulation. The simulated spectrum (see Fig. 3) qualitatively replicates the important features of the measured gamma spectrum. Overall, the simulation is able to adequately approximate the spectrum of environmental gammas in the spectrometer hall. Background measurements During the summer of 2015, KATRIN proceeded with a measurement phase with only the MS and the FPD system, lasting several months. One of the primary goals of this commissioning campaign was to study background events originating from the MS. As described in the following subsections, the electron rate was measured by the FPD for several configurations that modified the flux of environmental gammas at the MS. The voltage settings used for these measurements are given in Table 3. The electron rates were measured for the energy region of interest (ROI), which was defined as the range between 3 keV below and 2 keV above the expected electron energy at the detector. An asymmetric energy window is applied since the shape of the electron peak has a low-energy tail. For an electron with initial energy E 0 ≈ 0 eV emitted at the MS surface, the expected energy is set by the difference in the electric potential between the detector wafer and the MS inner electrode (E expected = E 0 + e[U PAE + U bias − U 0 − ∆U IE ]). Due to a broken preamplifier module in the detector readout, six FPD pixels (each located in a different pixel ring) were not functional. The rate for each detector pixel ring with a missing pixel is linearly scaled by a corrective factor of 12 11 . Several magnetic field configurations were utilized when studying the gamma-induced background. 
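Before turning to those field configurations, the region-of-interest bookkeeping described above can be restated in a few lines of code. Only the formula E_expected = E_0 + e[U_PAE + U_bias − U_0 − ∆U_IE], the asymmetric (−3 keV, +2 keV) window, and the 12/11 missing-pixel correction are taken from the text; the voltage values in the example are illustrative placeholders, not the Table 3 settings.

```python
def expected_energy_keV(U_PAE, U_bias, U_0, dU_IE, E0_eV=0.0):
    """E_expected = E_0 + e*(U_PAE + U_bias - U_0 - dU_IE); voltages in volts give eV."""
    return (E0_eV + U_PAE + U_bias - U_0 - dU_IE) / 1e3

def in_roi(E_keV, E_expected_keV):
    """Asymmetric region of interest: 3 keV below to 2 keV above the expected energy."""
    return (E_expected_keV - 3.0) <= E_keV <= (E_expected_keV + 2.0)

def corrected_ring_rate(ring_rate_cps, missing_pixels=1, pixels_per_ring=12):
    """Linear rescaling for a ring with broken pixels (a factor 12/11 for one missing pixel)."""
    return ring_rate_cps * pixels_per_ring / (pixels_per_ring - missing_pixels)

# Illustrative voltages (placeholders, not the actual Table 3 values):
E_exp = expected_energy_keV(U_PAE=10e3, U_bias=120.0, U_0=-18.6e3, dU_IE=-100.0)
print(f"expected energy: {E_exp:.2f} keV, 27 keV event in ROI: {in_roi(27.0, E_exp)}")
```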
A symmetric magnetic field, similar to the planned field for standard KATRIN operation, is shown in the upper panel of Fig. 4. In this setting, the vast majority of electrons emitted from the MS surface should not reach the detector due to the inherent magnetic shielding of the flux tube [30]. In order to directly study electrons emitted from the surface, an asymmetric field configuration can be used by adjusting the currents applied to the air-coils surrounding the MS; an example field is shown in the lower panel of Fig. 4. Because the magnetic field lines directly connect the vessel walls to the FPD, this setting allows a significantly larger fraction of surface electrons to be detected. To ensure the measured electrons are emitted from a well-defined surface region of the MS, certain detector pixels are excluded from the rate analysis. For the Asym. M configuration, the inner 16 detector pixels are excluded, while the inner 28 (4) pixels are excluded for the Asym. U (Asym. D) configuration. Enhancement of gamma flux To increase the gamma-induced background, a 60 Co source with a total activity of 53.3 ± 2.7 MBq was positioned in the vicinity of the MS (Fig. 5). The source was originally used for geological surveys along underground piping and is therefore equipped with a shielded scintillator detector. It is housed in a lead-shielded transport container that allows convenient handling and transport when not in active use. Measurements were completed with the source partially outside its lead shielding ("open" configuration) and completely inside its lead shielding ("closed" configuration). The decay of 60 Co primarily results in the cascade emission of two gammas at 1173 keV and 1332 keV [31]. Measurements of the electron rate with the FPD system were performed with the source located at different locations near the MS. However, extended measurements with the "closed" configuration were only performed at one position: the 60 Co source located under the west side of the MS, approximately equidistant from the ends of the vessel (see Fig. 2). Therefore, only results from this position are presented here. The FPD rate was measured for several magnetic-field and electrostatic shielding configurations; the rates can be found in Table 4. The effect of the 60 Co source can clearly be seen in the bottom-right portion of the detector wafer in the left and center panels of Fig. 6 (asymmetric magnetic field), but is absent in the right panel (symmetric magnetic field). The measurements with the 60 Co source demonstrate the effectiveness of the shielding inside the MS against gamma radiation. The background rate due to the 60 Co source under an asymmetric magnetic configuration dropped from 231.7 ± 0.9 cps to 26.8 ± 0.3 cps with the addition of electrostatic shielding (changing ∆U IE from 0 V to −100 V). A further reduction in the rate by at least three orders of magnitude occurred when switching to the symmetric magnetic configuration (0.005 ± 0.006 cps). Though a large number of 60 Co-induced secondary electrons were emitted from the MS surface, no significant rate effect was observed with the nominal magnetic field setting. Read-out line Fig. 5 The 60 Co source used to increase the gamma radiation at the MS. The upper panel is a schematic of the source (in the "closed" configuration). In the bottom panel, the source can be seen installed next to the air-coils, beneath the MS vessel. A simulation of the gamma flux from the 60 Co source was performed using the geometry described in Sect. 3.2. 
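As a quick cross-check of the shielding factors quoted above, the reductions follow directly from the measured rates; the snippet below simply redoes that arithmetic with basic uncorrelated error propagation and is not part of the published analysis.

```python
import math

def ratio(a, da, b, db):
    """a/b with uncorrelated error propagation."""
    r = a / b
    return r, r * math.sqrt((da / a) ** 2 + (db / b) ** 2)

elec, elec_err = ratio(231.7, 0.9, 26.8, 0.3)      # effect of setting dU_IE = -100 V
print(f"electrostatic suppression: {elec:.1f} +/- {elec_err:.1f}")
# The symmetric-field rate of 0.005 +/- 0.006 cps is consistent with zero, so only a
# lower bound on the additional magnetic suppression is meaningful:
print(f"magnetic suppression: > {26.8 / (0.005 + 0.006):.0f}")
```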
Table 5 shows the simulated rates for gammas traversing the inner surface of the MS vessel. The presence of the 60 Co source increases the gamma flux through the entire MS surface by about a factor of 8. The change in rate due to the open 60 Co source is plotted as a function of axial position along the MS in Fig. 7, where each bin corresponds to a detector pixel ring which images a specific axial region of the MS surface; also shown is the simulated gamma rate induced by the source at the inner surface of the MS, scaled to match the total measured rate, with the location of the 60 Co source in the simulation geometry marked by a dashed line. The distributions for the measured electron rate (asymmetric field setting) and the simulated gamma rate exhibit nearly identical shapes and peak at the same axial position. Table 4 Results from the FPD measurements with the 60 Co source, where the rates are measured in units of counts per second (cps). Rate is the raw result from the detected counts and measurement duration ∆t, while Rate * is corrected for the broken preamplifier module. The errors on the rates are statistical. The magnetic field settings used are those shown in Fig. 4. Table 5 Simulated rate (10 6 gammas/s) of gammas traversing the inner surface of the MS for the 60 Co and water-shielding configurations. The rate includes all crossings (ingoing and outgoing) of the inner surface. The errors include both statistical and systematic uncertainties, although the latter only includes the uncertainty on the activity of the gamma sources (see Table 1). Other systematic effects, such as the accuracy of the included GEANT4 physics processes and the correctness of the simulation geometry, were not calculated. Suppression of gamma flux In an effort to reduce the gamma flux originating from the bottom floor of the spectrometer hall, water shielding was temporarily added below the MS. The basin beneath the MS (24.1 m long by 5.6 m wide) was filled with water to a depth of 20 cm. Additionally, a total of four flexible water tanks (each approximately 6.5 m long by 3.2 m wide by 0.6 m high) were installed next to the basin to increase the shielded area. The water tanks can be seen in Fig. 8. The background rates for measurements with and without water shielding are shown in Table 6. Two asymmetric field settings, with field lines intersecting different regions of the MS surface, were implemented; similar reductions in the electron rate due to the water shielding were found for both (∼0.4 %). For the symmetric magnetic field setting, the shielding had no significant effect on the electron rate. To investigate the effect of the shielding on the gamma flux, water was added to the simulation geometry using the dimensions cited above (see Fig. 2). The simulated rates are shown in Table 5. The addition of water shielding reduced the gamma rate inside the MS by about 7 %. The change in rate due to the water shielding is plotted as a function of axial position along the MS in Fig. 9. The simulated gamma rate is mostly flat across the measured range and roughly 
matches the measured electron rate distribution (which is statistics-limited). Fig. 9 The reduction in the electron rate at the inner surface of the MS, caused by the water shielding, for two asymmetric magnetic field settings. The region shielded by the water tanks is roughly z =−6.5 m to 6.5 m. The simulated decrease in the gamma rate is also shown in each figure, normalized to the measured rate. Gamma-induced background contribution By combining the asymmetric magnetic field measurements and simulation results, it is possible to compute two quantities of interest: the secondary electron yield for gamma radiation traversing the inner surface of the MS, and the fraction of secondary electrons caused by environmental gamma radiation. The symmetric field measurements provide a way to determine the gamma-induced background contribution under standard operating conditions in future m ν e measurements. The relevant values used to calculate these quantities are listed in Table 7. Three different rates are to be distinguished: the measured FPD electron rate (R), the calculated gamma-induced MS electron emission rate (S), and the calculated gamma rate through the MS (Φ). Without the 60 Co source or water shielding, these rates can be decomposed as R = R_env + R_other, S = S_env + S_other, and Φ = Φ_env + Φ_other, where "env" indicates the contribution from environmental gamma emitters (e.g. concrete) and "other" indicates the contribution from other backgrounds (e.g. cosmic-ray muons). The change in rate due to the 60 Co source is defined as the difference between the open-source and closed-source measurements, ∆_R = R(open) − R(closed), while for water shielding the change in rate is the difference between the measurements with and without the shielding, ∆_R = R(with water) − R(without water). Identical formulations hold for ∆_S and ∆_Φ. Secondary electron yield The yield Y is the gamma-induced electron rate divided by the gamma rate through the same surface. This can be computed from the effect of the 60 Co source or water shielding as Y = ∆_S / ∆_Φ. ∆_S can be computed from ∆_R after including electron transport and detection efficiencies, ∆_S = ∆_R / (ε P_arrival), where ε = 0.950 ± 0.028 is the FPD detection efficiency [20] (ignoring the effect of backscattering from the detector surface [32]) and P_arrival is the average arrival probability for electrons. Because of the magnetic mirror effect, electrons emitted from the MS surface have a small probability to reach the FPD, which depends on their initial energy and emission angle relative to the magnetic field direction. P_arrival was calculated using KASSIOPEIA [33], the particle-tracking simulation software developed by the KATRIN collaboration. For each magnetic field configuration, 1.6 × 10 5 electrons were started on the MS surface with emission angles sampled from a cosine angular distribution [21,34,35]. The electron energy spectrum was assumed to have the standard secondary-emission form dN/dE ∝ E/(E + W)^4 [35][36][37], where E is the electron energy and W = 3.5 eV [38] is the work function of the MS surface. The validity of these assumptions on the electrons' initial properties was shown in the context of the background analysis of cosmic-ray muons [7]. Using the 60 Co measurement, one finds Y = 3 × 10 −4 e − /γ. However, the yields derived from the water-shielding measurements give consistent values which are a factor of 2.6 larger than the 60 Co result (see Table 7). This difference is likely the result of an incorrect value for ∆ Φ obtained from simulations; the tension can be alleviated by decreasing the simulated effect of the 60 Co source or by increasing the simulated effect of the water shielding. 
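A minimal numerical restatement of the yield calculation just described is given below. The efficiency ε = 0.950 comes from the text, while the rate changes and the arrival probability are placeholder values (the actual numbers live in Tables 5 and 7, which are not reproduced here), so the printed yield is illustrative only.

```python
def secondary_yield(delta_R_cps, epsilon, p_arrival, delta_Phi_per_s):
    """Y = delta_S / delta_Phi, with delta_S = delta_R / (epsilon * P_arrival)."""
    delta_S = delta_R_cps / (epsilon * p_arrival)   # electrons emitted from the MS surface
    return delta_S / delta_Phi_per_s                # electrons per gamma crossing the surface

Y = secondary_yield(delta_R_cps=200.0,    # assumed change in the FPD electron rate
                    epsilon=0.950,        # FPD detection efficiency (from the text)
                    p_arrival=0.03,       # assumed average KASSIOPEIA arrival probability
                    delta_Phi_per_s=2e7)  # assumed change in the simulated gamma rate
print(f"secondary electron yield: {Y:.1e} e-/gamma")
```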
Fraction of secondary electrons induced by gammas The fraction of secondary electrons emitted from the MS surface which are caused by environmental gamma radiation is f_env = S_env / (S_env + S_other). Assuming that the FPD rate is proportional to the flux of gammas in the MS, the relation R_env / Φ_env = ∆_R / ∆_Φ applies. The gamma-induced fraction can thus be obtained by combining these two relations as f_env = (∆_R / ∆_Φ)(Φ_env / R). Table 7 shows the values of f env calculated from the 60 Co measurements under the two electrostatic shielding conditions, as well as from the gamma suppression measurements with water shielding. The results indicate that less than 6 % of secondary electrons emitted from the MS surface are induced by environmental gammas. However, the values from the 60 Co and water shielding measurements differ by a factor of 2.5. The scale of this discrepancy is equivalent to the difference in the electron yields between the two types of measurements, as discussed in the previous section. Gamma-induced background rate under standard conditions Similar to the asymmetric field measurements, it is possible to use the proportionality relation R_env = Φ_env (∆_R / ∆_Φ) to determine the effects of environmental gamma radiation under symmetric field conditions. Applying the measured and simulated rates listed in Table 8, one finds that R env = 0.7 ± 0.9 mcps (millicount per second), which is consistent with zero. Assuming the rate is Gaussian, one can follow the unified approach [39] and set an upper limit on the gamma-induced background rate, obtaining R env ≤ 2.2 mcps (90 % C.L.). However, one must account for the discrepancy in the results between the 60 Co source and water shielding measurements, as mentioned in the previous sections. A conservative approach is to allow for the possibility that the simulation overestimates the flux of gammas through the MS from the 60 Co source by a factor of 2.6. In this case, one finds R env ≤ 5.6 mcps (90 % C.L.). Given a nominal rate of 561 mcps, this result indicates that less than ∼1 % of the MS background rate can be attributed to environmental gamma radiation. Table 8 Background rate R env induced by environmental gamma radiation under standard conditions (symmetric magnetic field and ∆U IE = −100 V). The relevant values used to calculate this rate are also listed. A corrective factor of 2.6 was applied to the upper limit on R env for the 60 Co source.
                                            60 Co source    Water shielding
∆_Φ (10^5 γ/s)                              182 ± 9         −1.53 ± 0.02
Φ_env (10^5 γ/s)                            25.6 ± 0.9      25.6 ± 0.9
∆_R (mcps)                                  5 ± 6           0.8 ± 4.5
R_env (mcps)                                0.7 ± 0.9       −13 ± 75
Upper limit (90 % C.L.) on R_env (mcps)     5.6             110
A similar procedure was followed with respect to the water shielding data, giving a limit of R env ≤ 110 mcps (90 % C.L.). Here, a corrective factor of (2.6) −1 (which accounts for the possibility that the simulation underestimates the effect of the water shielding) was not applied, in order to obtain a conservative result. Only a weak limit is obtained in this case due to the small value of ∆ Φ and the large uncertainty on ∆ R. 
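For readers who want to reproduce the flavour of the limit setting, the sketch below implements the unified (Feldman-Cousins) construction for a Gaussian measurement bounded at zero. It is an illustrative re-implementation, not the collaboration's code, and its numerical precision is set by the grid sizes chosen here.

```python
import numpy as np
from scipy.stats import norm

def unified_upper_limit(x0, sigma, cl=0.90, mu_max=8.0, n_mu=400, n_x=4001):
    """Feldman-Cousins upper limit for a Gaussian measurement x0 +/- sigma
    with the physical constraint mu >= 0 (unified approach)."""
    xs = np.linspace(-8.0 * sigma, (mu_max + 8.0) * sigma, n_x)
    dx = xs[1] - xs[0]
    p_best = norm.pdf(xs, np.clip(xs, 0.0, None), sigma)   # likelihood at best physical mu
    upper = 0.0
    for mu in np.linspace(0.0, mu_max * sigma, n_mu):
        p = norm.pdf(xs, mu, sigma)
        order = np.argsort(p / p_best)[::-1]                # rank x bins by likelihood ratio
        csum = np.cumsum(p[order] * dx)
        accepted = xs[order[: np.searchsorted(csum, cl) + 1]]
        if accepted.min() <= x0 <= accepted.max():          # x0 lies in mu's acceptance band
            upper = mu
    return upper

# R_env = 0.7 +/- 0.9 mcps from Table 8; compare with the quoted 2.2 mcps limit.
print(f"90% C.L. upper limit: {unified_upper_limit(0.7, 0.9):.2f} mcps")
```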
Discussion and conclusions Low-energy background electrons produced inside the MS are indistinguishable from signal β-particles. Thus, it is necessary to understand and limit the various sources of electrons in the MS. Measurements using the asymmetric magnetic field, combined with simulation, indicate that less than 6 % of secondary electrons emitted from the MS surface are induced by environmental gammas. The measurements with the 60 Co source show that the electrostatic and magnetic shielding are highly effective in mitigating the effect from secondary electrons. Changes in the flux of environmental gammas have little if any effect on the MS background rate under standard operating conditions. When combined with simulation, the results indicate that less than 5.6 mcps (90 % C.L.) of the MS background rate is gamma-induced. The residual background has characteristics that fit with the ionization of Rydberg atoms [40,41]. These atoms are highly excited neutral atoms which can penetrate the electromagnetic shielding implemented in KATRIN. Furthermore, thermal radiation at room temperature is energetic enough to ionize these atoms and create background electrons. The production of these Rydberg atoms is thought to primarily originate from the decay of 210 Pb within the MS walls [42]. Plans to mitigate this background are currently being studied.
7,396.2
2019-03-01T00:00:00.000
[ "Physics" ]
A stratified flow of a non-Newtonian Casson fluid comprising microorganisms on a stretching sheet with activation energy A stratified flow may be seen regularly in a number of significant industrial operations. For instance, the stratified flow regime is typically used by gas-condensate pipelines. Clearly, the stratified two-phase flow zone can be achieved only for the limited set of working conditions in which this flow arrangement is stable. In this paper, the authors consider the laminar, steady, and incompressible magnetohydrodynamic flow of a non-Newtonian Casson fluid past a stratified extending sheet. The features of bio-convection, Brownian motion, thermal radiation, thermophoresis, heat source, and chemically reactive activation energy have been employed. The set of equations governing the fluid flow is converted into ordinary differential equations by suitable similarity variables. A semi-analytical investigation is performed with the homotopy analysis method. Agreement of the current results with previously published results is also verified. The outcomes showed that the velocity distribution of the fluid flow lessens with higher Casson and magnetic factors. The temperature profile of the fluid flow shrinks as the Prandtl number and Casson factor increase, and it enlarges with higher values of the thermal radiation, magnetic, and Brownian motion factors. It is found that the growing thermophoretic and Brownian motion factors reduce the rate of thermal flow of the Casson fluid flow. In contrast, the increasing thermal stratification parameter increases the thermal flow rate of the fluid. Fluids that disobey the Newtonian law of viscosity, such as ketchup, honey, pastes, paints, gels, and polymer solutions, are termed non-Newtonian fluids. Many of their applications include printing technologies, food products, drag-reducing agents, and polymer fluid flow through pipes at the industrial level. Shah et al. 1 explored gold-blood nanofluid flow between two porous plates by considering micropolar effects on the flow system and determined that the linear motion of the fluid is opposed by the microrotation factor, whereas the rotational motion is augmented in this process. Salahuddin et al. 2 analyzed the permeable squeezing flow of a Maxwell fluid with thermally radiative and chemically reactive effects and showed that the velocity distribution weakens while the thermal distribution is boosted by a hike in the porosity factor. Sarada et al. 3 revealed the effects of MHD on a non-Newtonian fluid past an extending sheet and established that the velocity panel deteriorates while the temperature is amplified with progression in the magnetic factor. Shehzad et al. 4 studied MHD non-Newtonian fluid flow past an inclined, permeable, rotating plate. Abiev 5 mathematically studied the two-phase hydrodynamics of Taylor flow for different fluids through a conduit and matched his results with published works, with fine agreement amongst all results. Banerjee et al. 6 inspected the effects of electro-viscous flow of a non-Newtonian fluid in a channel with a slip condition at higher zeta potential. Gautam et al. 7 considered MHD bio-convective non-Newtonian fluid flow subject to the impact of multiple slip conditions and nonlinear thermal radiation. 
Kumar and Sahu 8 have discussed the non-Newtonian flow of fluid on a spinning cylinder in a regime of flow of fluid and deliberated that the lift and drag coefficients have declined with augmentation in Reynolds number and rotation speed. He et al. 9 have explored the dynamics of mixed convective and thermally radiative non-Newtonian fluid flow on a surface using power law velocity slip condition along with Hall current and proved that thermal characteristics enlarged with progression in radiation factor. Archana et al. 10 inspected Casson squeezing nanofluid flow subject to time variations, slip conditions, and magnetic effects and have establish that the velocity panel has heightened for greater squeezing factor, while the thermal panel exhibited an identical performance for thermophoresis and Brownian motion factors. Ganesh 11 scrutinized nonlinearly radiative flow of nanofluid in 3D space on an exponentially elongating surface and has solved the modeled problem computationally. Kumar et al. 12 considered cross diffusive properties for mixed convective MHD fluid flow with impacts of nonlinear radiation on a vertical surface and have noted that growth in Soret as well as Dufour factors have upsurge the concentration and thermal distributions. Kumar et al. 13 evaluated the impressions of convective constraints and uniform heat sink/source on nanofluid flow using Marangoni convective effects and have noted that with larger values of heat source, maximum heat is added to the system that has augmented the thermal distribution. Zeeshan et al. 14 conducted a thermal analysis for nanofluid flow (of non-Newtonian nature) on a parabolic curve using chemical reaction and deduced that thermal distribution has weakened with upsurge in Casson factor while it has boosted with advancement in chemically reactive factor. Salahuddin et al. 15 examined the variations in thermophoresis properties for Carreau fluid flow on a parabolic elongating surface with impacts of heat generation and proved that thermal panels have amplified with progression in heat generation, while upsurge in Prandtl number has an adverse impact on heat transmission. Salahuddin et al. 16 18 inspected the transportation phenomenon for 2D cross nanofluid flow on a parabolic surface using the mass and heat flux model proposed by Cattneo-Christov and highlighted that thermal and concentration distributions have weakened for progression in corresponding relaxation factors. The collective impact of free and forced convection is generally termed as mixed convection. It plays a substantial part in numerous engineering uses for instance solar collectors, electronic equipment and nuclear reactors. Such a process occurs whenever the influence of buoyancy force is more substantial in the forced convective process or the influence of forced flow in the free convective process becomes more dominant. Wahid et al. 19 discussed computationally the mixed convective fluid flow at three dimensional stagnant point of vertical plate and have determined that velocity of fluid has weakened while temperature has enlarged with expansion in nanoparticles concentration. Qureshi et al. 20 computationally simulated MHD mixed convective fluid flow in a conduit with cavities and exposed that by improving the radius of channel the thermal flow in the channel has enhanced by 119%. Islam et al. 
21 explored mixed convective nanofluid flow on an elongating cylinder with the impact of thermal source as well as sink and have established that concentration has deteriorated while temperature has risen with progress in Brownian motion factor. Al-Hassani et al. 22 have simulated mixed convective nanofluid flow in a triangular cavity by keeping the bottom of cavity as insulated while the inclined wall has kept at some fixed temperature. Patel 23 has explored the thermal production influences upon mixed convective MHD fluid flow at the stagnation point of permeable medium. Fu et al. 24 have analyzed comprehensively the mixed convective nanofluid flow over a surface and have discussed the influences of various emerging factors on flow distributions. The readers can further have an insight of related concept in Refs. [25][26][27][28][29][30] . Fluids that are conducted electrically such as salted water and plasma etc. are named as magnetohydrodynamic (MHD). Many of their applications are comprised of the areas of biomedical engineering, medical sciences, chemical engineering, and fluid dynamics etc. The main benefit of applying the principles of MHD is to divert the flow filed in the desired direction by shifting the boundary layer development. The theory of MHD was first introduced by Hartmann 31 . Waqas et al. 32 considered the thermally radiative MHD fluid flow on a stratified convective sheet and has deduced that the thermal distribution and Nusselt number have amplified with boosted values of curvature factor. Jamshed et al. 33 physically specified the MHD mixed convection nanofluid flow through the inner elliptic cylinder and have deduced that Hartmann number has a positive impact upon thermal characteristics. Asjad et al. 34 evaluated the impacts of activated energy over MHD fluid flow past a elongating surface using the impact of microorganism and have explored that velocity of fluid has boosted with upsurge in mixed convection and magnetic parameters. Bejawada et al. 35 Thermophoresis and Brownian motion phenomena are the mechanisms of mass as well as thermal transmission of tiny particles in a manner of reducing the concentration and temperature gradients that also influenced these tiny particles associated with bulk surfaces. These are the two substantial sources for migration of fluid particles. Thermophoresis and Brownian motion have many applications in different fields such as nuclear safety phenomena, hydrodynamics, atmospheric pollution, and aerosol technology etc. Pasha et al. 41 have applied the analytical approaches for discussing the influences of magnetic factor and Brownian motion as well as thermophoresis effects amid two plates and explored that thermal flow has been upsurge for progression in Brownian and thermophoretic factors. Saghir and Rahman 42 have explored Brownian motion and thermophoresis effects over fluid flow in a channel and have deduced that diameter of nanoparticles has more impact upon the thermal diffusion enhancement. Soomro et al. 43 have discussed computationally the impression of Brownian motion and thermophoresis by using Crank-Nicolson approach for solution of modeled equations. Shah et al. 44 have discussed diffusions effects of thermophoresis and Brownian motion on upper convective Maxwell nanofluid flow over vertical shaped surface and have confirmed that enhancement in Brownian factor has promoted thermal conductance and motion of nanoparticles. 
Harish and Sivakumar 45 have exposed the influence of nanoparticles distribution on fluid flow through an enclosure taking the effects of thermophoresis and Brownian motion in the fluid flow system. Kalpana et al. 46 have studied the MHD hybrid nanofluid flow in irregular shaped channel using the influences of Brownian motion and thermophoresis and have explored that fluid's thermal profiles have been amplified with upsurge in magnetic factor, volume faction of nanoparticles and Brownian motion factor. Hazarika and Ahmad 47 have explored the behavior of thermophoresis and Brownian motion on nanoparticles flow and have explored that the growing diameter of nanoparticles has enhanced the Brownian motion within the flow system. Microorganisms like microalgae and bacteria are comparatively denser than water and subsequently capable to swim in reverse direction of gravity. During this phenomenon high magnitude of microorganisms are accumulating at the upper surface of suspension and are causing a disturbance in density of upper and lower layers of suspension. As a result a convection pattern is initiated due to the convective instability in aforementioned phenomenon. Such random motion is responsible for occurrence of bioconvection in the fluid flow process and has many practical applications such as ecological products like ethanol, fuels and fertilizers etc. Eldabe et al. 48 studied nanofluid flow using gyrotactic microorganisms and thermophoresis as well as Brownian motion and revealed that thermal flow panels have amplified with impact of magnetic factor and Brownian motion parameter. Ijaz et al. 49 www.nature.com/scientificreports/ microorganisms. Bhatti et al. 50 have investigated MHD Williamson nanoparticles flow amid rotary circular plates induced in a permeable medium subject to the influences of gyrotactic microorganisms. Alrabaiah et al. 51 have assessed parametrically the microorganism fluid flow amid conical gap of rotary disk and cone. Madhukesh et al. 52 have explored the dynamics of swimming microorganism and water-based nanofluid flow on a Riga plate with effect of thermal source and sink and have estimated that upsurge in slip effects has declined the profiles of concentration, temperature and velocity of fluid. Azam 53 has exposed numerically the mathematical model of bioconvective time-based nanofluid flow on a surface with nonlinear radiations and explored that fluid motion has deteriorated for expansion in bioconvective Rayleigh number. Azam et al. 54 designed mathematically a new model to investigate the impact of bio-convection and activation energy on chemically reactive nanofluid flow using nonlinearly radiative effects and have deduced that the microorganism motile number has dropped for progression in Peclet number and variance factor of microorganism. Waqas et al. 55 studied bio-convective MHD stratified nanofluid flow supported by gyrating and elongating sheet using dissipative and Joule heating effects. Keeping in mind the above literature, we are sure that there is very less work based on the stratified flow of a non-Newtonian Casson fluid flow over a stretching surface. For liquid-gas and liquid-liquid two-phase flow in a gravitational environment, stratified flow is a fundamental flow configuration in which the lightened fluid flows over the thicker one. This flow pattern may be seen regularly in a number of significant industrial operations. 
There are two-phase phenomena known as stratified and slug flows, which occur in many applications, such as petroleum transportation and chemical microreactors. Along a microchannel, the slug flow reduces the transfer distance and enhances the mixing process. The pressure drop in production pipelines is heavily influenced by phase flow rates, pipe diameters, and fluid properties such as density, viscosity, and surface tension. In the present work, the flow is considered to be incompressible, laminar, and steady, and various flow conditions are employed for the current problem. The analysis is organized in the following sections: the model formulation is presented in the "Problem formulation" section; the semi-analytical treatment is described in the "HAM solution" section; agreement with previous results is established in the "Validation" section; the "Discussion of results" section discusses the various results of the present analysis; and the final outcomes of this study are summarized in the "Conclusion" section. Problem formulation Assume the two-dimensional magnetohydrodynamic flow of a non-Newtonian Casson fluid on a stratified stretching sheet. The features of bioconvection and thermophoresis phenomena have been used along with the effects of heat source, thermal radiation, activation energy, and chemical reaction. The stretching velocity along the x-axis is denoted by u w = a x, with a > 0 a constant, and the y-axis is taken in the normal direction. B 0 is the strength of the magnetic field, which is applied normal to the flow direction. The surface and ambient temperatures are T w and T ∞ , respectively. Likewise, the surface nanoparticle and microorganism concentrations are denoted by C w and N w , and their ambient values by C ∞ and N ∞ , respectively. The geometrical representation of the two-dimensional Cartesian coordinate system is shown in Fig. 1. Under these suppositions, the Casson constitutive relation takes the form τ_ij = 2(μ_Bnf + P_y/√(2π)) e_ij for π > π_c and τ_ij = 2(μ_Bnf + P_y/√(2π_c)) e_ij for π < π_c, where π = e_mn e_mn. Above, π_c is the critical value of π, e_mn is the (m, n)th deformation-rate component, μ_Bnf represents the plastic dynamic viscosity, and the yield stress is given by P_y. The mass conservation, momentum, energy, nanoparticle concentration, and microorganism concentration equations can be written in the usual boundary-layer form. Here the velocity vector is V = (u, v) and β is the Casson factor. The flow is subject to appropriate conditions at the boundaries. The stratified restrictions are defined in terms of the positive constants e 1 , e 2 , e 3 , d 1 , d 2 , and d 3 . The similarity transformation variables follow Refs. 56,57. By incorporating Eq. (14), the transformed ordinary differential equations involve the following dimensionless parameters: Rc is the bio-convective Rayleigh number, Rb is the buoyancy ratio factor, ω is the mixed convection factor, Sc and Pr are the Schmidt and Prandtl numbers, Rd is the thermal radiation factor, S 1 is the thermal stratification parameter, S 2 is the concentration stratification factor, S 3 is the microorganism stratification factor, Nt is the thermophoresis parameter, Lb is the bio-convective Lewis number, Nb is the Brownian motion factor, E is the activation energy parameter, δ is the temperature difference factor, δ 1 is the microorganism difference parameter, M is the magnetic parameter, and σ is the chemical reaction factor. The default values and ranges of these factors are included in Table 1. To quantify the surface drag, the heat and mass transmission characteristics, and the density number of motile microorganisms, the corresponding local quantities of interest (skin-friction coefficient, Nusselt number, Sherwood number, and motile-microorganism density number) are evaluated.
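The full coupled system solved in the paper is not reproduced above, but the qualitative role of the Casson and magnetic factors can already be seen in a reduced momentum equation of the standard stretching-sheet form. The sketch below solves that assumed reduced problem numerically; buoyancy, bioconvection, stratification, and all thermal and solutal couplings are deliberately omitted, so it is an illustration under stated assumptions rather than the paper's model.

```python
import numpy as np
from scipy.integrate import solve_bvp

def casson_velocity(beta, M, eta_max=10.0):
    """Assumed reduced momentum equation for Casson flow over a stretching sheet:
        (1 + 1/beta) f''' + f f'' - (f')**2 - M f' = 0,
        f(0) = 0, f'(0) = 1, f'(eta_max) ~ f'(inf) = 0."""
    def rhs(eta, y):                      # y = [f, f', f'']
        f, fp, fpp = y
        fppp = (fp**2 + M * fp - f * fpp) / (1.0 + 1.0 / beta)
        return np.vstack([fp, fpp, fppp])

    def bc(ya, yb):
        return np.array([ya[0], ya[1] - 1.0, yb[1]])

    eta = np.linspace(0.0, eta_max, 200)
    guess = np.vstack([1.0 - np.exp(-eta), np.exp(-eta), -np.exp(-eta)])
    return solve_bvp(rhs, bc, eta, guess)

for beta, M in [(0.5, 0.5), (2.0, 0.5), (0.5, 2.0)]:
    sol = casson_velocity(beta, M)
    # A larger |f''(0)| means a faster-decaying velocity profile (thinner layer),
    # consistent with the reported decrease of f'(eta) for higher beta and M.
    print(f"beta={beta}, M={M}: f''(0) = {sol.y[2, 0]:+.4f}")
```

For this reduced equation an exact solution exists, f(η) = (1 − e^(−mη))/m with m = √((1 + M)/(1 + 1/β)), so the numerical f''(0) should approach −m; the trend that the velocity decays faster for larger β and M matches the behaviour reported later in the paper.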
HAM solution In this segment, the homotopic solution of the present model is obtained with HAM, which is applicable to both linear and nonlinear differential equations. The linear auxiliary operators L_f, L_θ, L_φ, and L_χ and the initial guesses are chosen so as to satisfy the boundary conditions, with the properties L_f[ζ_1 + ζ_3 e^ξ + ζ_2 e^(−ξ)] = 0, L_θ[ζ_5 e^ξ + ζ_4 e^(−ξ)] = 0, L_φ[ζ_7 e^ξ + ζ_6 e^(−ξ)] = 0, and L_χ[ζ_9 e^ξ + ζ_8 e^(−ξ)] = 0, where ζ_1 to ζ_9 are constants. The 0th-order deformation problem can then be written in the usual way, where q ∈ [0, 1] is the embedding parameter and ℏ is a nonzero auxiliary (convergence-control) parameter. The nonlinear operators N_f, N_θ, N_φ, and N_χ correspond to the transformed momentum, energy, concentration, and microorganism equations, respectively. Setting q = 0 recovers the initial guesses, while q = 1 recovers the full nonlinear solutions. Using a Taylor series expansion of the solutions about q = 0, the mth-order deformation problems are obtained and solved recursively, which yields the required series solution. The advantages of HAM include: i. Convergence control: HAM allows for control over the convergence of the solution series. The convergence of the solution can be accelerated or improved by adjusting the auxiliary parameter, known as the convergence-control parameter. This flexibility is valuable in obtaining accurate and reliable solutions, especially for highly nonlinear problems. ii. Applicability: HAM is applicable to a wide range of nonlinear differential equations arising in various scientific and engineering fields. It can handle problems with both regular and singular behavior, making it a versatile method for studying diverse phenomena. iii. Efficiency: HAM is computationally efficient compared to some computational approaches. The analytical nature of HAM eliminates the need for discretization of the problem domain, reducing computational efforts and memory requirements. iv. Non-perturbative approach: HAM does not rely on perturbation techniques and can capture both weakly and strongly nonlinear behaviors of the system. This makes it a valuable tool for studying problems where traditional perturbation methods may fail. v. Physical interpretability: HAM allows for the incorporation of physical parameters and constraints directly into the solution process. This facilitates a deeper understanding of the underlying physical phenomena and provides a physical interpretation of the solution. 
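As a concrete illustration of the recursion and of the convergence-control parameter ℏ, the sketch below applies HAM to a deliberately simple toy problem, y' + y^2 = 1 with y(0) = 0 (exact solution tanh ξ), using L[y] = y' and a zero initial guess. It is only meant to show the mechanics of the method, not the coupled system treated in the paper.

```python
import sympy as sp

x, hbar = sp.symbols("x hbar")

def ham_series(order=6, h_val=-1):
    """HAM for the toy problem y' + y**2 = 1, y(0) = 0, with L[y] = y' and y0 = 0.
    Each y_m solves L[y_m - chi_m*y_{m-1}] = hbar*R_m with y_m(0) = 0."""
    y = [sp.Integer(0)]                                   # initial guess y0
    for m in range(1, order + 1):
        chi = 0 if m == 1 else 1
        # R_m = y_{m-1}' + sum_k y_k*y_{m-1-k} - (1 - chi_m)
        Rm = sp.diff(y[m - 1], x) + sum(y[k] * y[m - 1 - k] for k in range(m)) - (1 - chi)
        anti = sp.integrate(Rm, x)                        # L^{-1} is integration in x
        y.append(sp.expand(chi * y[m - 1] + hbar * (anti - anti.subs(x, 0))))
    return sp.expand(sum(y).subs(hbar, h_val))

approx = ham_series(order=6, h_val=-1)
print(approx)                                  # x - x**3/3 + 2*x**5/15 for hbar = -1
print(float(approx.subs(x, sp.Rational(1, 2))), float(sp.tanh(0.5)))
```

Varying h_val away from −1 changes the partial sums and hence the region over which the truncated series stays close to tanh ξ, which is exactly the convergence-control behaviour described in item i above.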
Validation To validate the results of the current analysis, the HAM results are matched with earlier established results, as shown in Table 2. Closely matching results are obtained, which validates the correctness of the current analysis. Table 2. Comparison of current results for −θ′(0) with established results (columns: Pr, Chen 58, Zaimi et al. 59, Sithole et al. 60). Discussion of results This segment deals with the influences of the emerging factors on the various flow profiles using numerous figures. Additionally, the impressions of the emerging factors on the skin friction, Sherwood, Nusselt, and density numbers are exhibited by means of tables. The default values of the embedded factors are shown in Table 1. Figures 2, 3 demonstrate the consequence of the Casson factor ( β ) on velocity ( f ′ (η) ) and temperature ( θ (η) ). The increasing β reduces both f ′ (η) and θ(η). It is well known that the increasing β expands the fluid's viscosity, which causes a reduction in the velocity of the fluid. Therefore, the growth in β diminishes the fluid velocity. Also, the increasing β reduces the yield stress, which consequently weakens the thickness of the thermal boundary layer. Therefore, the increasing β reduces the thermal profile of the Casson fluid flow, as displayed in Fig. 3. Figures 4, 5 display the consequences of M on f ′ (η) and θ (η) , respectively. The growing M reduces f ′ (η) while increasing θ (η). The greater M increases the dragging force on the extending sheet surface and weakens the velocity panels. This effect occurs due to the Lorentz force that opposes the motion of the fluid particles. Thus, the increasing magnetic parameter reduces the Casson fluid flow, whereas the increasing dragging force on the elongating sheet surface upsurges the thickness of the thermal layer at the boundary. The intensification of the thermal boundary layer results in augmentation of the fluid temperature. Therefore, the growing magnetic factor augments the thermal panel. Figure 6 demonstrates the impact of the radiative factor ( Rd ) on θ(η). The increasing thermal radiation factor significantly augments θ(η). It is obvious that the aggregating Rd upsurges the thermal panels. This outcome is due to the fact that as Rd increases, the Rosseland radiative absorptivity ( k * ) decreases (from the definition of Rd ), which results in an augmentation of the rate of radiative thermal flow and escalates the temperature of the Casson fluid flow. Hence, the aggregating thermal radiation factor increases θ(η). Figure 7 depicts the impression of the Prandtl number ( Pr ) on θ (η). The thermal distribution, along with the thermal layer at the boundary, monotonically declines as Pr rises. The reason behind this effect is that growth in Pr corresponds to a weaker thermal diffusivity and hence a thinner thermal boundary layer. Figures 8, 9 show the influence of the Brownian motion factor ( Nb ) upon θ(η) and φ(η) , respectively. The thermal characteristics augment while the concentration distribution declines with a hike in Nb. Here, we can see that θ (η) grows as the Brownian motion factor Nb increases. This phenomenon shows that Brownian motion, which produces micro-mixing and increases a nanofluid's thermal conductivity, is primarily responsible for the increasing Casson fluid flow temperature (see Fig. 8). Conversely, the greater Nb reduces the concentration profile significantly. The reason is that when particle Brownian motion increases, the fluid moves irregularly and is vigorously mixed, which causes degeneration of the nanoparticle concentration distribution of the Casson fluid, as displayed in Fig. 9. Figures 10 and 11 depict the impression of the thermophoresis factor ( Nt ) on θ(η) and φ(η) , respectively. Both these profiles augment with the increasing thermophoresis factor ( Nt ). Increased Nt values result in an enrichment of the thermophoretic force, which in turn leads nanoparticles to diffuse into the surrounding fluid owing to temperature gradients, thickening the thermal and concentration boundary layers. Figure 12 indicates the impact of the chemical reaction factor ( σ ) on φ(η). The higher values of σ reduce φ(η). Physically, this stands to reason, as the destructive chemical reaction speeds up the rate at which reactant species decompose and thus reduces φ(η). Figure 13 displays the influence of the activation energy factor ( E ) on the concentration profile. The upsurge in E augments the concentration panels. Increasing E retards the modified Arrhenius function and thus weakens the destructive chemical reaction, which generates a high concentration within the boundary layer; hence the concentration panels are enhanced. Figure 14 exhibits the impact of Sc on φ(η). An upsurge in Sc retards φ(η). Greater values of Sc indicate that the fluid has a lower chemical molecular diffusivity, or that mass transport contributes less to diffusion. Therefore, with growth in Sc , the thickness of the concentration boundary layer decreases. 
Greater species diffusion takes place with lower values of Sc , and the thickness of the concentration layer at the boundary rises. It follows that, under such an environment, a diffusing species with a lower Schmidt number must be used to improve the concentration profile in the medium, according to chemical engineering designers. Figure 15 indicates the influence of the Peclet number ( Pe ) on the microorganism profile. The growing Peclet number reduces the microorganism profile. There is an inverse relation between the Peclet number and the microorganism diffusivity, and there is a direct relation between the Peclet number and the cell swimming speed and chemotaxis constant. Because the Peclet number is associated with the microorganism diffusivity, a greater Peclet number corresponds to a reduced microorganism diffusivity, and as a result the density profile reduces. Therefore, the microorganism profile of the Casson fluid flow diminishes for a higher Peclet number. Figure 16 indicates the impact of Lb on the microorganism profile ( χ(η) ). The increasing value of Lb reduces the microorganism profile. Table 3 shows the influence of ω , Rb , Rc and M on the surface drag of the Casson fluid flow. From Table 3, it is found that the increasing ω and Rc reduce the surface drag of the Casson fluid flow. Conversely, the aggregating values of Rb and M augment the surface drag of the Casson fluid flow. Table 4 shows the influence of Nt , Nb , Rd and S 1 on the rate of thermal flow. From Table 4, it is found that the boosting values of Nt and Nb diminish the rate of thermal flow of the fluid, whereas the increasing Rd and S 1 increase the rate of thermal flow of the fluid. Table 5 shows the consequences of Nt , Nb , Sc and S 2 on the rate of mass transfer. From here, it is noticed that increasing Nt reduces the mass transfer rate of the Casson fluid flow, while the increasing Nb , Sc and S 2 augment the mass transfer rate of the fluid. Table 6 shows the impact of Lb , Pe and S 3 on the density number of the Casson fluid flow. From this table, it is found that the augmenting Lb and S 3 increase the density number of the Casson fluid, while the upsurge in Pe diminishes the density number of the Casson fluid flow. Conclusion In this section, the final outcomes of the laminar, steady, and incompressible MHD flow of a non-Newtonian Casson fluid over a stratified stretching sheet are presented. A semi-analytical investigation along with validation against previous results has been carried out. The final outcomes are listed as: i. When the Casson and magnetic factors increase, the velocity distribution of the Casson fluid flow diminishes. ii. As the Casson factor and Prandtl number rise, the temperature distribution of the fluid flow diminishes. On the other hand, when the thermal radiation, magnetic, and Brownian motion factors rise, the temperature distribution of the fluid also rises. iii. The concentration distribution of the fluid is reduced by a higher Schmidt number, Brownian motion factor, and chemical reaction factor, whereas the increased thermophoresis and activation energy parameters enhance the Casson fluid flow concentration. iv. The microorganism profile is reduced when the bioconvection Peclet and Lewis numbers rise. v. It is perceived that when the bioconvective Rayleigh number and mixed convection factor increase, the surface drag of the Casson fluid flow decreases, while the rising buoyancy ratio factor and magnetic factor increase the surface drag. vi. 
The rate of heat transmission is decreased by the rising Brownian motion and thermophoresis factors, whereas the rate of heat transfer is accelerated by the rising thermal radiation and thermal stratification factors. vii. It is found that raising the thermophoresis factor lowers the mass transmission rate, while increasing the Brownian factor, Schmidt number, and concentration stratification factor raises the mass transfer rate. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
6,105.4
2023-07-11T00:00:00.000
[ "Physics", "Engineering" ]
Generation and validation of a PITX2–EGFP reporter line of human induced pluripotent stem cells enables isolation of periocular mesenchymal cells PITX2 (Paired-like homeodomain transcription factor 2) plays important roles in asymmetric development of the internal organs and symmetric development of eye tissues. During eye development, cranial neural crest cells migrate from the neural tube and form the periocular mesenchyme (POM). POM cells differentiate into several ocular cell types, such as corneal endothelial cells, keratocytes, and some ocular mesenchymal cells. In this study, we used transcription activator–like effector nuclease technology to establish a human induced pluripotent stem cell (hiPSC) line expressing a fluorescent reporter gene from the PITX2 promoter. Using homologous recombination, we heterozygously inserted a PITX2–IRES2–EGFP sequence downstream of the stop codon in exon 8 of PITX2. Cellular pluripotency was monitored with alkaline phosphatase and immunofluorescence staining of pluripotency markers, and the hiPSC line formed normal self-formed ectodermal autonomous multizones. Using a combination of previously reported methods, we induced PITX2 in the hiPSC line and observed simultaneous EGFP and PITX2 expression, as indicated by immunoblotting and immunofluorescence staining. PITX2 mRNA levels were increased in EGFP-positive cells, which were collected by cell sorting, and marker gene expression analysis of EGFP-positive cells induced in self-formed ectodermal autonomous multizones revealed that they were genuine POM cells. Moreover, after 2 days of culture, EGFP-positive cells expressed the PITX2 protein, which co-localized with forkhead box C1 (FOXC1) protein in the nucleus. We anticipate that the PITX2–EGFP hiPSC reporter cell line established and validated here can be utilized to isolate POM cells and to analyze PITX2 expression during POM cell induction. Neural crest cells (NCCs) 2 are multipotent stem cells generated at the border between the neural tube and surface ectoderm during early embryonic development in vertebrates (1). In eye development, cranial NCCs migrate to form the periocular mesenchyme (POM). POM cells in turn differentiate into a wide variety of cells, such as corneal endothelial cells (2), keratocytes, iris stromal cells, ciliary muscle cells, trabecular meshwork cells, and scleral cells (3,4). In addition, peripheral tissues of the eyes, such as cartilage, bone, dermis, and fat, which are connected to the extraocular muscles, also originate from POM cells (5). PITX2 (paired-like homeodomain transcription factor 2) is one of the homeobox transcription factors that play key roles during embryogenesis. PITX2 is crucial in left-right asymmetry in visceral organs (6), such as the heart (7-9), lungs (7), gut (8,10), and stomach, as well as in eye development (11). In the eyes, PITX2 is expressed in the cranial NC-derived POM cells, together with other transcription factors, such as FOXC1, FOXC2 (12), and LMX1B (13), and plays important roles in ocular anterior segment development (14). Specifically, PITX2 is involved in the development of corneal endothelial cells (15), keratocytes (16), iris stromal cells (17), ciliary muscles, trabecular meshwork cells, scleral cells, mesenchymal cells of the ocular glands, and peripheral connective tissues connected to the extraocular muscles (18). 
Mutation of PITX2 or FOXC1 causes Axenfeld-Rieger syndrome, which manifests with dysgenesis of the anterior segment of the eyes as well as mild tooth malformation and craniofacial dysmorphism (19,20). It has been suggested that POM cells can be induced from human induced pluripotent stem cells (hiPSCs) (21). We recently reported that hiPSCs form self-formed ectodermal autonomous multizones (SEAMs) from which ocular cells, such as corneal epithelial cells, conjunctival epithelial cells, lens cells, retinal cells, and NCCs, can be derived (22,23). In SEAMs, various types of cells mimic their differentiation process to form a whole eye structure in vitro. We expect that SEAMs contain PITX2-expressing POM cells differentiated from NCCs. However, it is nearly impossible to isolate PITX2-expressing POM cells from the various cell types in culture systems without a reporter line, because POM cell-specific cellsurface markers have not been reported to date. Transcription activator-like effector nucleases (TALENs) are restriction enzymes that generate site-specific double-strand breaks in DNA, with lower nonspecific cleavage activity (24) than CRISPR-Cas (25), through binding of a TAL effector to specific DNA regions (26). The double-strand breaks are repaired through nonhomologous end joining or homologous recombination. TALENs can be used for specific gene knockout or knockin. We previously demonstrated that a p63 knockin reporter line generated with TALEN technology could be used for detailed analysis and isolation of p63-positive cells in SEAMs (27). Here, we report the generation of a PITX2 reporter line of hiPSCs harboring an IRES2-EGFP sequence, using TALEN technology. We validated the reporter line in a system in which PITX2 expression is induced in pluripotent stem cells. In addition, we were able to isolate and analyze POM cells. The PITX2 reporter hiPSC line generated in this study allows robust induction and isolation of POM-derived cells and insights into the detailed mechanisms of induction of POM cells and POM cellderived cells. PITX2 is expressed in POM cells in the mouse embryo We evaluated Pitx2 expression in POM cells in mouse embryos at E10.5 (Fig. S1A) and E12.5 (Fig. S1B). At both stages, cells ranging from the periocular sites to primordial cells of the cornea were positive for Pitx2 and Foxc1, but negative for Sox10, a negative marker of POM cells. This finding indicated that POM cells exist in the periocular sites in E10.5 and E12.5 mouse embryos. Evaluation of GFP fluorescence intensity driven by the PITX2 promoter To achieve strong GFP fluorescence upon forced expression in 293T cells, various GFP variants, polycistronic sequences, and polyadenylation (poly(A)) signals were evaluated. First, three GFPs-EGFP, EmGFP, and TurboGFP-were inserted downstream of the PITX2 and 2A peptide sequences in pEF5/ FRT/V5-DEST. There were no obvious differences in intensity between these three GFPs (Fig. S2A). As polycistronic sequences, 2A peptides and IRES2 were evaluated. Fluorescence intensity was stronger when EGFP was located downstream of IRES2 than when it was downstream of the 2A peptide sequences (Fig. S2B). As for polyadenylation, poly(A) signals of bovine growth hormone, herpes simplex virusthymidine kinase, SV40, PITX2 3Ј-UTR, and ␤-actin 3Ј-UTR were evaluated. SV40 and PITX2 3Ј-UTR yielded slightly stronger EGFP intensity than the other poly(A) signals (Fig. S2C). 
Based on our findings, we used a donor vector containing EGFP, IRES2, and SV40 as the GFP variant, polycistronic sequence, and polyadenylation signal, respectively, for establishing a PITX2-GFP reporter hiPSC line. Design of a TAL effector and donor vector There are six splice variants of PITX2 and three isoforms (Fig. 1A). Exon 8 is common to all PITX2 variants. Thus, we added the artificial sequences downstream of PITX2 exon 8. As shown in Fig. 1B, a TAL effector recognition site in the left arm was located immediately upstream of a stop codon of PITX2, and a right-arm recognition site was designed after the PITX2 stop codon. The donor vector was designed so that IRES2-EGFP-SV40 poly(A) followed the left arm of PITX2 with a silent mutation to avoid the generation of double-strand breaks after successful site-specific double-strand break generation by the TALEN. Generation of a PITX2-EGFP knockin reporter hiPSC line After electroporation of the TALEN vector and donor vector into 201B7 hiPSCs, the cells were seeded on DR4 mouse embryonic fibroblasts (MEFs) for drug selection. Knockin cells were screened on G418 sulfate as outlined in Fig. 2A. Based on PCR results, colony 8 (IRES2) was selected for further recloning analysis (Fig. 2B). Colonies 5 and 7 (2A) seemed to be successfully transfected; however, they were not further analyzed because the 2A peptides yielded lower EGFP intensity as shown in Fig. S2B. Colony 8 was recloned, and six colonies were analyzed by PCR, which revealed that the construct was heterozygously introduced in all six colonies. Colony 8-2 produced a slightly stronger band intensity than the other colonies ( Fig. 2C) and was therefore chosen as the best candidate PITX2-EGFP hiPSC reporter line for further analysis. The genome sequence of this line was confirmed using Sanger sequencing (Fig. S3). Pluripotent stem cell markers and typical SEAM phenotypes in the PITX2-EGFP knockin reporter line After passaging the PITX2 knockin hiPSC line for feeder-free culture, the cells formed round, normal colonies on iMatrix-511, as shown in Fig. 3A. In an alkaline phosphatase (ALP)staining assay, PITX2 knockin hiPSC clone 8-2 showed a staining intensity and color similar to those of 201B7 WT hiPSCs (Fig. 3B). Expression of the pluripotent markers NANOG, OCT3/4, TRA-1-60, and SSEA-4 was evaluated by immunofluorescence analysis using specific antibodies. All these markers showed strong expression from the center to the borders of the colonies (Fig. 3C). We successfully induced SEAM structures consisting of four zones using the PITX2 knockin hiPSC line as reported previously (22,23). After SEAM induction, markers of corneal epithelial cells (p63, PAX6) lens cells (p63, ␣-crystallin), neuroretina (CHX10), and retinal pigment epithelial cells (MITF) were stained (Fig. 3D). The cells in zone 3 were p63-and PAX6-positive and showed cobblestone morphology, which indicated that they were corneal epithelial cells. The aggregated cells at the end of zone 2 were p63-and ␣-crystallin-positive lens cells. The cells in the inner area of zone 2 were CHX10-positive neuroretinal cells. The cells aggregated in the outer area of zone 2 were MITF-positive retinal pigment epithelial cells. All these structures were similar to those induced in 201B7 hiPSCs. We also evaluated gene expression patterns in SEAMs (Fig. 3E). Expression of TUBB3, a neuron marker, was substantially higher in zone 1. Expression of the neural crest cell marker SOX10 and the neural retina marker RAX was higher in zone 2. 
PAX6 was expressed in all PITX2 reporter line from human pluripotent stem cells zones. Epithelial markers DN-p63, CDH1, and KRT18 were highly expressed in zones 3 and 4. The lens cell marker CRYAA was the most strongly expressed in zones 3 and 4. This expression pattern was similar to that of 201B7 hiPSCs (22). Validation of the PITX2-EGFP knockin reporter line To confirm that EGFP is expressed in the PITX2 knockin hiPSC line, POM cells were induced by a combination of reported induction methods (28 -31) (Fig. 4A). Aggregated cells were detected ϳ20 days after the start of induction (Fig. 4B). The aggregated cells showed EGFP signals at day 20 ( Fig. 4C). We determined EGFP and PITX2 protein expression levels using whole cell lysates by Western blotting analyses. PITX2 expression in knockin iPSCs was nearly the same as that in clone 8-2 PITX2 knockin iPSCs (Fig. 4D), and we observed robust protein expression of EGFP in PITX2 knockin iPSC clone 8-2. Next, we conducted immunofluorescence staining using anti-PITX2 antibody to evaluate whether EGFP and PITX2 are expressed simultaneously. As shown in Fig. 4E, EGFP fluorescence and PITX2 fluorescent staining were detected simultaneously. Next, we sorted and collected EGFPpositive and -negative cells by FACS (Fig. 4F). We analyzed PITX2 expression levels by quantitative RT-PCR (qRT-PCR). The population of EGFP-positive cells exhibited high PITX2 expression, whereas EGFP-negative cells hardly expressed PITX2. Moreover, FOXC1 and TFAP2B, which are markers of POM cells, were significantly more strongly expressed in EGFP-positive cells when compared with EGFP-negative cells. Conversely, SOX10, which is a negative marker of POM cells, was more strongly expressed in EGFP-negative than in EGFPpositive cells. On the other hand, the POM markers FOXC2, LMX1B, NGFR, LMX1B, and COL8A2 were not highly expressed in EGFP-positive cells (Fig. 4G). Isolation and characterization of POM cells To acquire more genuine POM cells, we tried a SEAM induction method (Fig. 5A). Typical, aggregated cells emerged in SEAM zone 2 after 14 days of induction (Fig. 5B). EGFP signals were confirmed at the location of aggregated cells in SEAM zone 2 (Fig. 5C). After sorting EGFP-positive cells (Fig. 5D), they were analyzed for marker expression by qRT-PCR. All positive POM cell markers were significantly increased in EGFPpositive compared with EGFP-negative cells, and there was no difference of SOX10 expression level between them (Fig. 5E), which revealed that the EGFP-positive cells were POM cells. They were cultivated for 2 days in a culture plate using differentiation medium (DM) with Y-27632, epidermal growth factor (EGF), basic fibroblast growth factor (bFGF), and retinoic acid (Fig. 5F). Cells expressed PITX2 protein, which co-localized with FOXC1 protein in the nucleus (Fig. 5G). Discussion Various GFP variants, polycistronic sequences, and poly(A) signals were evaluated in a stepwise manner to establish a PITX2-EGFP knockin hiPSC reporter line with optimum GFP expression. The GFP variants EGFP, EmGFP, and TurboGFP did not show a difference in fluorescence intensity. Unexpectedly, the IRES2 sequence yielded stronger EGFP fluorescence than the 2A peptides. The 2A peptides were added at the C terminus of PITX2, and the proline added at the N terminus of GFP might have affected GFP expression or fluorescence. Alternatively, unknown mechanisms might determine the compatibility between target gene and following polycistronic sequences. 
The various polyadenylation signals tested yielded slightly different EGFP expression. Interestingly, among them, the poly(A) signals of SV40 and the PITX2 3′-UTR had the strongest ability to stabilize PITX2 mRNA. The efficiency of PITX2-IRES2-EGFP knockin was 1 of 12. This was more or less as expected, but there is room for improving the knockin efficiency, for example by increasing the vector concentrations and optimizing the electroporation program. The reporter line established in this study showed normal pluripotency based on ALP staining and immunofluorescence staining of the markers analyzed in this study. We confirmed that the PITX2 knockin hiPSC line formed typical SEAMs and showed a robust ability to induce lens cells, neuroretinal cells, and retinal pigment cells, although it induced fewer corneal epithelial cells than 201B7 hiPSCs did (Fig. 3D). Such a difference in differentiation tendency often occurs among pluripotent stem cell lines (32). We were able to establish PITX2-expressing cells by combining published approaches: NC induction by WNT and low BMP signaling (28), corneal endothelial cell induction by EGF treatment, and FGF signaling for POM cell-derived tissue induction (29). Unfortunately, the cells did not show perfect gene expression patterns of POM cells, although they showed high FOXC1 mRNA expression and low SOX10 mRNA expression, which are characteristic of POM cells (12,33). However, PITX2-expressing cells induced by the SEAM method did show perfect gene expression patterns of POM cells based on qRT-PCR results (Fig. 5E). Moreover, COL8A2 and COL8A1, which are markers of POM and corneal endothelial cells, were significantly increased in EGFP-positive cells. This might indicate that they are POM cells and have a potential to differentiate into POM-derived cells. In the near future, we would like to utilize this cell line to further analyze POM cells in SEAM-inducing and other culture conditions. In conclusion, we successfully generated and validated a PITX2-IRES2-EGFP knockin hiPSC line, and we were able to isolate PITX2-expressing POM cells. These cells showed higher POM marker expression than EGFP-negative cells. The POM cells sorted in this study are a reliable tool for detailed analysis of POM-derived ocular cells. Our reporter line provides a technical platform for testing induction methods and for detailed analysis of PITX2-expressing cells and their derivatives.
Figure 2 legend (fragment): Odd numbers indicate samples for which 2A peptides were used, and even numbers indicate samples for which IRES2 was used as a polycistronic sequence. Binding sites for primer pairs 1 and 2 are indicated as red and blue arrows, respectively. The amplicon size of primer pair 1 is 598 bp for knockin, whereas the amplicon sizes of primer pair 2 are 3677 bp (3150 bp when 2A is used) for knockin and 1191 bp for WT. Arrows indicate the amplicons that prove successful knockin in the candidate clones. C, PCR analysis upon recloning of candidate clone 8; arrows (clone 8-2) indicate the amplicons that prove successful knockin among the recloned candidates.
Immunostaining of mouse embryos All animal experimental protocols in this study were in accordance with the Association for Research in Vision and Ophthalmology Statement for the Use of Animals in Ophthalmic and Visual Research, with prior approval from the Animal Ethics Committee of Osaka University.
E10.5 and E12.5 ICR mice were purchased from Japan SLC (Shizuoka, Japan). After the mice were euthanized with pentobarbital sodium, they were perfused with PBS. Mouse embryos were collected and were also euthanized. They were embedded in Tissue-Tek O.C.T. compound (Sakura Finetek Japan, Tokyo, Japan), frozen on dry ice, and stored at −80 °C. The mouse embryos were sectioned and incubated with blocking solution (5% normal donkey serum, 0.3% Triton X-100 in TBS). Then the sections were incubated with primary antibodies against PITX2 (Ab55599; Abcam, Cambridge, MA), FOXC1 (8758S; Cell Signaling Technology), and SOX10 (sc-17342; Santa Cruz Biotechnology) at 4 °C overnight, followed by secondary antibodies conjugated with Alexa Fluor 488, Alexa Fluor 568, Alexa Fluor 594, and Alexa Fluor 647 (Thermo Fisher Scientific) for 1 h. The sections were counterstained with Hoechst 33342 (Thermo Fisher Scientific) and visualized and imaged under an Axio Observer D1 microscope (Carl Zeiss). Analysis of fluorescence intensity 293T cells were seeded in a 6-well plate at 3.0–4.0 × 10⁵ cells/well. Constructs for forced expression in 293T cells were synthesized using Gateway LR Clonase II (Thermo Fisher Scientific) and the pEF5/FRT/V5-DEST vector (Thermo Fisher Scientific). FuGENE HD transfection reagent (Promega, Madison, WI) and 1.3 μg of each plasmid harboring PITX2, and EGFP, EmGFP, or TurboGFP as a fluorescent reporter, 2A or IRES2 sequences as a polycistronic sequence, and bovine growth hormone, herpes simplex virus-thymidine kinase, SV40, PITX2 3′-UTR, and β-actin 3′-UTR as a polyadenylation signal were used in the experiments. After 24 h, fluorescence was evaluated and imaged using the Axio Observer D1. The experiments regarding recombinant DNA and genome editing were approved by the research ethics committee of Osaka University and were performed in accordance with guidelines of Osaka University. hiPSC culture The hiPSC line 201B7 was kindly provided by the Center for iPS Cell Research and Application, Kyoto University (Kyoto, Japan). The cells were cultured on dishes seeded with MEFs in Dulbecco's modified Eagle's medium/F-12 (Thermo Fisher Scientific) supplemented with 20% knockout serum replacement (Thermo Fisher Scientific), 0.1 mM nonessential amino acids (Thermo Fisher Scientific), 0.1 mM 2-mercaptoethanol (Thermo Fisher Scientific), and 4 ng/ml bFGF (Fujifilm Wako Pure Chemical Corporation, Tokyo, Japan), with or without 10 μM Y-27632 (Fujifilm Wako Pure Chemical Corporation), a selective inhibitor of Rho-associated coiled coil-forming protein kinases, to avoid apoptosis induction by electroporation. For feeder-free culture, the cells were cultured on dishes coated with iMatrix-511 (0.5 μg/cm²; Nippi, Tokyo, Japan) in StemFit medium (Ajinomoto, Tokyo, Japan). Establishment of the PITX2-IRES2-EGFP knockin hiPSC line TALEN plasmids (2.5 μg) and donor vector (5 μg) were mixed in the solution included in the P3 primary cell 4D-Nucleofector™ X kit (Lonza, Basel, Switzerland) and were electroporated into 1.5 × 10⁶ hiPSCs using program CB150 in the 4D-Nucleofector™ system (Lonza). The electroporated cells were seeded on DR4 MEFs (ASF-1001; Applied StemCell, Milpitas, CA) and were cultured for 5 days for recovery. The cells were cultured in the presence of G418 sulfate for 8 days and then in the absence of G418 sulfate for 4 days.
Then the cells were picked up and transferred to a 12-well plate and cultured for 11 days. The cells were used for the screening of knockin colonies by end point PCR. Then the candidate knockin clone 8 was reseeded and screened as described above. The candidate knockin clone 8-2 was transferred to a 10-cm dish. Eventually, we used the heterozygous knockin clone 8-2, whose knockin sequence was confirmed by sequencing on an Applied Biosystems 3100 Genetic Analyzer (Thermo Fisher Scientific). End point PCR for the screening of knockin colonies Candidate clones were picked up and cultured in single wells of a 96-well plate. After 1 day, floating cells were harvested and lysed. The genomic DNA was extracted with a NucleoSpin tissue kit (Macherey-Nagel, Düren, Germany) and was used for end point PCR using Tks Gflex TM DNA polymerase (Takara Bio, Shiga, Japan) and two primer pairs. The primers were as follows: primer pair 1: forward: TGCATTCTAGTTGTGGTT-TGTCC: reverse: AGTTTCTCTGGTGGATGCAATGA, thermal cycles: 94°C for 1 min and 30 cycles of 98°C for 10 s, 60°C for 15 s, 68°C for 30 s; primer pair 2: forward: TAGTAATCT-GCACTGTGGCATCT, reverse: AGTTTCTCTGGTGGAT-GCAATGA, thermal cycles: 94°C for 1 min and 30 cycles of 98°C for 10 s, 60°C for 15 s, 68°C for 2 min. Sanger sequencing The cells were harvested and genomic DNA was extracted using a NucleoSpin tissue kit (Macherey-Nagel). Sanger sequencing was conducted using Applied Biosystems 3730 DNA Analyzer and a BigDye TM Terminator version 3.1 cycle sequencing kit (both from Thermo Fisher Scientific), according to the manufacturer's instruction. ALP staining ALP staining was conducted using a TRACP and ALP double-stain kit (Takara Bio). Briefly, the cells were washed with PBS, fixed with fixation solution, and washed with water. The cells were treated with ALP substrate solution at 37°C for 15 min and then washed twice with water. The cells were photographed using an EVOS FL auto imaging system (Thermo Fisher Scientific). Induction of PITX2 Cranial neural crest and neural crest cells were induced from hPSCs as previously described (28 -31). Using these cells as a reference, we established a POM cell induction method. hiPSCs were seeded in a 6-well plate coated with iMatrix-511 (0.5 g/cm 2 ) at 1300 -1500 cells/well. After 8 days of cultivation in StemFit medium, the medium was replaced with DM to start induction, and the cells were cultured for 2 days. Then the medium was replaced with DM containing 0 -10 ng/ml BMP4 (R&D Systems, Minneapolis, MN) and 0 -10 M CHIR99021 PITX2 reporter line from human pluripotent stem cells (Sigma-Aldrich), and the cells were cultured for 2 days. Then the medium was replaced with DM without any additional reagents, and the cells were cultured for 4 days. Then the medium was replaced with DM containing 20 ng/ml EGF (R&D Systems) and 10 ng/ml bFGF, and the cells were cultured until analysis. Western blotting analysis The cells were lysed with radioimmune precipitation assay lysis and extraction buffer (Thermo Fisher Scientific) containing protease inhibitor mixture set I (Fujifilm Wako Pure Chemical Corporation) and sonicated. Protein concentrations were measured with a BCA protein assay kit (Thermo Fisher Scientific). The samples were analyzed using the WES system (Pro-teinSimple, San Jose, CA). Antibodies against GAPDH (sc-32233; Santa Cruz Biotechnology), PITX2 (Ab55599; Abcam), and GFP (sc-8334; Santa Cruz Biotechnology) were used. 
Other reagents, including secondary antibodies, were used following the manufacturer's instructions. Flow cytometry hiPSCs were cultured for 20 days and then dissociated with Accutase for 30 min. Accutase was removed by centrifugation, and the cells were resuspended in PBS. The cells were analyzed and sorted using an SH800 cell sorter (Sony, Tokyo, Japan). Aggregated cells were gated and excluded. Single cells were displayed in the R-phycoerythrin (PE) and FITC channels to exclude autofluorescent cells. The cells were selected using polygon gates; the cells in the purple polygon were counted as EGFP-positive cells, whereas cells in the blue polygon were considered EGFP-negative cells. Cultivation of EGFP-positive cells EGFP-positive cells were sorted by FACS. The cells were attached to a plate coated with 0.5 μg/cm² iMatrix-511 and 1.56 μg/cm² fibronectin by centrifugation at 210 × g for 4 min. The cells were cultivated in DM with 10 μM Y-27632, 20 ng/ml EGF, 10 ng/ml bFGF, and 50–100 nM retinoic acid. After 2 days of cultivation, phase contrast images of the cells were captured using the Axio Observer D1.
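The qRT-PCR comparisons reported above (e.g., PITX2, FOXC1, and TFAP2B in EGFP-positive versus EGFP-negative cells) rest on relative quantification, but this excerpt does not state the quantification scheme or the reference gene. The following is therefore only a minimal sketch of the commonly used 2^(−ΔΔCt) calculation, assuming GAPDH as the reference gene and using illustrative Ct values; it is not a description of the authors' actual analysis pipeline.

```python
# Hedged sketch of relative expression by the common 2^(-ddCt) method.
# The reference gene (GAPDH) and all Ct values below are assumptions/illustrative;
# the excerpt does not specify how PITX2 expression was quantified.
def fold_change(ct_target_s: float, ct_ref_s: float,
                ct_target_c: float, ct_ref_c: float) -> float:
    """Fold change of a target gene in 'sample' vs 'control', normalized to a reference gene."""
    d_ct_sample = ct_target_s - ct_ref_s      # delta Ct in the sample (e.g., EGFP-positive)
    d_ct_control = ct_target_c - ct_ref_c     # delta Ct in the control (e.g., EGFP-negative)
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Illustrative values: PITX2 vs GAPDH in EGFP-positive (sample) and EGFP-negative (control) cells.
print(fold_change(ct_target_s=24.0, ct_ref_s=18.0,
                  ct_target_c=30.0, ct_ref_c=18.0))
# -> 64.0, i.e. ~64-fold higher PITX2 in EGFP-positive cells in this made-up example
```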
5,419.4
2020-02-07T00:00:00.000
[ "Biology" ]
Single extreme low dose/low dose rate irradiation causes alteration in lifespan and genome instability in primary human cells To investigate the long-term biological effect of extreme low dose ionising radiation, we irradiated normal human fibroblasts (HFLIII) with carbon ions (290 MeV u−1, 70 keV μm−1) and γ-rays at 1 mGy (total dose) once at a low dose rate (1 mGy 6–8 h−1), and observed the cell growth kinetics up to 5 months by continuous culturing. The growth of carbon-irradiated cells started to slow down considerably sooner than that of non-irradiated cells before reaching senescence. In contrast, cells irradiated with γ-rays under similar conditions did not show significant deviation from the non-irradiated cells. A DNA double strand break (DSB) marker, γ-H2AX foci, and a DSB repair marker, phosphorylated DNA-PKcs foci, increased in number when non-irradiated cells reached several passages before senescence. A single low dose/low dose rate carbon ion exposure further raised the numbers of these markers. Furthermore, the numbers of foci for these two markers were significantly reduced after the cells became fully senescent. Our results indicate that high linear energy transfer (LET) radiation (carbon ions) causes different effects than low LET radiation (γ-rays) even at very low doses and that a single low dose of heavy ion irradiation can affect the stability of the genome many generations after irradiation. Ever since the seminal discovery of cellular senescence of human cells by Hayflick and Moorhead (1961), their finding has contributed substantially to understanding the mechanism of aging (Shay and Wright, 2000). Some of the molecular regulators associated with senescence have been identified (Herbig et al, 2004) these molecules include p21 and p16. Many years before Heyflick's finding, Henshaw et al (1947) showed that exposure to ionising radiation (IR) accelerated the process of aging in animal experiments. Along with other evidence, genome stability has been regarded as an important factor in the aging process (Vijg and Suh, 2006). However, premature aging by IR was not clearly demonstrated in the cell culture model demonstrated by Hayflick (Holliday, 1991). In contrast, one fairly recent study indicated the extension of lifespan of human embryo cells in culture with repeated daily doses of low level g-rays, although these irradiated cells had a higher number of chromosomal genome instability than non-irradiated control (Suzuki et al, 1998). Recently, Suzuki et al (2005) demonstrated a reduction in the lifespan of normal human fibroblasts exposed to chronic low doses of heavy ion particles, whereas no reduction in lifespan under similar dose/dose rate of g-rays was observed. The dose rate they used was about the level astronauts would receive during their space travel. To further clarify this important subject of in vitro senescence phenomenon with IR at low doses, we exposed normal human fibroblasts to a single dose of low dose/low dose rate high linear energy transfer (LET) heavy ion irradiation and observed the cultured cells up to 5 months. Our results indicate a clear reduction in the cell's lifespan after a single dose of carbon ion irradiation, while no reduction in lifespan was observed in g-irradiated cells under similar conditions. The markers of DNA double strand breaks (DSBs) were also examined in these cells as a recent study indicated the accumulation of these markers in senescent cells (Sedelnikova et al, 2004). 
Irradiation Cells were inoculated into a 25 cm 2 flask and cultured until at a confluent state. Medium was changed and the flasks were filled with new medium before irradiation. Low dose (1 mGy) and low dose rate (1 mGy 6 -8 h À1 ) carbon ion (290 MeV u À1 original energy, 70 keV mm À1 ) irradiation was performed at Heavy Ion Medical Accelerator in Chiba (HIMAC) biology facility at National Institute of Radiological Sciences (NIRS). g-ray irradiation was performed at a similar dose and dose rate with 137 Cs g-rays (1 mGy 6 h À1 ). As a control, non-irradiated cells were placed in HIMAC biology control room under the same conditions. Cell growth kinetics and immunofluorescence measurements Cell growth kinetics was obtained by counting the number of subcultured cells using a haemocytometer at regular intervals (about 7 days) up to 5 months. Two hours after irradiation, cells were trypsinised, counted, and then reinoculated on coverslips for immunostaining. The cells on coverslips were immunostained as described previously (Okayasu et al, 2006). We used antiphosphorylated DNA-PKcs (Thr 2609) polyclonal antibody (Sigma Genosys, Ishikari-shi, Japan) and an anti-g H2AX polyclonal antibody (Upstate, NY, USA) as primary antibodies. As secondary antibodies, we used Cy2-conjugated AffiniPure goat anti-rabbit IgG (Jackson ImmunoResearch, West Grove, PA, USA) for DNA-PKcs, and Cy3-conjugated AffiniPure donkey anti-mouse IgG for g-H2AX (Jackson ImmunoResearch). Cell growth kinetics shows early senescence in low dose carbon-irradiated cells We irradiated normal human fibroblasts with carbon ions once at 1 mGy at low dose rate (1 mGy 6 h À1 : 0.0028 mGy min À1 ), observed the cell growth kinetics for a period of 5 months, and compared the results with non-irradiated control cells. The dose rate we used was similar to the level astronauts would be exposed to in space. The growth of irradiated cells with carbon ions started to slow down much earlier than that of non-irradiated control cells reaching senescence (Figure 1). To make certain that this is a reproducible phenomenon, we repeated the same growth experiment with cells irradiated with carbon at a similar dose and dose rate. As can be seen in Figure 2, early senescence was again observed in the carbon-irradiated cells and the slowing down of cell growth started to occur around the same cell passage number (about passage 24) as in the first experiment. The data analyses indicate that the two carbon growth curves are statistically significant when compared with non-irradiated control cell growth (see figure legend). We also examined the growth of cells irradiated with low LET g-rays at a similar dose and dose rate ( Figure 2). Of interest, there was no growth disadvantage in cells irradiated with g-rays, and these cells showed a rather slight delay in the onset of senescence; however, this delay was not statistically significant (see figure legend). We have repeated the g-ray experiment and basically obtained the same result (data not shown). The number of foci for DNA DSB markers starts to increase as cells reach senescence Figure 3A shows yield of average numbers of g-H2AX foci per cell as a function of cell passage number after cells exposure to g-rays and carbon ions along with non-irradiated control. 
As g-H2AX foci are known to correspond to DNA DSBs (Paull et al, 2000;Rothkamm and Lobrich, 2003), the senescence process itself seemed to produce DSBs as the number of foci increased for all the samples at passage 22, and this phenomenon was further enhanced by IR at later passages (see passage 26), especially high LET carbon ions. In order to confirm the existence of DSBs in senescing cells, we used a phospho-specific antibody for DNA-PKcs (Thr 2609) to detect an active NHEJ-type DSB repair process (Dibiase et al, 2000;Chan et al, 2002) ( Figure 3B). The number of phosphorylation sites for DNA-PKcs started to increase in cells with carbon irradiation at passage 22, and although the number was further increased for all the samples, it significantly increased with carbon-irradiated samples (Po0.1 between control and carbon data at passage 26). Although these DSB markers could be a sensitive indicator for senescence as recently reported (Sedelnikova et al, 2004), it appears that DNA-PKcs phosphorylation is a better marker for senescence. The representative foci images for g-H2AX and DNA-PKcs are given in Figure 4A (passage 22) and Figure 4B (passage 26). However, once cells reached the full senescence stage, the numbers of these markers were significantly reduced. DISCUSSION In this report, we have shown for the first time that a single low dose/low dose rate heavy ion irradiation causes early senescence. Figure 2 HFLIII cells were irradiated with 1 mGy carbon ions (290 MeV u À1 , 70 keV mm À1 ) at 1 mGy 7.3 h À1 and with 1 mGy g-rays at 1 mGy 6 h À1 , and the cell growth was compared with that of non-irradiated control cells. The numbers in the figure indicate cell passage numbers. The cells irradiated with carbon ions senesced earlier than the non-irradiated control cells, while the cells with g-irradiation showed delayed senescence when compared to control. (*Po0.05 compared to non-irradiated control cells.) However, cells irradiated with g-rays were not statistically significant (P ¼ 0.16) when compared with non-irradiated control. Low dose/low dose rate irradiation and genome instability M Okada et al The dose and dose rate level we used (1 mGy 6 -8 h À1 ) was similar to the level astronauts would receive per day in their space exploration (about 1 mGy day À1 ). The general public could receive this dose level (1 mGy or higher), although it is not heavy ions, in a diagnostic radiology examination. In the past, a similar lifeshortening phenomenon in normal human fibroblasts was reported by Suzuki et al (2005) after many days of chronic low dose/low dose rate charged particles; however, the accumulated dose was 200 -300 mGy in their case. Thus, our finding with a single 1 mGy heavy ion exposure is unique and unexpected. We also found that a single g-ray exposure at the similar dose and dose rate did not cause life shortening, but rather led to a slight extension of their lifespan. A similar tendency was reported with chronic low dose/low dose rate g-ray exposure studies (Suzuki et al, 1998(Suzuki et al, , 2005. Our senescence data with heavy ion irradiation are consistent with the animal data with neutron irradiation found by Henshaw et al (1947) many years ago. This would make sense as neutron irradiation has been shown to have similar biological effectiveness as heavy ions (Hall, 1982). Henshaw et al also showed data with g-irradiation, but the life shortening was much less distinct. 
Our theoretical calculations indicate that in cells irradiated with carbon ions at 1 mGy 6 h−1, only one in eighteen cells would be hit. These data seem to indicate that the accelerated senescence caused by low dose carbon irradiation was a result of the bystander effect. Bystander effects are the nontargeted effects observed in cells that were not directly irradiated, but were either in contact with or received soluble signals from irradiated cells via gap junctions. Although the effect of our carbon ion irradiation was mainly caused by the bystander effect, early senescence was clearly observed when compared to the non-irradiated control and γ-irradiated cells. Sedelnikova et al (2004) showed that γ-H2AX foci accumulated in senescing human cells and in aging mice, and that these foci colocalised with DSB repair proteins such as 53bp1, Mre11, Rad50, and Nbs1. They indicated that cells accumulated persistent DNA lesions that contain unrepairable DSBs during senescence. Zhang et al (2005) also showed that foci of the histone H2A variant macroH2A increased exponentially as the cells approached senescence. We confirmed their finding with the γ-H2AX assay and further indicated that low dose heavy ion irradiation created extra unrepaired DSBs after many days of culturing; this should not be caused by the direct hit of radiation, as the sample from an earlier passage (passage 20, for example) did not show the increase. To confirm the appearance of DSB damage many days after irradiation, we used an antibody to detect the phosphorylation of DNA-PKcs, an NHEJ protein, which indicates the actual occurrence of the DSB repair process. Our data clearly revealed the passage- and irradiation-dependent appearance of this phosphorylation signal, suggesting that aged cells sustained DSBs, and that low dose heavy ion irradiation further induced new DSBs in late passages. We also analysed senescence-associated β-galactosidase, but not much difference among γ-ray irradiated, carbon ion irradiated, and non-irradiated control cells was observed. As mentioned before, γ-H2AX and DNA-PKcs foci could be more useful indicators of cell senescence than the senescence-associated β-galactosidase analysis. This would be the first time that the phospho-specific DNA-PKcs marker, alongside γ-H2AX, has been introduced as an indicator of cell senescence. If DSBs were associated with cell senescence, the senescence status of NHEJ-deficient cells would be affected. In this regard, our preliminary results with the NHEJ-deficient human fibroblasts 180BR showed even more accelerated senescence than normal cells (data not shown). These results are consistent with our DSB marker results. In addition, once cells reached the full senescence stage, the signals for the DSB markers decreased significantly, indicating that the fully senesced cells have different metabolic functions (less need for repair function). A similar finding was recently reported by Bakkenist et al (2004). They indicated that although ATM activation and γ-H2AX foci formation were induced by telomere dysfunction as a stress response in late-passage presenescent cells but not in early-passage cells, they disappeared once cells became fully senescent. They concluded that fully senescent cells do not require these stress responses induced by telomere dysfunction for the maintenance of senescence.
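The one-in-eighteen estimate quoted above can be reproduced to order of magnitude from the stated dose (1 mGy) and LET (70 keV μm−1). The sketch below is only an illustrative back-calculation, not the authors' calculation: the cell cross-sectional area is not given in this excerpt and is assumed here (~600 μm², a plausible projected area for a flat fibroblast), and unit-density tissue is assumed.

```python
import math

# Order-of-magnitude sketch of the "one in eighteen cells hit" estimate.
dose_gy = 1e-3          # total dose: 1 mGy (stated in the paper)
let_kev_um = 70.0       # carbon-ion LET in keV/um (stated in the paper)
cell_area_cm2 = 600e-8  # ASSUMPTION: ~600 um^2 projected area per cell

# Fluence (ions/cm^2) from D[Gy] = 1.602e-9 * LET[keV/um] * fluence[cm^-2] (unit-density tissue)
fluence = dose_gy / (1.602e-9 * let_kev_um)

mean_hits = fluence * cell_area_cm2       # mean ion traversals per cell
frac_hit = 1.0 - math.exp(-mean_hits)     # Poisson probability of at least one traversal

print(f"fluence ~ {fluence:.2e} ions/cm^2")
print(f"mean traversals per cell ~ {mean_hits:.3f}")
print(f"fraction of cells hit ~ 1 in {1.0 / frac_hit:.0f}")
```

With these assumptions the fluence is roughly 9 × 10³ ions cm−2 and about 1 in 19 cells is traversed, consistent with the one-in-eighteen figure cited in the text; a different assumed cell area would shift the exact ratio.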
Moreover, a number of studies have discussed the correlation between telomere shortening (cellular senescence) and the DNA damage response (d'Adda di Fagagna et al, 2003; Takai et al, 2003; Shay and Wright, 2004; von Zglinicki et al, 2005; Herbig and Sedivy, 2006). We showed an increase in the number of DSB-related foci at the presenescence stage. In summary, we showed that a single low dose/low dose rate irradiation (1 mGy 6–8 h−1) with heavy ion particles induced early senescence in normal human fibroblasts, while γ-irradiation under a similar dose/dose rate condition did not cause life shortening. DNA DSB and DSB repair markers were increased at the presenescence stage and were further enhanced in number for cells irradiated once with low doses of carbon ions. However, these DSB markers were significantly reduced once cells became fully senescent, suggesting less necessity for DNA damage/repair function at that stage.
3,206
2007-05-08T00:00:00.000
[ "Physics", "Biology" ]
Diameter Tuning of Single-Walled Carbon Nanotubes by Diffusion Plasma CVD We have realized a diameter tuning of single-walled carbon nanotubes (SWNTs) by adjusting process gas pressures with plasma chemical vapor deposition (CVD). Detailed photoluminescence measurements reveal that the diameter distribution of SWNTs clearly shifts to a large-diameter region with an increase in the pressure during plasma CVD, which is also confirmed by Raman scattering spectroscopy. Based on the systematical investigation, it is found that the main diameter of SWNTs is determined by the pressure during the heating in an atmosphere of hydrogen and the diameter distribution is narrowed by adjusting the pressure during the plasma generation. Our results could contribute to an application of SWNTs to high-performance thin-film transistors, which requires the diameter-controlled semiconductor-rich SWNTs. Introduction Single-walled carbon nanotubes (SWNTs) [1] have attracted intense attention due to their prominent electrical and optical properties [2][3][4][5][6].One of the most promising electrical applications of SWNTs is to fabricate a thin-film field-effect transistor (FET) exploiting their flexible structure and high carrier mobility [7][8][9].Since the mixture of metallic SWNTs in the FET channel increases leak currents between source and drain electrodes, which results in low on/off ratios, the selective growth of semiconducting SWNTs is urgently required.The band gap of SWNTs and contact resistance between nanotubes and electrodes are strongly influenced by the tube diameter [10,11].Thus, it is indispensable to grow diameter-controlled semiconductor-rich SWNTs for realizing high-performance SWNTs-based thin-film FETs.It is known that the SWNTs grown by plasma chemical vapor deposition (CVD) show a tendency to contain semiconducting SWNTs with concentration higher than that by the other growth methods such as arc discharge, laser ablation, and thermal CVD, while their detailed mechanisms are still under investigation [12][13][14][15].Although plasma CVD can be one of the promising approaches to obtain the abovementioned well-diameter-controlled semiconductor-rich SWNTs, the difficulty in controlling the plasma conditions has gotten in the way of realizing the precise diameter tuning of SWNTs. Here, we demonstrate the diameter tuning of SWNTs by changing the gas pressure during the CVD process.Detailed photoluminescence (PL) and Raman scattering spectroscopy analyses reveal that the main diameter of SWNTs becomes large with an increase in the gas pressure.The systematic investigation is also carried out to figure out the mechanism of the diameter variation. SWNTs Growth. 
A home-made diffusion plasma CVD system was utilized for the growth of freestanding SWNTs [16,17].The Fe (thickness (t): 0.5∼1 nm)/Al 2 O 3 (t: 20 nm) multilayer catalyst was formed on an Ag substrate (t: 0.2 mm) by a vacuum evaporation and a sputtering method, respectively.A capacitively coupled plasma was generated by supplying a radio-frequency (RF: 13.56 MHz) power (P RF ) to an upper electrode.A mesh grid was used as an anode to promote spatial diffusion of plasmas.A substrate was placed on a heater which was located underneath the lower anode-electrode.The distance between the lower electrode and the substrate was fixed at 70 mm.The growth of SWNTs by plasma CVD was carried out with the following procedures.First, the system was pumped down to a base pressure of 10 −2 Pa with rotary and diffusion pumps.The substrate was heated up to 600 • C under an Ar flow, and then the Ar gas was immediately switched to a methane and hydrogen mixture gas (3 : 7 mixture ratio).Note that a total pressure during the heating process was set to be as the same as that during the growth unless otherwise specified.When the total pressure reached a desired pressure (20-650 Pa), the P RF of 80 W was supplied to generate plasmas and the SWNT growth was started.The growth time was 20 sec.After the SWNT growth, the methane and hydrogen gases were pumped out and an Ar gas was introduced into the system in order to cool down the substrate. Electron Microscope Analysis. As-grown states of freestanding SWNTs were analyzed with a scanning electron microscope (SEM, Hitachi 4100) and a transmission electron microscope (TEM, Hitachi HF-2000).A thin Cu wire (dia.= 100 μm) covered by the Fe/Al 2 O 3 catalyst was utilized as a substrate, which was able to be directly set in a TEM holder without any conventional TEM grid. Photoluminescence Analysis. Photoluminescence-excitation (PLE) measurements were performed with a JY (Horiba) Fluorolog-3 system.The excitation wavelength was varied from 500 to 828 nm in 4 nm step, and emission signals were accumulated for 20 sec in each excitation step.Excitation and emission slit widths were fixed at 10 nm. Raman Scattering Spectroscopy Analysis. Raman scattering spectra were taken with 488 nm Ar laser and 632.8 nm He-Ne laser excitations.As-growth SWNTs were used for this Raman scattering analysis. Results and Discussion Figures 1(a)-1(c) show typical SEM (a) and TEM (b, c) images of the as-grown SWNTs produced by diffusion plasma CVD.SWNTs are found to be grown in the freestanding form [17].The existence of SWNTs is also confirmed by a Raman scattering spectrum as shown in Figure 1(d). 
In general, electrostatic potentials in plasmas drop sharply at the interface between the plasma and solid materials, forming strong electric fields on the surface of the solid materials. Since the polarizability of SWNTs in the axial direction is extremely high, a strong dipole moment is induced along the tube axis in the presence of electric fields. Due to the energy stability of the dipole moments under the electric fields, SWNTs tend to align with the electric fields. This is a possible explanation for the freestanding growth of SWNTs in the case of plasma CVD [18]. PL spectroscopy is a powerful tool to assign each chirality of semiconducting SWNTs [19,20]. Since excited excitons in semiconducting SWNTs are easily quenched through metallic SWNTs, each SWNT has to be well isolated to observe the optical emission. Thus, PL measurements of SWNTs are usually carried out with SWNTs dispersed in a specific chemical solution. Owing to the unique freestanding as-grown state of SWNTs grown by diffusion plasma CVD, it is possible to obtain the PL signal from the as-grown SWNTs without any chemical dispersion [21]. Figures 2(a)-2(d) give PLE maps of as-grown SWNTs produced at different gas pressures. It is to be noted that all the PLE measurements are carried out immediately after the growth process in order to prevent the freestanding SWNTs from forming bundles, which causes significant PL changes [21]. It is found that peaks in the PLE map at the high growth pressure (Figure 2(a)) tend to appear in the range of long excitation and emission wavelengths. Since each peak corresponds to a specific chirality in the sample and smaller-diameter SWNTs are positioned in the shorter wavelength region, the peak-position shift in the PLE map indicates that the diameter distribution of the produced SWNTs is strongly influenced by the growth pressure. Thus, lower pressure enables the SWNT diameter to become smaller. This diameter dependence on the growth pressure is also reflected in Raman scattering spectra of SWNTs grown at the different growth pressures. Figure 2(e) demonstrates that the peak positions of the radial breathing mode (RBM) clearly shift from higher to lower wavenumbers with an increase in the growth pressure. Here the RBM peak position and the SWNT diameter are known to have a close correlation, ω = 248/d [22], where ω and d are the RBM peak position (cm−1) and diameter (nm), respectively. This result is fairly consistent with the PLE result shown in Figures 2(a)-2(d). The typical pressure range where SWNTs can be grown is from 30 Pa to 650 Pa, which depends on the P RF used for the plasma generation. Although the absolute intensity of the G-band in the Raman scattering spectra decreases in the low or high pressure range, the G-band to D-band ratio is almost the same. This indicates that the quality of SWNTs should be the same in any pressure range, whereas the density of SWNTs depends on the pressure. When we increase the input P RF, it is possible to grow SWNTs even below 30 Pa, which means that the lack of hydrocarbon supply is significant under the lower pressure condition, and hence additional input P RF is required to increase the density of active species used for the growth of SWNTs. Since the optical absorption and emission efficiency of each SWNT depend on its chirality, the PL emission intensity does not directly correspond to the population of each chirality of SWNT. To discuss the population of SWNTs, it is therefore required to inspect the optical absorption and emission efficiency of each chirality of SWNT. Figures
3(a)-3(d) and 3(e)-3(h) are SWNTs population histograms as the functions of diameter and chiral angle of SWNTs produced at the different growth pressures.These data are calculated from the absolute PL intensities shown in Figures 2(a)-2(d) and the optical efficiencies obtained from the theoretical calculation [23].It is worthy of being noted that the detectable diameter range is 0.75-1.2nm in this study due to the limitation of the PL detector used.The larger diameters of SWNTs (1.1-1.2 nm) are confirmed to be dominant in the samples grown at the high pressures, and the main diameter shifts to around 1.0 nm with a decrease in the growth pressure.In contrast, there is no clear difference of chiral angle-among the growth pressures.In all the cases, near-armchair (25-30 deg) SWNTs show the highest population, which is similar to the SWNTs grown by the other methods [24].The mechanism of the near-armchair rich growth is still unclear.In general, the chirality can be determined by the initial cap structure formed on the catalyst particle surface.For the stable cap formation, it is required to satisfy the isolated pentagon rule [25].This cap structure stability might have effects on the higher population of near-armchair SWNTs than the other types of SWNTs. In order to understand the mechanism of diameter shift depending on the process pressure, we carried out the systematic investigation.Since the pressures during the heating and growth are the same in our growth process, the process pressure affects both the heating and growth process.In order to clarify which exercises a critical effect on the diameter tuning of SWNTs, we performed experiments on the SWNTs growth under the following conditions; the pressures during the heating and growth are (1) both 60 Pa, (2) 500 Pa and 60 Pa, and (3) both 500 Pa, respectively.Figures 4(a)-4(c) and 4(d)-4(f) are PLE maps and population-diameter histograms of SWNTs produced under the conditions (1) (a and d), (2) (b and e), and (3) (c and f), respectively.For the simplification, we define the diameter ranges 0.75-1.0nm and 1.0-1.2nm as "small" and "large," respectively.When we compare the conditions (1) and ( 2), it is found that the population of small-diameter SWNTs decreases but the population of large-diameter SWNTs increases.Although we do not have any evidence about the catalyst particle size change after the high-pressure annealing, we believe that the catalyst size becomes large due to their aggregation after the highpressure annealing, which results in the growth of largerdiameter SWNTs because the other growth conditions are completely the same for these two samples.Interestingly, the clear difference is also found between the conditions (2) and (3).Although the population of large-diameter SWNTs is almost the same, the growth of small-diameter SWNTs is obviously suppressed.This can be explained as follows.The density of reactive hydrocarbon radials and ions should increase under the higher growth-pressure conditions.In the high carbon-supply condition, the small catalyst can be deactivated due to the oversupply of hydrocarbons, and hence the population of small-diameter SWNTs decreases.These indicate that the heating pressure is important to control the catalyst size distribution, which directly influences the main diameter of SWNTs, and that the pressure during plasma CVD is also important to narrow the SWNTs diameter distribution. 
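As a practical aside to the Raman analysis above, the relation ω = 248/d cited in the text can be inverted to convert measured RBM peak positions into nominal SWNT diameters. The short sketch below uses that relation directly; the example peak positions are illustrative values, not measurements from this work.

```python
# Convert radial-breathing-mode (RBM) peak positions into SWNT diameters
# using the relation omega = 248 / d quoted above (omega in cm^-1, d in nm).
# The example peak positions below are illustrative only, not data from this study.
def rbm_to_diameter(omega_cm1: float) -> float:
    """Return the nominal SWNT diameter (nm) for an RBM peak position (cm^-1)."""
    return 248.0 / omega_cm1

for omega in (200.0, 230.0, 260.0):
    print(f"RBM at {omega:.0f} cm^-1  ->  d ~ {rbm_to_diameter(omega):.2f} nm")
# Peaks between roughly 200 and 260 cm^-1 map onto the ~0.95-1.25 nm diameter range
# discussed in the text.
```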
Conclusions We have investigated the diameter distributions of plasma CVD-grown SWNTs in their as-grown state by PLE mapping and Raman scattering spectroscopy analyses. The pressure dependence study reveals that the process pressures strongly influence the SWNT diameter distribution in plasma CVD. Furthermore, it is found that the catalyst particle size distribution can determine the main diameter of SWNTs and that their distribution can be narrowed by adjusting the plasma conditions. Our results could contribute to precisely controlling the structure of as-grown SWNTs in the near future.
Figure 1: (a) SEM and (b), (c) TEM images of freestanding individual SWNTs. (d) Raman scattering spectrum (488 nm excitation) of freestanding individual SWNTs. Inset of (d) is an emphasis of the RBM region.
Figure 4: (a-c) PLE maps and (d-f) population-diameter histograms of as-grown freestanding SWNTs. The pressures during heating and growth are (a, d) both 60 Pa, (b, e) 500 Pa and 60 Pa, and (c, f) both 500 Pa, respectively.
2,903.6
2011-01-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Nanovoid formation mechanism in nanotwinned Cu Nanotwinned metals have been intensely investigated due to their unique microstructures and superior properties. This work aims to investigate the nanovoid formation mechanism in sputter-deposited nanotwinned Cu. Three different types of epitaxial or polycrystalline Cu films are fabricated by magnetron sputtering deposition technique. In the epitaxial Cu (111) films deposited on Si (110) substrates, high fractions of nanovoids and nanotwins are formed. The void size and density can be tailored by varying deposition parameters, including argon pressure, deposition rate, and film thickness. Interestingly, nanovoids become absent in the polycrystalline Cu film deposited on Si (111) substrate, but they can be regained in the epitaxial nanotwinned Cu (111) when deposited on Si (111) substrate with an Ag seed layer. The nanovoid formation seems to be closely associated with twin nucleation and film texture. Based on the comparative studies between void-free polycrystalline Cu films and epitaxial nanotwinned Cu films with nanovoids, the underlying mechanisms for the formation of nanovoids are discussed within the framework of island coalescence model. Graphical abstract Introduction As a typical method of physical vapor deposition (PVD), magnetron sputtering technique is attractive to industry for fabricating various metallic films and coatings with unique microstructures and properties [1].Sputtering deposition normally takes place in a vacuum chamber where the atomic flux of source material is transferred from the target (cathode) to the substrate (anode) [2].Since this process is largely nonequilibrium, involving the sputtering of a target by energetic ions (e.g., Ar ions) and the condensation of a vapor into a solid, as-deposited films can be far away from their energetic minimums [3].As a result, the microstructure of sputter-deposited coatings is characterized by a large number of defects, such as grain boundaries and voids [4].For instance, previous studies have revealed that the polycrystalline metallic films synthesized by sputtering at low temperatures consist of fine grains ranging from several to tens of nanometers in size [5].Especially, in sputter-deposited films of face-centered-cubic (FCC) metals with a low stacking fault energy, such as copper (Cu), silver (Ag), and 330 steel [6][7][8], a considerable fraction of nanoscale growth twins can be formed.The twin structures are bounded by a special type of high-angle grain boundary, the coherent twin boundary (CTB) that stores a minimal boundary energy and renders nanotwinned (NT) metals superior physical and mechanical properties [9,10].Compared with nanograined and coarse-grained counterparts, the NT metals exhibit combinations of high electrical conductivity [11,12], good thermal stability [13][14][15], outstanding radiation tolerance [16][17][18][19], as well as high strength and ductility [20,21].Recently, it has been found that the mechanical properties of NT metal films can be enhanced further by introducing nanovoids [22,23].Our previous studies also found that preexisting nanovoids can improve the radiation tolerance of NT metals [24][25][26], as they can act as the effective sinks for radiation-induced defects [27,28].Although the effects of nanovoids on mechanical and radiation properties have been systematically investigated, the underlying mechanism of nanovoid formation in NT films is still unclear.To advance the applications of NT metals in surface and coatings technology 
and to extend our understanding of sputtered nanostructured materials, it is warranted to investigate how nanovoids form and evolve during film growth. This work focuses on the nanovoid evolution in sputter-deposited NT and polycrystalline Cu films. The variations of nanovoid size and density with increasing Ar working pressure, deposition rate, and film thickness were systematically investigated. Experimental results revealed that the nanovoid formation is closely associated with twin nucleation and film texture; hence, this study provides new insights into the design and fabrication of NT metallic films with nanovoids. Experimental High purity (99.995%) Cu thin films were deposited onto HF-etched silicon (Si) wafers at room temperature using a custom-designed direct current magnetron sputtering system. Prior to depositions, the main chamber was pumped to a typical base pressure < 8 × 10−8 torr. Sputtered films can be divided into three types according to the variations of deposition condition, orientation of the Si substrate, and the seed layer on the Si substrates. Type 1 films were directly deposited on Si (110) substrates. They include three subgroups with different Ar working pressures P Ar, deposition rates R Dep., and film thicknesses T Film, as summarized in Table 1. In contrast, Types 2 and 3 refer to the Cu films deposited on Si (111) substrates, respectively, without and with an Ag seed layer (~ 200 nm thick); they have the same Ar working pressure (~ 2.6 mtorr), deposition rate (~ 1 nm/s), and total film thickness (~ 2 μm). The deposition rate in each sample was controlled by changing the Ar working pressure and sputtering power. The X-ray diffraction (XRD) analyses of as-deposited films were performed with a Panalytical Empyrean X'pert PRO MRD X-ray diffractometer with a Cu Kα1 source. The film surface was characterized by an FEI Nova NanoSem 450 scanning electron microscope (SEM) operated at 20 kV. Plan-view and cross-section transmission electron microscope (TEM) specimens were prepared by polishing, dimpling, and low energy Ar ion milling. All the TEM specimens were subsequently examined by an FEI Talos 200X TEM operated at 200 kV. In addition, the Si substrate curvature radii, R_0 and R_1 before and after film deposition, were measured using a profilometer, and the film residual stress σ was calculated based on the Stoney formula [29], σ = M_s T_Si² (1/R_1 − 1/R_0) / (6 T_Film) (1), where T_Si is the Si substrate thickness (~ 500 μm), and M_s is the biaxial modulus of the substrate, ~ 217 GPa for Si (110) [30]. Type 1: nanovoid-nanotwinned Cu (111) directly deposited on Si (110) in different deposition conditions Following our previous studies [24,25], we first investigated the effects of the sputtering condition parameters on the evolution of the texture of Cu films directly deposited on Si (110) substrates. Figure 1 compiles the normal XRD 2θ-scan profiles of the as-deposited films. Apart from Si (220), the X-ray spectra only show two strong peaks, namely Cu (111) and Cu (222), indicating the formation of epitaxial Cu (111) films in all cases, regardless of the variation of Ar pressure from 1.7 to 5.5 mtorr, the increasing deposition rate from 0.2 to 2.9 nm/s, and the increasing film thickness from 1.2 to 6.5 μm. The corresponding XRD φ-scan profiles of Cu {111} in Fig.
2 show strong peaks with a six-fold symmetry.These peaks must arise from two sets of variants with a 60° rotation angle along the out-of-plane direction, that is the Cu <111> crystallographic direction, indicating the formation of a significant fraction of growth twins in sputtered Cu films.The plan-view TEM micrographs of as-deposited Cu films in Fig. 3 show that the films contain polygonal domains, and there are abundant nanovoids randomly distributed along domain boundaries.Moreover, the inset selected area diffraction (SAD) patterns demonstrate the formation of single crystal-like grains oriented along <111> direction, consistent with the XRD profiles presented in Fig. 1.The cross-section TEM micrographs in Fig. 4 show columnar domains consisting of high-density CTBs.These boundaries appear every few nanometers and are normal to the film growth direction.The inset SAD patterns confirm the formation of growth twins, also in good agreement with the φ-scan profiles in Fig. 2. Since the NT samples have abundant nanovoids, hereafter we will refer to them as nanovoid-nanotwinned (NV-NT) Cu.It seems that varying deposition parameters can hardly change the texture of NV-NT Cu film.Other microstructure features indeed vary, including twin spacing, void size and density, as well as domain size.As shown in Fig. 5a-c, the twin spacing t, void size V, and domain size D fluctuates slightly with varying Ar pressure P Ar and deposition rate R Dep. .However, t, V and D all enlarge with increasing film thickness T Flim .This variation could be caused by annealing, as it takes a longer time to deposit a thicker film when heat cannot be transferred instantly from Si wafer.It is worth noting that the void size V is comparable to the twin spacing t.This aspect will be discussed later in more detail.It is also noted that the void density, ρ V changes prominently with deposition conditions.As shown in Fig. 5d-f, ρ V increases with increasing P Ar and R Dep. , but it decreases with increasing T Flim .The variation of void density with deposition conditions were also reported elsewhere [31].The evolutions of film residual stress σ are plotted in Fig. 6.All films have tensile stresses, ranging from 0.4 to 0.9 GPa.Like microstructure features, the residual stress also exhibits a complex dependence on deposition parameters.Detailed statistics on the evolutions of microstructures and residual stresses are summarized in Table 1. Type 2: poly-crystalline Cu deposited on Si (111) substrate without seed layer The texture and microstructure of sputtered Cu film can be substantially modified by changing the orientation of single crystal Si substrate.Figure 7 illustrates a polycrystalline Cu deposited on HF-etched Si (111).The 2θ-scan profile in Fig. 7a shows the existence of several peaks arising from a couple of crystallographic planes of Cu, including (100), (111), (110), and (113).This XRD pattern indicates the formation of polycrystalline grains in as-deposited film.The plan-view TEM micrograph and the inset ring-like SAD pattern presented in Fig. 7b also confirm the formation of polycrystalline Cu.The as-deposited Cu shows a bimodal grain size distribution, with large grain size on the order of 1 μm and small grains less than 100 nm.The enlarged TEM image in Fig. 
7c reveals the nanograins are roughly equiaxed without nanovoids at Type 3: nanovoid-nanotwinned Cu (111) deposited on Si (111) with an Ag seed layer The texture and microstructure of NV-NT Cu film can be regained on Si (111) substrate by adding a seed layer.For instance, Fig. 8 demonstrates an epitaxial Cu (111) deposited on Si (111) with an Ag seed layer (~ 200 nm thick).The XRD 2θ scan in Fig. 8a only shows the strong peaks of {111} planes, suggesting the epitaxial growth of Cu (111) on Ag/Si (111).The plan-view TEM micrograph in Fig. 8b shows the formation of nanovoids at domain boundaries, and the inset SAD pattern confirms the growth of epitaxial Cu (111) film.The cross-section TEM micrograph, together with the inset SAD pattern, in Fig. 8c reveals the formation of high-density growth twins in as-deposited NV-NT Cu (111) film. Discussion The microstructural evolution during film deposition has been extensively studied, and the influence of deposition variables (e.g., deposition rate, Ar pressure) can be found in a number of reviews [3,[32][33][34].It is generally recognized that a wide range of textures and microstructures can be developed in Cu films depending on preparation methods and conditions [35,36].In the current case of sputter-deposited NV-NT Cu (111), however, we found that varying deposition conditions alone can only modify the size, density, and distribution of microstructural features to some extent (see Sect. 3.1).The formation of nanovoids seems to be more strongly dependent on film texture (see Sects.3.2 and 3.3).For this reason, in the following paragraphs, we will focus our attention on the influence of growth twins on the void formation in epitaxial Cu (111) film.First, considering that the evolution of film microstructure is intimately related to the film growth process, we carefully inspected the growth front of NV-NT Cu film to obtain some hint of growth kinetics.As shown in Fig. 9a, the NV-NT Cu film surface exhibits 'island' configurations when observed from the top-down view.In addition, the cross-section (side view) TEM micrograph in Fig. 9b reveals a cycloid surface profile and a high density of the CTBs underneath the surface.The enlarged view in Fig. 9c shows small voids that are vertically aligned at the domain boundary.Such surface characteristics suggest that the boundary voids may form as columnar domains come into contact with each other.To reveal where the voids are nucleated along the columnar boundary, we further performed high-resolution TEM analysis.As shown in Fig. 10, there are three twins (Twins 1-3) separated by CTBs in the matrix.Also, there are two boundary voids, Voids 1 and 2 formed at the ends of Twin 1 and Twin 2, respectively.Between the voids, however, the matrix is almost joined with little spacing.It seems that the voids are likely to nucleate at the regions where the twin and matrix impinge.Based on these observations, we finally explain the growth of epitaxial NV-NT Cu (111) within the framework of island coalescence model [37].According to this model, the film grows by the nucleation and coalescence of discrete islands when they come to impinge on each other [38,39].Also, this model predicts a large elastic strain in the film, which is in qualitative agreement with our experimental results in Fig. 6. Following the island model, we can attribute the void formation to the increased energy barrier against coalescence when growth twins are present inside islands.The underlying mechanisms are illustrated schematically in Fig. 
11.Here, we are first concerned with the microstructure of a polycrystalline Cu and attempt to understand why it is void-free.As shown in Fig. 11a1, when Cu is deposited onto Si (111) substrate, crystallites with random orientations h i k i l i (I = 1, 2, 3, Fig. 9 Topography of film growth front of NV-NT Cu (111).The film is labeled as 'b2_1.3nm/s' in Table 1. a SEM micrograph of the film surface.b Cross-section TEM micrograph of the film growth front with a cycloid surface.c Enlarged view of the boxed area in (b) demonstrating voids separately aligned along a domain boundary 4…) are initially nucleated.During deposition, these individual crystallites grow continuously until they coalescence into a continuous polycrystalline film, as schematically shown in Fig. 11a2.At this point the residual tensile stress might be very high, and some voids might also be formed due to the shadowing effects [40].At the later stages of film growth, however, the stress can be relaxed, and the voids can be removed by incorporating additional adatoms into grain boundaries through fast diffusions along surface-grain boundary network, as shown in Fig. 11a3.Meanwhile, grain boundary migration and secondary grain growth could be caused by local substrate heating.This process results in a bimodal grain size distribution, as observed in Fig. 7 or reported elsewhere [35,41]. In comparison, when Cu is deposited onto Si (110), or onto Si (111) with an Ag seed layer, the individual crystallites are all preferentially <111>-orientated, as shown in Fig. 11b1.The orientation relationship could be determined by the geometrical lattice match rule [42].As {111} is the twin plane for an FCC metal, and the growth twins are favored to nucleate on this plane for Cu due to its low stacking fault energy (45 mJ/m 2 ) [43,44].The crystallites, therefore, are composed of fine growth twins, as shown in Fig. 11b2.Although these twins and their matrix are oriented the same along the film growth direction, they have a large rotation angle (60°) in the film plane (twin plane).Hence, as the NT crystallites snap together on side walls, incoherent twin boundaries (ITBs) are expected to form at the intersections where twins meet the matrix, while at other locations where matrix faces matrix (or twin faces twin), no grain boundaries would form.The maximum gap size between adjoining crystallites can be estimated based on a simple energy criterion that the reduction of surface energy balances the increase of boundary energy and elastic energy [37].For the twin-matrix segments, the maximum gap size Δ T −M is described as [39] where D is the crystallite (domain) size, E is the Young's modulus, is the Poisson's ratio, sv is the surface energy, and ITB is the incoherent twin boundary energy.For the matrix-matrix (or twin-twin) segments, the maximum gap size Δ M−M (or Δ T −T ) is in the form of Combining Eqs. ( 2) and (3), we can conclude that Δ M−M > Δ T −M , indicating that the joining between matrix and matrix (or twin and twin) is energetically favorable over the joining between twin and matrix.As a result, a void tends to nucleate at the twin-matrix intersection region when the regions below and above are ready to join (snap) together, as illustrated in Fig. 
11b3.Upon nucleation, the void will be buried beneath the surface and is cut off from the surface diffusion.However, whether the buried void can remain intact thereafter also depends on the bulk diffusion.Note that, unlike the polycrystalline Cu that is composed of regular grain boundaries, the NT Cu (111) is dominated by CTBs, the special low-energy and high-coherent boundaries along which the diffusion is limited.Therefore, the buried void is able to survive the film growth, insomuch as the net diffusion into it is restricted. The foregoing mechanism of void formation suggests that the NT Cu (111) film residual stress cannot be relaxed timely by incorporating additional adatoms.This is consistent with our experimental measurements.As shown in Fig. 6, all the sputtered NV-NT Cu have a large tensile stress ranging from 0.4 to 0.9 GPa.It has been pointed out that tensile stress might promote void nucleation and growth [45].Consequently, more nanovoids are expected to form along domain boundaries provided that twins are present in the growing domains, as confirmed by our high-resolution TEM micrograph in Fig. 10.This mechanism also suggests that the void size and twin thickness should be comparable, as the formation of nanotwins precedes the nucleation of nanovoids.Indeed, Figs. 5a-c demonstrate that the twin size and void size are similar in most of the epitaxial NT Cu films regardless of the deposition conditions.The systematic studies presented here thus provide a practical method to manufacture NV-NT Cu.The discovery reported in epitaxial Cu may be applicable to other epitaxial NT metals. Conclusion Polycrystalline and epitaxial Cu thin films were synthesized by direct current magnetron sputtering deposition technique.The texture and microstructure of as-deposited films can be tailored by varying deposition conditions, changing orientation of Si substrate, or adding an Ag seed layer.The polycrystalline Cu film exhibits a typical bimodal distribution of grains with almost no nanovoids.In comparison, for the epitaxial Cu (111) films grown on Si (110) or on Si (111) with an Ag seed layer, their microstructures are characterized by high-density nanotwins and nanovoids.The nucleation of nanotwins inside columnar domains can be attributed to the low stacking fault energy of Cu, while the nucleation of (2) nanovoids at domain boundaries is caused by the high energy barrier at the twin-matrix intersections.The formation of these nanovoids in the epitaxial Cu films can be ascribed to the cutoff of surface diffusion and the restriction of bulk diffusion.Consequently, the nanovoid formation mechanism in NT Cu can be rationalized based on the proposed island coalescence model.This study suggests that texture and twin boundaries can play an important role in tailoring the formation of nanovoids in NT metals. Fig. 2 ( Fig. 2 (Color online) The XRD φ-scan profiles of Cu {111} with a six-fold symmetry indicating the formation of high-density growth twins in as-deposited films Fig. 3 Fig. 3 Plan-view TEM micrographs displaying abundant nanovoids that are mostly distributed along domain boundaries.The inset SAD patterns clearly show single crystal-like diffraction along the Cu <111> zone axis Fig. 4 Fig. 4 Cross-section TEM micrographs captured from Cu <110> zone axis, revealing columnar domains and high-density growth twins in epitaxial Cu (111) films.The inset SAD patterns confirm the formation of twin structures Fig. 5 aFig. 6 Fig. 7 Fig. 8 Fig. 
5 a-c Variations of twin spacing T, void size V, and domain size D with increasing Ar pressure P Ar , deposition rate R Dep. , and film thickness T Flim .d-f Void density ρ V plotted as a function of deposition parameters Table 1 Sputtering conditions, microstructures, and residual stress of the Type 1 NT-NV Cu (111) films grown on Si (110) substrates P Ar Ar pressure, R Dep. deposition rate, T Flim film thickness, D domain size, t twin spacing, V void size, ρ V void density, σ film residual stress
4,538.8
2024-03-12T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Applying Corrections in Single-Molecule FRET Single-molecule Förster resonance energy transfer (smFRET) experiments can detect the distance between a donor and an acceptor fluorophore on the 3-10nm scale. In ratiometric smFRET experiments, the FRET efficiency is estimated from the ratio of acceptor and total signal (donor + acceptor). An excitation scheme involving two alternating lasers (ALEX) is often employed to discriminate between singly– and doubly-labeled populations thanks to a second ratiometric parameter, the stoichiometry S. Accurate FRET and S estimations requires applying three well-known correction factors: donor emission leakage into the acceptor channel, acceptor direct excitation by the donor excitation laser and the “gamma factor” (i.e. correction for the imbalance between donor and acceptor signals due to different fluorophore’s quantum yields and photon detection effciencies). Expressions to directly correct both raw FRET and S values have been reported in [1] in the context of freely-diffusing smFRET. Here we extend Lee et al. work providing several expressions for the direct excitation coeffcient and highlighting a clear interpretation in terms of physical parameters and experimental quantities. Moreover, we derive a more complete set of analytic expressions for correcting FRET and S. We aim to provide a clear and concise reference for different definitions of correction coeffcients and correction formulas valid for any smFRET experiment both in immobilized and freely-diffusing form. Förster resonance energy transfer (FRET) is a Coulombic interaction between the dipoles of two uorophores, which results in the resonant and non-radiative transfer of excitation energy from a donor to an acceptor uorophore (and the energy transfer probability decreases with the sixth power of the distance). Donor de-excitation via FRET competes with the donor's intrinsic radiative and non-radiative de-excitation paths. erefore, in the presence of a nearby acceptor, the lifetime of the donor is reduced. e quantum yield, or e ciency, of the FRET process can be computed as [2]: where τ F RET is the D lifetime in presence of FRET and τ D is the intrinsic D lifetime without any acceptor nearby. Computing E following eq. 1 requires measuring the D excited-state lifetime, for example using a TCSPC setup. A simpler method of estimating E consists in measuring only the intensity of donor and acceptor uorescence (F D and F A respectively) and computing the FRET e ciency ratiometrically as: e previous eq. 1 and 2 require that the uorescence lifetimes or intensities be relative to a single specie. In biological samples, where almost inevitably multiple FRET populations are present, single-molecule FRET (smFRET) experiments allows identifying di erent sub-populations and, for each of them, estimating the FRET eciency [3]. e ratiometric approach of computing E is very common in smFRET owing to its modest hardware requirements (compared to TCSPC measurements) and has been extensively applied both to freely-di using and to surface-immobilized experiments. Unfortunately, unlike lifetime-based experiments, ratiometric FRET is a ected by three systematic errors (or biases) intrinsic to the way F A and F D are measured. e rst, a fraction of the donor emission spectrum almost inevitably falls in the acceptor detection band, causing spurious increase in acceptor-channel signal named "donor leakage". 
Additionally, the acceptor signal is contaminated by a fraction of uorescence due to direct excitation of the acceptor uorophore by the donor laser (ideally the acceptor should only be excited by the donor). Finally, the relative (detected) donor and acceptor uorescence intensity is biased because of the di erent uorescence quantum yields and photon detection e ciencies in the two detection channels (requiring the so-called "gamma factor" correction). ese biases are wellknown and expressions for their correction have been derived [1]. Contrary to ensemble measurements, single-molecule experiments can resolve di erent subpopulations and recover mean/peak FRET e ciencies of each single conformational or binding state (at least in cases where there are no conformations that interconvert much faster than di usion times). However, obtaining accurate mean FRET e ciencies also requires applying corrections for the aforementioned biases. is paper extends the ratiometric FRET corrections reported in Lee et al. [1]. We de ne the acceptor direct excitation as a function of di erent observable. For each de nition we derive the direct excitation coe cient as a function of physical parameters and discuss its physical interpretation. We also derive a complete set of formulas for computing E or S as a function of the raw E and S as well as of the aforementioned correction factors. Note that the expression here presented are valid for any ratiometric smFRET or ALEX-smFRET experiment, being it immobilized or freely di using [2,3]. De nitions 2.1 Fluorescence intensities We start by de ning the uorescence intensity signal as a function of the physical parameters. For surface-immobilized measurements the signal can be donor or acceptor counts acquired in a camera frame for a given uorophore. For freely-di using experiments the signal can be the counts detected in the donor and acceptor channel during a "burst" (i.e. a single molecule crossing the excitation volume). Following [1] we de ne: Eq. 3 and 5 are the detected quantities (e.g. counts, or camera intensity) a er background correction in the DexDem and AexAem photon streams respectively. e n a quantity (eq. 4) needs to be estimated correcting the measured counts n * a in the DexAem stream (see eq. 10). e factors I, σ, φ and η are, respectively, the excitation intensity, the absorption cross-section, the uorophore quantum yield and the photon detection e ciency. e label D ex (resp. A ex ) indicates a coe cient computed at the donor-laser excitation wavelength. D det (resp. A det ) indicated the donor detection band. Finally, in the σ coe cient the superscript D or A indicates the uorophore. In addition to these quantities, we need to introduce the correction coe cient γ and β which are de ned as follows: Brie y, γ makes the DexDem and DexAem signals commensurable (i.e. on the same scale) taking into account di erence in dyes quantum yields and photon detection e ciencies. Similarly, the β factor is used to make the total Dex signal commensurable with the AexAem signal by taking into account the di erences in excitation intensities (I Aex vs I Dex ) and in dyes absorption cross-sections (σ A Aex vs σ D Dex ). is expression of the β coe cient has been derived in [1] during the derivation of the ing procedure for γ-factor. It is also useful to introduce the total corrected signal during D-excitation which we can de ne equivalently with one of the following expressions: e choice between eq. 8 and 9 is only ma er of convention. 
Finally, in a real experiment we cannot measure n a directly, instead we acquire a value n * a that is contaminated by donor leakage (Lk) and acceptor direct excitation (Dir). We de ne n * a and the correction terms as follows: Consistently with the n d and n aa de nitions (eq. 3 and 5), the quantity n * a is assumed already background corrected. FRET and Stoichiometry We start de ning the FRET e ciency E and the proximity ratios E P R and E R : E P R = n a n a + n d E = n a n a + γ n d where n d , n a are the donor and acceptor detected counts a er all the corrections (see eq. 3 and 4), while n * a are the acceptor counts with only background correction of eq. 10 (no leakage and direct excitation corrections). Similarly, for the stoichiometric ratio we can have di erent de nitions depending on the degree on corrections that are applied: S P R = n d + n a n d + n a + n aa (17) S R (eq. 16) is the raw stoichiometry without any correction except for background (see de nition of n * a in eq. 10). S P R (eq. 17) is the stoichiometry corrected for leakage and direct excitation (see n a de nition in eq. 4). S γ (eq. 18) is the stoichiometric ratio corrected for leakage, direct excitation and γ (so that FRET populations have stoichiometry centered around a constant value, typically close to 0.5). S γβ (eq. 19) includes a β correction ensuring that FRET populations have stoichiometry centered around 0.5. Since β (eq. 7) is equal to the ratio n aa /(n a + γn d ) (eq. 4 and 9) it follows that: From eq. 20 follows that when β is known it is possible to compute the S γ value around which all FRET populations are distributed. As noted before, for S γβ this value is always 0.5. De nition of direct excitation e term Dir can be equivalently expressed as a fraction of any uorescence intensity components (i.e. counts in the donor or acceptor channel during donor or acceptor excitation). Here we present ve di erent de nitions and their physical interpretation. De nition 1 De ning Dir as a function of n aa we have: e coe cient d AA can be computed from an acceptor-only population in ALEX measurement, because in this case eq. 10 becomes n * a = Dir . In terms of physical parameters, recalling eq. 12 and 5, we can express d AA as: Since computing Dir through d AA requires the spectroscopic quantity n aa (see eq. 5), it cannot be used in case of single-excitation measurements. In this case the de nitions in the next section can be used. Note that d AA is indicated as d in [1]. De nition 2 De ning Dir as a function of the "corrected total signal" as de ned in eq. 8 ((n a + γ n d )) results in: From eq. 23, it follows that: To derive the expression of d T as a function of physical parameters, consider the case of 100% FRET molecule. In this case, knowing that n d = 0 and recalling the expression of n a from eq. 4, we obtain: Noting that, for E < 1 the "corrected total signal" n a + γ n d (e.g. the corrected burst size in freely-di using measurements) will not change when γ is constant. erefore the previous expression is valid for any E. In ALEX measurements is easier to estimate d AA from the data. erefore, expressing d T as a function of d AA allows to easily estimate the former coe cient from the data. From the de nitions of eq. 22, 25 and 7 we obtain: is relation follows from the de nition of β reported in the previous section and originally de ned in [1]. De nition 3 De ning Dir as a function of the "corrected total signal" as de ned in eq. 
9 (n a /γ + n d ) we have: e coe cient d T can be obtained from the d T expression noting that we simply divide the "corrected total signal" by γ: e coe cient d T is indicated as d in [1] (main text p. 2943 and SI). Note that the de nition of d given in eq. (27) of [1] has been derived for a E = 0 population (for which n d + n a /γ = n d ). However, by using the corrected total signal, it is possible to use the same coe cient to express the Dir contribution for any FRET population (and independently from E). De nition 4 De ning Dir as a function of n d we have: e coe cient d D is a function of E as well as the physical parameters. Taking the ratio of the physical de nitions of Dir and n d we obtain: De nition 5 De ning Dir as a function of n a : e coe cient d A is a function of E as well as the physical parameters. Taking the ratio of the physical de nitions of Dir and n a we obtain: Discussion of De nitions 1-5 De nitions 4 and 5 are inconvenient because the coe cient depends on E. De nition 3 does not depend on E but depends on γ, while De nition 2 depends only on the ratio of two absorption cross sections and is therefore the most general form. De nition 1 can only be used in an ALEX measurement but it is easy to t from the S value of the A-only population. So, for non-ALEX measurements, De nition 2 (d T ) gives the simplest and most general coe cient. It can be computed from datasheet values or from d AA estimated from an ALEX measurement using the same dyes pair and D-excitation wavelength (d T = β d AA ). As physical interpretation, de nitions 2 and 3 are similar. In De nition 2, when E = 1, the "corrected total signal" is n a . When E < 1, the "corrected total signal" does not change (at the same excitation intensity, and xed γ) being the sum of acceptor and γ-corrected donor counts. Similar considerations hold for De nition 3 (starting from E = 0). Note that using eq. 25 to estimate Dir requires the knowledge of the corrected total signal of eq. 8 (including A-direct excitation correction). For practical purposes, using a signal only corrected for γ and leakage to compute Dir via eq. 25, is a very good approximation. Alternatively, using eq. 33 (see next section) it is possible to compute corrected E values without any approximation. Correction formulas We can expressing E as a function of E R and the three correction factors as follows: is expression is the same of eq. S9 in [1] when we replace d T γ with d . Similarly we can express S as a function of S R , but in this case the expression will also depend on E R in addition of the correction parameters: A similar formula has been reported in [1] (SI) expressing S as a function of S P R and E P R . Here the expression is simply expanded as a function of E R and S R , resulting in an explicit dependence on lk and d T . e derivation of these formulas only involves using algebraic manipulations of the E and S expression. To avoid trivial errors, these expression have been derived with computer-assisted algebra (CAS). We also provide text-based version of the formula (python syntax) that is tested and easy to copy and paste in most other text-based language. For derivation details see Appendix: Derivation of the formulas. Conclusion We have introduced ve de nitions of acceptor direct excitation as a function of different experimental observable, and discussed that out of the ve, two have the most useful in practice. In particular, eq. 
25 can be used to correct for A-direct excitation even in single-laser measurements provided the coe cient d T can be estimated independently. Furthermore, eq. 33 and 34 allows to apply corrections to E and S values, only knowing the raw E and S and the correction factors. With eq. 33 and 34 it is possible to correct the ed E or S values as a last independent step of the analysis, without the need to modify (i.e. correct) the distributions prior ing. is is important because, from a statistical point of view, the t of the raw E and S peaks can provide more reliable estimates due to simpler modeling (e.g. using a Binomial distribution) which requires less assumptions. For example, methods such as shot-noise [4] and probability distribution analysis [5,6] and Gopich-Szabo likelihood analysis [7,8] can be directly applied to raw FRET distributions. Conversely, applying these methods to the corrected FRET distributions requires unnecessary complex statistical models which include the e ect of each correction factor. In practice, the bene t of a more complex model is dwarfed by the inaccuracies arising from the additional approximations (even implicit) and from reliance on estimated correction parameters in the model itself. Using eq. 33 and 34, instead, allow to decouple the correction of E and S values from the population-level statistical modeling, resulting in more robust models and more accurate estimates.
3,782.8
2016-10-26T00:00:00.000
[ "Physics" ]
Research on U-shaped relationship between short-term debt for long-term use and supply chain enterprise default risk: Evidence from Chinese listed firms This paper empirically investigates the impact mechanism of short-term debt for long-term use and the default risk of supply chain firms with the data of Chinese A-share listed firms from 2007 to 2021. The study shows that there is a significant U-curve relationship between short-term debt for long-term use and supply chain firms’ default risk, and too high or too low a level of short-term loans and long-term investments will worsen firms’ default risk. In addition, firm performance plays an mediating effect in the process of short-term debt for long-term investment affecting the default risk of supply chain firms. Finally, customer effect and firm heterogeneity play a moderating role in the impact of short-term loans and long-term investments on the default risk of supply chain firms, and the U-shaped relationship will be strengthened under the high-intensity customer effect. This study has important theoretical and practical significance for analyzing the impact of default risk contagion in supply chain enterprises. Introduction On April 6, 2022, the People's Bank of China issued the Financial Stability Law of the People's Republic of China (Draft for public Comments), which clearly proposed to strengthen the prevention and defusing of financial and credit risks and steadily improve risk management.Meanwhile, the central government further pointed out that we should adhere to the macro strategic goals of preventing major risks and resolutely not touching the bottom line of systemic financial risks.To achieve the above macro-control goals, the micro-guarantee is to deal with the problem of short-term debt for long-term use of enterprises, and prevent and resolve the default risk of enterprises.At the same time, with the development of supply chain, although the integration and optimization of supply chain enterprises help to cope with the challenges of economic globalization, the increasingly dependent cooperative relationship of supply chain enterprises gradually constitutes an important factor of the vulnerability of supply chain system.The default risk of a certain actor in the supply chain can easily rely on the intermediary carriers such as material flow, information flow and financial flow in the supply chain network.The transmission spreads to the upstream and downstream enterprises of the supply chain, and eventually evolves into the default risk accident of the whole supply chain system, which hinders the development of the supply chain system and the economy.Therefore, it is of great theoretical and practical significance to correctly understand the influencing mechanism of the level of short-term debt for long-term use of enterprises on the default risk of supply chain enterprises, for preventing supply chain risk accidents and achieving macro goals such as financial risk regulation. 
Short-term debt for long-term use refers to the investment and financing phenomenon in which enterprises continuously roll short-term debt to support long-term project investment [1].Some scholars also call it "short-term loan for long investment" or "maturity mismatch of investment and financing".On the one hand, a certain degree of short-term debt for long-term use can provide working capital support for the investment of supply chain enterprises and alleviate the financing constraints of supply chain enterprises [2][3][4].In addition, compared with long-term debt, the advantages of short-term loans, such as low interest rate and fast lending, are not only conducive to the reduction of corporate financing transaction costs [5], can also transmit positive signals to the external supply chain enterprises, further improve financing flexibility, and inhibit the default risk of supply chain enterprises to a certain extent; On the other hand, since the financial crisis and the outbreak of COVID-19, the long-term "financial repression" environment has made it difficult for most of China's supply chain smes to get effective support for long-term investment and financing funds [6], which negatively improves the level of short-term debt for long-term use" of supply chain enterprises [7], and thus amplifies the short-term debt repayment pressure of enterprises.It causes liquidity risk and operational risk of supply chain enterprises [8,9], which ultimately intensifies the default risk of enterprises [10].However, all kinds of risk effects caused by the default risk of supply chain enterprises will be accumulated through the specific contagion mechanism and network structure of supply chain, amplified and even mutated in the form of "domino" effect, and spread to the whole supply chain network, which hinders the development of supply chain system and economy, and brings potential huge losses to the industry, the country and even the world economy.However, the existing studies have not provided in-depth empirical evidence on the economic consequences of short-term loans and long-term investments, and whether they can inhibit or promote the long-term stable development of enterprises, which need to be further explored.This paper investigates the impact of "short-term debt for long-term use" on enterprise risk, in order to further clarify the mechanism through which the mismatch of investment and financing term structure plays a role, which is helpful to supplement the research on the economic consequences of the mismatch of investment and financing term structure, and clarify how "short-term loan and long-term investment" affects enterprise development.Based on this, this paper takes the Shanghai and Shenzhen A-share listed enterprises from 2007 to 2021 as the research sample, through the construction of the "investment short-term loan" sensitivity model, objectively verifies the existence of "short-term debt long-term" in China's supply chain enterprises, and empirically tests the relationship between short-term debt longterm and the default risk behavior of supply chain enterprises, as well as the role of financing constraints.In the current complex and changeable supply chain context, what impact will this investment and financing phenomenon of continuously rolling short-term debt support longterm project investment have on the default risk of supply chain enterprises?What mechanism affects the default risk of supply chain enterprises? 
The main contributions of this paper are as follows: (1) Starting from the phenomenon of long term use of short bonds of micro enterprises, it proves from various angles that there is indeed a U-shaped relationship between short-term debt for long-term use and the default risk of supply chain enterprises, rather than a simple linear relationship, and makes important theoretical contributions to the relevant research on short-term debt for long-term use and the default risk of supply chain enterprises.(2) This study not only verifies the direct influence of the U-shaped relationship between the short-term debt for long-term use and the default risk of supply chain enterprises, but also examines the intermediary conduction effect and regulating mechanism between the two.(3) The research conclusions of this paper enrich the relevant studies on the short-term debt for long-term use and the default risk of supply chain enterprises, and provide new evidence on how to reasonably optimize the short-term debt for longterm use of supply chain enterprises.Under the background of frequent default events of supply chain enterprises and incomplete financial system, the influence of the short-term debt for long-term use on the default risk of supply chain enterprises is analyzed.It plays an important role in arranging financing structure reasonably for supply chain enterprises, strengthening risk control, preventing default risk and maintaining the safety and stability of China's financial system. Theoretical analysis and research hypothesis The matching principle of investment and financing term, as its name implies, requires supply chain enterprises to consider the matching principle of investment term and debt maturity structure when conducting investment and financing activities [11].Generally speaking, supply chain enterprises prefer prudent investment to avoid the problem of high liquidity risk caused by the term structure of aggressive investment and financing.However, in the practice of investment and financing of supply chain enterprises in China, they are troubled by financial constraints such as financing difficulties, high financing costs and financing constraints, and their long-term investment and financing funds cannot be effectively supported [12], resulting in the frequent investment and financing strategies of "short-term debt and longterm use" of supply chain enterprises [13,14].It is not difficult to find that as a special investment and financing strategy, the short-term debt for long-term use not only has risk characteristics, but also has income characteristics. 
On the one hand, according to the theory of agency cost hypothesis, the short-term debt for long-term use within a reasonable range helps to improve the regulatory responsibility of banks and other financial institutions as creditors of supply chain enterprises.Because the term of short-term debt is usually short, supply chain enterprises need to often sign credit contracts with creditors.Before each signing of short-term debt contracts, banks and other financial institutions need to reevaluate the operation risk and liquidity risk of supply chain enterprises to reduce the occurrence of bad debt losses.In addition, creditors of banks and other financial institutions can also supervise the aggressive and high-risk investment activities of corporate managers through the use of short-term loans [15,16], thus reducing the principle-agency cost [17] and further reducing the moral hazard of senior managers and the operational risk of enterprises.On the other hand, according to the financing transaction cost theory, short-term debt for long-term use to a certain extent helps to improve debt contracts and reduce financing transaction costs of supply chain enterprises [18].Kahl (2015) found that most American enterprises prefer to use short-term commercial paper to support long-term capital investment expenditure in the early stage of investment [5].So as to reduce transaction costs and improve investment performance.Moreover, compared with long-term debt financing, short-term loans are easier to obtain approval [19], more conducive to improving the success rate of financing, and the interest rate is relatively lower [5].However, with the continuous accumulation of short-term loans, when the dependence of supply chain enterprises on short-term loans exceeds the optimal level, debt problems such as excessive debt will also occur, which inevitably increases the possibility of default of supply chain enterprises [20,21].More importantly, short-term loans also have renewal risks [22].When the project itself has high investment risk and low development prospect, its future profitability will be difficult to support the debt paying ability of the supply chain enterprise, and the capital turnover risk of the enterprise will gradually increase, thus expanding the loan renewal risk of the enterprise, making the enterprise fall into operating difficulties and financial crisis.Although companies can take measures such as rolling over debt and raising new debt to temporarily ease the crisis, they still face liquidity pressure.In other words, the excessive "short-term debt and long-term use" has imposed certain restrictions on the liquidity of supply chain enterprises' funds, weakening the cash flexibility of supply chain enterprises.When there are large fluctuations and uncertainties in the macro-economy, this will inevitably affect the potential risk of default of supply chain enterprises. 
Based on the above analysis, this paper believes that compared with the previous research results on the linear impact of short-term debt and long-term debt on corporate default risk [5,23], considering the dual characteristics of short-term debt and long-term debt with both income effect and risk effect, although the analysis of the causes of the phenomenon of shortterm loan and long-term investment helps to explain which methods can influence the behavior of enterprises' short-term loan and long-term investment, it can not effectively clarify whether this radical term mismatch has a positive or negative effect on the long-term development of enterprises, and the academic research on this aspect is still blank.Therefore, it is necessary for us to further investigate the economic consequences of short-term loan and longterm investment, and clarify how it affects the long-term development of enterprises, so as to provide some reference for how to promote the stable development of the real economy. In general, to a certain extent, the short-term debt for long-term use can give full play to the flexibility and low cost advantages of short-term debt and reduce the cost of financing transactions.However, driven by the motive of chasing profits, excessive use of short-term borrowing to support long-term project investment will increase the debt repayment pressure of supply chain enterprises, which will lead to business difficulties and liquidity risks of enterprises.It is not difficult to find that how the short-term debt for long-term use affects the default risk of enterprises depends on the level of long-term use of short-term debt.There is an obvious "double-edged sword" effect between the short-term debt for long-term use and the default risk of supply chain enterprises.There is not a simple linear relationship between the two, but there may be a U-shaped curve relationship.Therefore, this paper puts forward the following research hypotheses: H1: There is a significant U-shaped relationship between the short-term debt for long-term use and the default risk of supply chain enterprises. 
Qualitative change is an accumulation process of quantitative change, as is the behavior of short-term debt for long-term use of enterprises.Since the reform and opening up, under the background of the rapid expansion of fixed asset investment scale, China's economy has ushered in a golden age of rapid growth.However, for a long time, compared with the rapidly developing economic environment, China's financial system has been in the position of "financial repression," and the financing needs of enterprises cannot be met, resulting in the behavior of short-term debt for long-term use.To a certain extent, the short-term debt for long-term use is not only conducive to timely alleviating the financing pressure of enterprises, but also conducive to giving full play to the benefits of external supervision and standardizing the behavior of agents, so as to improve the scientific and effective decision-making of enterprise managers, improve the investment efficiency of enterprises, and have a positive impact on the performance of enterprises, so as to achieve the governance purpose of reducing the default risk of enterprises.In view of the fact that debt financing enterprises need to perform debt interest obligations on a fixed and regular basis, which reduces the management's free control over cash flow to a certain extent, short-term debt for long-term use can reduce the agency cost between enterprise shareholders and enterprise managers, improve enterprise investment efficiency, strengthen enterprise performance, and then alleviate the default risk of enterprises. When the short-term debt for long-term use exceeds the optimal level, debt problems such as excessive debt will also occur, and the short-term debt for long-term use will inevitably change qualitatively, which has a negative impact on the possibility of default of supply chain enterprises.Excessive short-term debt for long-term use will increase the liquidity risk of supply chain enterprises, reduce the productivity of enterprises [24] and investment efficiency, have an adverse inhibitory effect on the R&D investment of enterprises [25], damage the operating performance of enterprises [26,27], and affect the credit loans and commercial credit rationing of supply chain enterprises, As a result, enterprises with excessive short-term debt for long-term use can only seek higher cost commercial loans, and ultimately increase the default risk of supply chain enterprises [28]. Based on this, this paper believes that the U-shaped relationship between the short-term debt for long-term use and the default risk of supply chain enterprises further affects the default risk of supply chain enterprises by affecting the performance of supply chain enterprises.Therefore, this paper puts forward the following research hypothesis: H2: enterprise performance plays a mediating role in the process of short-term debt for longterm use affecting the default risk of supply chain enterprises. To sum up, the empirical model of this paper is shown in Fig 1. 
Sample selection and research design In 2007, China implemented the new accounting standards for Business enterprises.The research samples of this paper are China's A-share listed enterprises from 2007 to 2021, and the financial data are from the CSMAR database, excluding the samples of financial listed enterprises.(All relevant data are within the manuscript and its Supporting Information files.)At the same time, in order to control the deviation caused by outliers on the estimation results, continuous variables are subjected to up and down 1% Winsorize processing.Finally, this paper obtains a total of 24,897 annual samples of 3847 listed enterprises from 2007 to 2021.In order to explore the impact of " short-term debt for long-term use" on the default risk of supply chain enterprises, and study whether enterprise performance plays an intermediary role in the impact of short-term debt for long-term use on the default risk of supply chain enterprises, this paper believes that the first priority should be to verify the existence of the phenomenon of " short-term debt for long-term use" in China's capital market.Therefore, Based on the research of McLean and Zhao (2014), we construct our "investment short-term loan" sensitivity model using the "investment cash flow" sensitivity model to test the dependence of supply chain enterprises on short-term debt.The sensitivity model constructed is as follows: Model 1: Secondly, in order to analyze the impact of short-term debt for long-term use on the default risk of supply chain enterprises, we constructs the empirical model (2) of this paper for analysis. Model 2: Finally, in order to test the Mesomeric effect of enterprise performance in the process of long-term use of short-term debt on the default risk of supply chain enterprises, we introduced the U-shaped Mesomeric effect test method, and built the transmission mechanism measurement model (3) and model ( 4) based on the empirical model (2) Model 3: Model 4: The explained variables and explanatory variables of Models (1) to (4) are as follows: Explained variables: investment "INV", default probability "EDF" and enterprise performance "TQ": In Model (1), the explained variable investment "INV" comes from the annual cash flow statement of supply chain enterprises-"cash paid for the purchase and construction of fixed assets, intangible assets and other long-term assets," and eliminates the scale effect through the total assets of the previous year; In Model (2), the explained variable default probability "EDF" is calculated by the KMV model.KMV model is a major reform of the traditional default rate measurement method.This paper uses KMV model as the basis to measure the default probability of supply chain node enterprises, and its advantages are as follows: First of all, compared with the historical data based on the previous book information of enterprises, the KMV model selects the realtime data based on the stock market, which not only makes the model data more forwardlooking, but also can better reflect the current debt default situation of the supply chain node enterprises, so as to better predict the default probability of enterprises.Secondly, KMV model is based on the "structural model" of modern corporate finance and option theory, which is more convincing in predicting the default probability of enterprises to a certain extent.Finally, the KMV model uses the publicly disclosed financial data of node enterprises to estimate the default probability, which makes the model more suitable for 
listed enterprises. The KMV model regards the book value of the debt of the node enterprise as the strike price of a European call option.When the book value of the debt of the node enterprise is higher than the total market value of the enterprise's assets, the enterprise has the risk of default.The asset market value and asset market value volatility of node enterprises can be solved from the following simultaneous equations, and the specific calculation is as follows: In this equations, Market value of enterprise stock"E" = (number of outstanding shares × price of outstanding shares + number of non-outstanding shares × price of non-outstanding shares)."σE" is the annual standard deviation of stock return rate, "V" is the market value of enterprise assets, and the book value of enterprise debt"D" = current liabilities +0.5× long-term liabilities."T" is the debt maturity, "σV" is the market value volatility of firm assets, and "r" is the risk-free interest rate.Calculate the expected default probability "EDF" of the enterprise at time "T": In Formula (20), the distance between the value of enterprise assets and the value of debt is defined as the default distance"DD" Obviously, "N" is the cumulative standard normal distribution function.The larger the calculated "EDF"value is, the higher the default risk of the enterprise.On the contrary, the smaller the E value, the lower the default risk of the enterprise. In Model (3), enterprise performance "TQ" is the explained variable, and Tobin's Q value is selected to measure enterprise performance with reference to most literature practices. Explanatory variables: net cash flow "CFO", long-term credit increment "Long-Debt", short-term credit increment "Short-Debt", short-term debt for long-term use "SFLI": In model (1), the explanatory variable net cash flow "CFO"comes from the annual cash flow statement of supply chain enterprises-"net cash flow from operating activities", and the scale effect is eliminated through the total assets of the previous year; The explanatory variables long-term credit increment "Long-Debt", short-term credit increment"Short-Debt", which represent the new short-term and long-term credit volume of the supply chain enterprise in the current period, are calculated and solved by using the annual balance sheet and annual cash flow statement data of the supply chain enterprise, and the scale effect is eliminated by the total assets of the previous year; The definition of other control variables will be described in detail below (see Table 1 for details). In model ( 2), the explanatory variables of short-term debt for long-term use "SFLI", "SFLI"and"SFLI2"are the linear term and quadratic term of short-term debt for long-term use of supply chain enterprises respectively, which are calculated and solved by using the data of annual balance sheet and annual cash flow statement of supply chain enterprises, and the scale effect is eliminated by the total assets of the previous year; Referring to the method proposed by Zhong Kai et al. 
(2016) and Liu xiaoguang and Liu yuanchun (2019) [23,29], this paper depicts the measurement index of "short-term debt for long-term use", and uses the measurement method of Zhong Kai et al.For reference, calculates the method of "short-term debt for long-term use" to define proxy variables, and takes the measurement method of Liu xiaoguang and Liu yuanchun(2019) as the robustness test index of this paper.The relevant definitions of other control variables and specific variables are shown in Table 1. Descriptive statistics According to the descriptive statistical results in Table 2, first of all, from the statistical results of investment and credit amount, it is not difficult to find that China's supply chain enterprises do have a serious phenomenon of short-term debt for long-term use, and the maximum value of short-term debt for long-term use is 0.456, the minimum value is -5.108, and the standard deviation is 0.303, indicating that there are obvious differences in the short-term debt for longterm use behavior of supply chain enterprises.To some extent, it shows that the long-term credit funds obtained by Chinese listed companies are significantly lower than short-term credit funds; Secondly, the average default risk of supply chain enterprises is about 0.177.Combined with the sorted data, especially in the period of economic fluctuations (such as the subprime mortgage crisis in 2008, the new crown impact in 2020 and other nodes), the default INV Investment Amount Cash paid for the purchase and construction of fixed assets, intangible assets and other long-term assets; Take its logarithm EDF Default Risk The default probability is measured by KMV model TQ Enterprise Performance Tobin Q value CFO Net Cash Flow Net cash flow from operating activities LongDebt Long term Credit Increment Current long-term credit increment = current long-term borrowings+non current liabilities due within one year-previous long-term borrowings; Take its logarithm ShortDebt Short term Credit Increment Current short-term credit increment = cash received from borrowings-current increase in long-term borrowings; Take its logarithm SFLI short-term debt for long-term use Short term debt and long-term use = Cash Expenditure for investment activities such as the purchase and construction of fixed assets-(current increase in long-term borrowings+current increase in equity+net cash flow from operating activities+cash inflow from the sale of fixed assets) Sensitivity analysis of short-term debt and long-term use In order to explore the impact of "short-term debt for long-term use" on the default risk of supply chain enterprises and study the intermediary effect of financing constraints, this paper first verifies the existence of the radical financing phenomenon of "short-term debt for long-term use" in China's capital market.This study refers to the analysis idea of "investment cash flow", and uses the model (1) to conduct a full sample regression on the sensitivity test of "investment short-term loans", which provides the necessary objective confirmation and theoretical basis for this study.Table 3 reports the sensitivity analysis results of short-term debt for long-term use.The preliminary study found that the sensitivity analysis of "investment short-term loan" supports the view of this study, showing a significant positive correlation, which indicates that there is indeed "short-term debt for long-term use" behavior in China's supply chain enterprises.Enterprise investment depends not only on investment 
High-growth companies will not easily adopt the radical financing method of "short-term loans for long-term investment".

Short-term debt for long-term use and default risk of supply chain enterprises

Columns (2) to (4) of Table 4 show the regression results for short-term debt for long-term use and the default risk of supply chain enterprises. According to the regression results in column (2) of Table 4, first, without introducing other control variables there is a significant relationship between short-term debt for long-term use and the default risk of supply chain enterprises: the linear term coefficient (-0.0266) is significantly negative at the 1% level, while the quadratic term coefficient (0.00697) is significantly positive at the 1% level, indicating that the relationship is not a simple linear one but a U-shaped one. Secondly, according to the regression results in column (4) of Table 4, after introducing the relevant control variables the relationship remains significant: the linear term coefficient (-0.0241) is significantly negative at the 1% level and the quadratic term coefficient (0.00544) is significantly positive at the 1% level. Short-term debt for long-term use therefore still has a significant U-shaped relationship with enterprise default risk: on the left side of the U-shape, short-term debt for long-term use is negatively correlated with the default risk of supply chain enterprises; on the right side, it is positively correlated (see Fig 2). This result shows that a level of short-term debt for long-term use that is either too high or too low adversely affects the default risk of supply chain enterprises, so hypothesis H1 of this paper holds. It indicates that short-term loans have a certain governance effect: they can restrain the moral hazard of management participating in high-risk investment projects and help reduce enterprise risk. However, as the degree of short-term debt for long-term use increases, enterprises are more likely to face the test of the project investment payback period and run into difficulties in capital turnover, and have to "borrow new to repay old" at a higher interest cost; the new debt financing is then used more to meet the needs of financial operation than for actual business investment and R&D innovation. The enterprise is then likely to slide into the spiral of "borrowing new to repay old" → "borrowing new to repay interest" → "balance-sheet deterioration", thereby worsening enterprise performance and debt risk. At the same time, it is worth noting that control variables such as cash flow and return on assets are significantly negatively correlated with the default risk of supply chain enterprises, underlining the importance of returns and cash flow for default risk. Especially in recent years, affected by the epidemic and Sino-US trade friction, the defaults of supply chain enterprises have mostly been attributable to shortages of cash flow.
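The U-shape reported in Table 4 amounts to regressing default risk on the linear and quadratic terms of SFLI plus controls. The sketch below shows one way to run this test and locate the turning point; column names and controls are illustrative assumptions.

```python
# Hedged sketch of the U-shape test: EDF on SFLI and SFLI^2 plus controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supply_chain_panel.csv")  # assumed firm-year panel
df["sfli_sq"] = df["sfli"] ** 2

u_model = smf.ols(
    "edf ~ sfli + sfli_sq + cfo + roa + size + C(year) + C(industry)", data=df
).fit()

b1, b2 = u_model.params["sfli"], u_model.params["sfli_sq"]
# A U-shape is supported when b1 < 0, b2 > 0 and the turning point -b1/(2*b2)
# lies inside the observed range of SFLI.
print("linear:", round(b1, 4), "quadratic:", round(b2, 5),
      "turning point:", round(-b1 / (2 * b2), 3))
```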
Short-term debt for long-term use, enterprise performance and default risk of supply chain enterprises

Columns (5) to (7) of Table 5 report the mediating effect of enterprise performance in the relationship between short-term debt for long-term use and the default risk of supply chain enterprises. It is worth noting that, in order to exclude the high correlation between the enterprise capital structure and the other financing-constraint indicators used in this paper, the capital-structure factor is removed from the mediating-effect analysis so as to ensure the reliability of the results. The regression in column (5) of Table 5 reports that the linear term coefficient for short-term debt for long-term use (-0.025) is significantly negative at the 1% level and the quadratic term coefficient (0.00598) is significantly positive at the 1% level, which supports the significance analysis of β1 and β2 in research model (2). The regression results in column (6) of Table 5 show that the linear term coefficient for short-term debt for long-term use (-0.187) is significantly negative at the 1% level and the quadratic term coefficient (0.0945) is significantly positive at the 1% level, which supports the significance analysis of η1 and η2 in research model (3). The regression results in column (7) of Table 5 show that the effect of enterprise performance on the default risk of supply chain enterprises is significantly positive at the 1% level (coefficient 0.00242), indicating that enterprise performance plays a partial mediating role in the effect of short-term debt for long-term use on default risk, which also supports the significance analysis of λ1 and λ2 in research model (4). This suggests that the long-term use of short-term debt may affect overall enterprise performance by increasing the risk-compensation premium demanded by stakeholders or by aggravating the distortion of innovation incentives, and thereby affect the default risk of the enterprise. In sum, the results verify the transmission path "short-term debt for long-term use → enterprise performance → supply chain enterprise default risk", supporting hypothesis H2 of this paper.
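The mediation argument can be checked with a stepwise (Baron-Kenny-style) set of regressions corresponding to models (2)-(4). The sketch below is illustrative; variable names, controls and the mapping to the paper's model numbering are assumptions.

```python
# Hedged sketch of the mediation test: SFLI -> performance (TQ) -> default risk (EDF).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supply_chain_panel.csv")  # assumed firm-year panel
df["sfli_sq"] = df["sfli"] ** 2
controls = " + cfo + roa + size + C(year) + C(industry)"

m_total = smf.ols("edf ~ sfli + sfli_sq" + controls, data=df).fit()        # model (2)
m_perf = smf.ols("tq ~ sfli + sfli_sq" + controls, data=df).fit()          # model (3)
m_direct = smf.ols("edf ~ sfli + sfli_sq + tq" + controls, data=df).fit()  # model (4)

# Partial mediation: SFLI terms significant in the first two models, TQ significant
# in the third, and the SFLI coefficients shrink once TQ is included.
for name, m in [("total", m_total), ("performance", m_perf), ("direct", m_direct)]:
    print(name, m.params.filter(["sfli", "sfli_sq", "tq"]).round(4).to_dict())
```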
Customer effect analysis

As the core element of supply chain enterprises, the customer effect (customer concentration) not only affects the investment, financing and cost stickiness of supply chain enterprises, but also affects their level of short-term debt for long-term use and their default risk. On the one hand, the theory of "value co-creation" holds that a positive customer effect helps strengthen supply chain cooperation and promotes value creation within the supply chain. A highly concentrated customer base contributes the main performance income of supply chain enterprises: the more high-quality, high-consumption customers there are, the more considerable the enterprise's operating income. The resulting profitability establishes a reliable protective barrier for the enterprise's performance against market changes and releases positive signals of high value to the capital market and the banking system, improving the financing ability of supply chain enterprises, optimizing their debt financing structure and maturity, and making them more flexible in controlling the level of short-term debt for long-term use, which in turn affects their default risk to a certain extent. On the other hand, according to the theory of "value plunder", the higher the customer concentration of a supply chain enterprise, the higher its degree of exclusive commitment to and dependence on its main cooperative customers, and the greater its vulnerability. Once the customer relationship breaks down or the customers run into serious business risks, the negative impact of the customer effect can easily propagate through intermediary carriers such as the material flow, information flow and financial flow of the supply chain network and cause huge losses to the performance of supply chain enterprises. As a result, financial institutions such as banks will restrict the financing of such supply chain enterprises more strictly, for example by setting higher loan-interest thresholds and stricter financing-loan terms (Wang Junqiu et al., 2015 [30]). To a large extent, this also affects the level of short-term debt for long-term use and the default risk of supply chain enterprises. Therefore, on the basis of the original research, we take supply chain customer concentration (buy) as the proxy variable for the customer effect and introduce the interaction terms between the customer effect and the linear and quadratic terms of short-term debt for long-term use, to further investigate whether the relationship between short-term debt for long-term use and supply chain enterprise default risk is moderated by the customer effect. Table 6 lists the moderating effect of the customer effect. The test results show that the interaction between the customer effect and the squared term of short-term debt for long-term use has a significant positive impact on the default risk of supply chain enterprises (interaction coefficient 0.0000787, p < 0.05). This shows that the U-shaped relationship between short-term debt for long-term use and the default risk of supply chain enterprises is positively moderated by the customer effect: when customer concentration is at a high level, the U-shaped relationship is strengthened (see Fig 3).
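The moderation test described above introduces interaction terms between customer concentration and the linear and quadratic SFLI terms. A minimal sketch, assuming a column named buy for customer concentration and the same illustrative controls as earlier, is shown below.

```python
# Hedged sketch of the customer-effect moderation test with interaction terms.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supply_chain_panel.csv")  # assumed firm-year panel
df["sfli_sq"] = df["sfli"] ** 2

mod = smf.ols(
    "edf ~ sfli + sfli_sq + buy + buy:sfli + buy:sfli_sq"
    " + cfo + roa + size + C(year) + C(industry)",
    data=df,
).fit()
# A significantly positive buy:sfli_sq coefficient means that higher customer
# concentration steepens (strengthens) the U-shaped SFLI-default risk relationship.
print(mod.params[["buy:sfli", "buy:sfli_sq"]])
print(mod.pvalues[["buy:sfli", "buy:sfli_sq"]])
```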
Analysis of enterprise heterogeneity

The difference in the nature of property rights is the most significant manifestation of enterprise heterogeneity. It is undeniable that state-owned enterprises are far superior to private enterprises in terms of resources, reputation and market position, whether viewed from their background or from their financing ability. Precisely because of these advantages, state-owned enterprises often hold higher bargaining chips in negotiations. It is worth noting that, owing to the relationship between government and enterprises, state-owned enterprises benefit more readily from implicit government guarantees and therefore have stronger competitive advantages in the market. Under macroeconomic fluctuations, because they naturally enjoy special local preferential policies and the endorsement and guarantee of local government departments, state-owned enterprises find it easier to obtain the favor and support of core enterprises; in addition, compared with private enterprises, the governance systems of state-owned enterprises are generally sounder and their property-right structures clearer, which helps compensate for the negative impact of information asymmetry and puts them in a better position when dealing with credit-risk contagion in the supply chain. Therefore, based on the preceding results and on enterprise heterogeneity, this paper performs grouped regression tests. First, in order to determine whether the short-term debt for long-term use of private enterprises stems from "financial discrimination", this paper conducts the "investment-short-term loan" sensitivity analysis separately for state-owned and private enterprises, split by the nature of property rights. Finally, it tests the differential effect of short-term debt for long-term use on the default risk of supply chain enterprises across the two ownership groups.

Robustness test

In order to test the robustness of the U-shaped relationship between short-term debt for long-term use and the default risk of supply chain enterprises, this paper performs further tests in the following respects. (1) Fixed-effect test (FE): to prevent the default risk of supply chain enterprises from being confounded by factors such as corporate cultural background, local economic regulations and policies, and financial policies, a fixed-effect test is conducted on the panel data; after the test, the main conclusions do not change substantially, and the specific empirical results are listed in column (14) of Table 6. (2) Reconstructed-variable test (CV): referring to the practice of Liu Xiaoguang (2019), the long-term use of short-term debt by supply chain enterprises is re-measured as the difference between the ratios of short-term liabilities and short-term assets (linear term SFLI(1) and quadratic term SFLI(1)²); this indicator holds that the smaller the gap between an enterprise's short-term liability ratio and its short-term asset ratio, the lower its level of short-term debt for long-term use. After the test, the main conclusions do not change substantially; see column (15) of Table 6 for the specific results. (3) Lag-period test (LT1): to rule out reverse causality, that is, the possibility that reduced or excessive short-term-debt-for-long-term-use behavior is itself caused by the high default risk of supply chain enterprises, this paper conducts a one-period-lag test on the core variable (short-term debt for long-term use); after the test, the main conclusions do not change substantially, and the specific empirical results are reported in column (16) of Table 8. A sketch of the lag construction follows below.
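A minimal sketch of the lag-period robustness construction, lagging the core variable by one year within each firm; the file and column names are assumptions.

```python
# Hedged sketch of the one-period-lag robustness test (reverse-causality check).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supply_chain_panel.csv").sort_values(["firm", "year"])
df["sfli_lag1"] = df.groupby("firm")["sfli"].shift(1)  # previous year's SFLI
df["sfli_lag1_sq"] = df["sfli_lag1"] ** 2

lt1 = smf.ols(
    "edf ~ sfli_lag1 + sfli_lag1_sq + cfo + roa + size + C(year) + C(industry)",
    data=df.dropna(subset=["sfli_lag1"]),
).fit()
print(lt1.params[["sfli_lag1", "sfli_lag1_sq"]])
```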
Conclusion and enlightenment

Based on a research sample of Shanghai and Shenzhen A-share listed enterprises from 2007 to 2021, this paper objectively verifies the existence of "short-term debt for long-term use" in China's supply chain enterprises by constructing an "investment-short-term loan" sensitivity model, empirically tests the impact of short-term debt for long-term use on the default risk of supply chain enterprises, and further examines the mediating and moderating effects between the two. It is found that there is a significant U-shaped, rather than simply linear, relationship between short-term debt for long-term use and the default risk of supply chain enterprises. To a certain extent, short-term debt for long-term use can exploit the flexibility and low cost of short-term debt, reduce financing transaction costs and thus help reduce default risk; however, the excessive use of short-term loans to support long-term project investment increases the debt-repayment pressure on supply chain enterprises, leading to operating difficulties and liquidity risk, which in turn worsens default risk. In addition, this study examines the mediating mechanism behind the impact of short-term debt for long-term use on the default risk of supply chain enterprises and finds an intermediary transmission path between the two, namely short-term debt for long-term use → enterprise performance → supply chain enterprise default risk. Finally, the customer effect and enterprise heterogeneity moderate the impact of short-term debt for long-term use on the default risk of supply chain enterprises. Since the outbreak of the financial crisis and the COVID-19 epidemic, the long-standing "financial repression" environment has left the long-term investment and financing needs of most small and medium-sized enterprises in China's supply chains without effective support. Although a certain degree of short-term debt for long-term use can provide liquidity support for the investment of supply chain enterprises and alleviate their financing constraints, an excessive level of "short-term debt for long-term use" easily amplifies short-term repayment pressure, triggers liquidity and operating risks, and ultimately aggravates default risk. On this basis, this paper offers several suggestions. First, enterprises should strengthen risk awareness and establish a risk-oriented mindset, remaining alert to potential hazards and strengthening risk monitoring while pursuing profits. Second, enterprises should re-examine the function of short-term debt for long-term use, plan investment holistically, recognize its dual income-effect and risk-effect character, and build corresponding risk-prediction systems based on their own development circumstances. Third, the government and relevant financial institutions should work together to improve the systems and structure of the capital market, strengthen the standardized examination and approval
management of long-term loans to enterprises, and strive to help enterprises broaden financing channels and strengthen the management of working-capital loans.
Finally, although this study is the first to propose the U-shaped relationship between short-term debt for long-term use and corporate default risk, it still has many deficiencies and limitations. Future research can build on this work, for example by further controlling for China's macroeconomic changes (fiscal policy, leverage factors, etc.). More importantly, the existing research does not consider the widespread phenomenon of short-term debt for long-term use among Chinese enterprises and its interaction with macro-policy factors, which may bias the relevant conclusions and even hinder national macro-policy regulation. In sum, given the long-standing imbalance in the investment and financing term structure of China's supply chain enterprises under macroeconomic shocks, regulation should play its role at the macro level while the internal management of enterprises is strengthened at the micro level.
9,410.8
2023-10-23T00:00:00.000
[ "Business", "Economics" ]
Research on dairy products detection based on machine learning algorithm
In this study, an electronic nose model composed of seven kinds of metal oxide semiconductor sensors was developed to distinguish the milk source (the dairy farm to which the milk belongs), to estimate the milk fat and protein content, and thereby to identify the authenticity and evaluate the quality of milk. The developed electronic nose is a low-cost, non-destructive testing device. (1) For the identification of milk sources, this paper combines the electronic nose odor characteristics of milk with its component characteristics to distinguish different milk sources, uses Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) for dimensionality-reduction analysis, and finally uses three machine learning algorithms, Logistic Regression (LR), Support Vector Machine (SVM) and Random Forest (RF), to build a milk source (cattle farm) identification model and to evaluate and compare the classification results. The experimental results show that the classification performance of the SVM-LDA model based on the electronic nose odor characteristics is better than that of the other single-feature models, with a test-set accuracy of 91.5%; the RF-LDA and SVM-LDA models based on the fusion of the two feature types perform best, with a test-set accuracy as high as 96%. (2) Three algorithms, Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost) and Random Forest (RF), are used to construct models that estimate milk fat and protein content from the electronic nose odor data. The results show that the RF model has the best estimation performance (R² = 0.9399 for milk fat; R² = 0.9301 for milk protein), proving that the method proposed in this study can improve the estimation accuracy of milk fat and protein and providing a technical basis for predicting the quality of dairy products.
Introduction
In addition to water, fat, phospholipid, protein, lactose and inorganic salt, milk contains at least 100 kinds of chemical components, whose contents are very complex [1]. The mixture of low-grade fatty acids, acetone, acetaldehyde, carbonic acid and other volatile substances in milk affects the flavor of milk, and sulfide is the main component of fresh milk flavor [2]. Dairy cows on different farms produce milk with different flavors owing to differences in feed and growth environment [3]. Milk protein, milk fat and lactose are the key indicators for evaluating milk quality [4]. The degradation of these components or the interaction of their derivatives affects the flavor compounds of milk [5][6]. Therefore, the establishment of a milk detection model is of great significance for the identification of dairy farms and the improvement of milk quality. The traditional way to trace the origin of milk is through physical tracking such as manual recording. In recent years, many chemical methods have been used for milk origin identification, such as stable isotope ratio analysis, trace element content analysis and nuclear magnetic resonance. The methods of milk quality detection fall mainly into two categories: detection of milk freshness and identification of milk components. Sensory evaluation is the most direct way to judge the freshness of milk: it judges whether the milk has deteriorated by observing physical information such as the color, smell and coagulation state of the milk. However, the accuracy of this method is low.
In order to further improve detection accuracy, physical and chemical analysis methods have been used for milk quality detection. At present, near-infrared spectroscopy [10], microbiological and physicochemical analysis [11], DHI (dairy production performance measurement) laboratory detection [12] and other methods are used in domestic and foreign research to realize the quantitative detection of milk components, and they have achieved good results. However, these methods are expensive, have low detection efficiency, are vulnerable to damage, and cannot achieve real-time detection of dairy products. Therefore, it is very important to find a fast, efficient and non-destructive testing method. As a new gas detection and analysis instrument, the electronic nose is highly portable and simple to operate, which makes non-destructive food testing easier [13][14][15]. An electronic nose is an electronic instrument simulating human olfaction; it is a digital electronic device that can quickly evaluate complex volatile gas mixtures. At present, it has been widely used in milk recognition [16], discrimination [17] and detection [18]. An array composed of multiple sensors makes up for the defects of a single sensor and can detect different components in the gas at the same time. Although the electronic nose has achieved some research results in dairy detection, using electronic nose technology to detect dairy products remains a systematic and complex project; most current research focuses only on a single feature of dairy products and lacks a systematic analysis of the electronic nose [19]. Therefore, this paper proposes a rapid detection method based on electronic nose technology and machine learning for milk source (cattle farm) recognition and for the evaluation of milk fat and protein content. Three classification algorithms, logistic regression (LR), support vector machine (SVM) and random forest (RF), are used to build the milk source recognition model and to evaluate and compare its classification performance. Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost) and Random Forest (RF) are used to build the estimation models of milk fat and protein content, to improve the accuracy of the evaluation, verify the effectiveness of the electronic nose detection method and realize equivalent detection.
Independently developed electronic nose model
The electronic nose is an electronic system that simulates the olfactory organs of animals and uses the response pattern of a sensor array to identify odors. The electronic nose model used in this paper (30 cm in length, 20 cm in width and 20 cm in height) is composed of a gas sensor array, a signal acquisition module, a data acquisition module and a signal processing and pattern recognition module (Fig. 1). Because each sensor in the array has a different sensitivity to the gas to be measured, the responses differ, and the electronic nose system uses the response resistance values to identify the odor [20]. There are seven metal oxide sensors in the electronic nose; Table 1 lists the names of the gas sensors and their corresponding sensitive substances. In the developed electronic nose system, the two main functions of the Arduino software module are: (1) to obtain the response values of the sensors; and (2) to process the data and communicate with the computer.
The microcontroller on the development board is programmed in the Arduino programming language, compiled into binary files, and loaded into the microcontroller. The response values of each sensor in the sensor array to different volatile substances are digitized by a multiplexed analog-to-digital converter (ADC), and the obtained data are stored for subsequent computer analysis and identification, as well as for the extraction of related features. The processed digital signal is transmitted to the host computer through the serial port and finally presented in the serial-port monitor. The flow control unit in the electronic nose is responsible for gas capture and cleaning; the cleaning time was 60 s, the gas capture time was 90 s, and the gas flow rate was 1.1 L/min.
Sample collection and data acquisition
Milk samples from 10 farms were selected. Firstly, the original samples were screened to remove those with a low liquid level or unqualified temperature. Because the DHI instrument occasionally reports zero values, such interference values were removed before the experiment. Finally, 100 groups of milk samples were collected from each dairy farm, giving 1000 groups of samples from the 10 dairy farms for the detection of the fused DHI and electronic nose characteristics. During the experiment, the average of three measurements was taken to reduce error. The composition data of the dairy products were measured by the imported biochemical detection equipment of the DHI laboratory, including milk fat percentage (%), protein percentage (%), lactose percentage (%), total solids percentage (%), somatic cell count (×10⁴/mL) and urea nitrogen (mg/dL). Milk fat contains linolenic acid, arachidonic acid, various fat-soluble vitamins and phospholipids [21]. The contents of fat and protein are important indicators of milk quality, and a low ratio of milk fat to protein indicates that rumen acidosis is very likely in dairy cows [22]. The lactose content in milk is usually between 4.5% and 5%; its content not only affects milk yield but also relates to rumen function. Somatic cells are the general name for the macrophages, lymphocytes and polymorphonuclear neutrophils in milk; the somatic cell count is an indicator of the degree of mastitis infection in dairy cows, representing the health status of the milk and milk quality [23]. Milk urea nitrogen comes from blood urea nitrogen, and a high urea nitrogen content indicates that the cows are more likely to suffer from acidosis [24][25]. The electronic nose detection experiment was carried out at an ambient temperature of 22 °C and 19% humidity. For each milk sample, 20 mL was extracted and stored in a sealed test tube, standing for 10 minutes to ensure that the volatile matter of the milk sample filled the whole test tube. Before the volatile gas capture, the airway and gas chamber of the electronic nose were cleaned with fresh air to eliminate interfering gases. During detection, the electronic nose probe and the pressure-balancing tube were simultaneously inserted into the headspace of the test tube. After the capture process, the gas was fully absorbed by the sensors for 2 minutes, and the voltage response value increased and tended to be stable.
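Before moving to the cleaning phase, note that host-side logging of the 90 s capture described above could look roughly like the sketch below. The serial port name, baud rate and line format are assumptions, not details given in the paper.

```python
# Hedged sketch: log e-nose sensor readings arriving over a serial port for 90 s.
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port and baud rate
samples = []
t0 = time.time()
while time.time() - t0 < 90:  # 90 s gas-capture window
    line = ser.readline().decode(errors="ignore").strip()
    if line:
        # assume one comma-separated row with the 7 sensor responses per line
        samples.append([float(v) for v in line.split(",")])
ser.close()
print(f"collected {len(samples)} readings")
```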
During the cleaning process, with the gradual removal of the volatile gas, the response value decreased and stabilized at a constant value, completing one sample measurement.
Data analysis
In this experiment, milk samples from 10 different sources were selected, the volatile gases were collected from the milk samples by the electronic nose, and the odor data were stored on a computer. The model analysis was conducted on 1000 groups of data after standardization: 800 groups were used as training data and 200 groups as test data. For the cattle farm classification model, principal component analysis (PCA) and linear discriminant analysis (LDA) are used to reduce the dimensionality of the data while retaining the components that carry most of the effective information. For the fitting model between the electronic nose and DHI data, the fitting models are established with three regression algorithms, Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost) and Random Forest (RF), and the fitting quality of the models is evaluated and compared using evaluation indexes.
SVM
SVM is a supervised learning model that can perform pattern recognition, classification and regression analysis. The principle of SVM is to find the separating hyperplane that correctly divides the classes in the training data set with the maximum geometric margin. For nonlinear classification problems, the kernel (mapping) function of SVM can map the samples from the original space to a high-dimensional space so that the samples become linearly separable in the new space. The main kernel functions are the linear kernel, the polynomial kernel and the Gaussian radial basis function, among others.
RF
Random forest is an important ensemble learning method based on bagging. It consists of many decision trees (CART), can be used to solve classification and regression problems, has strong noise resistance, and can avoid overfitting. The RF model is built as follows: first, m sample points are drawn from the training sample set S to form a new training subset; second, a classification or regression decision tree is built for each training subset by randomly selecting K features from all features as candidate split nodes. The output of the model is the category with the highest number of votes (classification) or the average output of the decision trees (regression).
LR
Logistic regression is a supervised machine learning algorithm used to solve classification problems. The principle is to minimize the loss function so that the prediction function becomes more accurate, thereby achieving classification. The penalty term is an important hyperparameter of the LR model, and the solver parameter determines how the loss function is optimized.
Analysis of response curve and radar chart of electronic nose
From the obtained electronic nose data, the continuous 90 s sampling values of one group of samples are randomly selected as the electronic nose response curve (Fig. 1). G/G0 is the ratio of the sensor response resistance during gas acquisition (G) to the sensor response resistance in purified air (G0). As the sampling time accumulates, the G/G0 value of each sensor in the electronic nose changes, and the sensor response values stabilize at about 60 s. Among them, the response values of sensors 2, 3, 1 and 6 vary greatly, while the response values of sensors 4, 5 and 7 change little or not at all.
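A minimal sketch of turning one 90 s acquisition into the 7-dimensional odor feature used later, via the G/G0 normalization and the steady-state value near the end of the capture window; the file layout and baseline handling are assumptions.

```python
# Hedged sketch: compute G/G0 response curves and a steady-state odor feature.
import numpy as np

raw = np.loadtxt("sample_readings.csv", delimiter=",")  # assumed shape (n_timesteps, 7)
baseline = raw[:5].mean(axis=0)            # assumed clean-air response G0 per sensor
g_ratio = raw / baseline                   # G/G0 curves for the 7 sensors
steady_state = g_ratio[-10:].mean(axis=0)  # steady value near the end of capture
print("7-dimensional odor feature:", np.round(steady_state, 3))
```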
The steady-state response values of the electronic nose sensors at 90 s are selected for one group of samples from each cattle farm to make the electronic nose response radar diagram (Fig. 2); each axis represents one sensor. It can be seen that the response values of sensor 1, sensor 2 and sensor 4 differ obviously between different cattle farms. By observing the electronic nose response curves and the radar chart, different cattle farms can be roughly separated, which shows that recognizing cattle farms with the electronic nose model is feasible; to further prove the effectiveness of the method, a more accurate analysis is needed.
Data dimension reduction results
PCA was used to reduce the dimensionality of the DHI fusion features (6 dimensions), the electronic nose fusion features (7 dimensions), and the fused DHI and electronic nose features (13 dimensions). After dimension reduction, the cumulative variance contribution rates of the first three principal components (PCs), which contain sufficient effective information about the samples, were 99.909%, 99.09% and 98.19%, respectively. For the DHI features, the contribution rates of PC1, PC2 and PC3 were 99.9%, 0.008% and 0.001%; for the electronic nose features, they were 88.38%, 7.58% and 3.13%; for the fused features, they were 55.72%, 39.09% and 3.38% (Figures 3-5). In Figure 3, the distribution of the DHI fusion features after dimensionality reduction is scattered, and the farms cannot be distinguished from these features. Compared with Figure 3, the dimension-reduced electronic nose fusion features in Figure 4 are more strongly clustered, but the cattle farms still cannot be clearly distinguished. In Figure 5, the fusion of DHI and electronic nose features also separates poorly; the PCA dimension reduction is therefore not sufficient for a preliminary judgment. LDA dimensionality reduction was therefore applied: the cumulative variance contributions of the first three discriminant components (LDs) are 99.79%, 93.94% and 95.87%, respectively. For the DHI features, the contribution rates of LD1, LD2 and LD3 were 98.84%, 0.69% and 0.26%; for the electronic nose features, they were 84.63%, 8.48% and 3.83%; for the fused features, they were 51.93%, 39.57% and 4.37% (Figures 6-8). Although the original data are preserved more completely after PCA dimensionality reduction, in all three cases after LDA dimensionality reduction the differences in data distribution between farms are very obvious, especially for the fusion of DHI and electronic nose features, which can be distinguished rapidly. This proves that the observed samples are sufficiently representative and that the LDA dimensionality reduction method is suitable for the milk sample data.
Model validation and analysis
For each farm, the 100 sample groups were randomly divided into 80 training groups and 20 test groups, giving a total of 800 training samples and 200 test samples from the 10 cattle farms. Support vector machine (SVM), random forest (RF) and logistic regression (LR) were used to build the cattle farm classification models. The accuracies on the test set are shown in Table 4, where the inputs are the fused features after dimension reduction by PCA and LDA.
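A minimal sketch of the farm-classification pipeline described above (LDA dimensionality reduction followed by an SVM, with a stratified 800/200 split) is shown below. The feature files and the choice of 3 LDA components are assumptions.

```python
# Hedged sketch: LDA dimensionality reduction + SVM for 10-farm classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("fused_features.npy")  # assumed shape (1000, 13): e-nose + DHI features
y = np.load("farm_labels.npy")     # assumed 10 farm labels, 100 samples each

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(n_components=3),
                    SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```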
The classification performance of the PCA-based models is worse than that of the LDA-based models, because PCA does not consider class labels during dimensionality reduction, whereas LDA is a supervised method in which every sample in the data set has a class label [26]. Compared with PCA, the LDA dimensionality reduction method is therefore more suitable for the milk samples, which confirms the observation above. When the input is the fused DHI and electronic nose feature after LDA dimensionality reduction, the classification performance is best, and the accuracy of the support vector machine and random forest models reaches 96%. When the electronic nose fusion feature is used as input, the classification model based on the SVM algorithm performs best, at 91.5%. When the DHI fusion features are used as input, the classification performance is worst. The experimental results show that the electronic nose can achieve accurate classification of cattle farms.
Fitting of electronic nose features and DHI features
In this paper, the odor characteristics of the electronic nose are fitted to the corresponding DHI characteristics, fitting models based on different algorithms are established, and the fitting performance is analyzed. If the milk fat and protein contents of a dairy product are too low, it can be inferred that the rumen function of the dairy cows is poor and acidosis is suspected. Therefore, from the six DHI indicators, the two most closely related to milk quality and dairy cow health, namely milk fat and protein, are selected as the targets of the fitting models. To evaluate the fitting of the electronic nose features to the DHI features, five evaluation indexes are used: mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R²), mean absolute percentage error (MAPE) and symmetric mean absolute percentage error (SMAPE). Using these five indexes, the fitting performance of the three models based on Gradient Boosting Decision Tree (GBDT), Extreme Gradient Boosting (XGBoost) and Random Forest (RF) is evaluated and the best model is selected. Milk fat percentage and protein percentage are taken as the model outputs, and the electronic nose odor data are used as the input to establish the fitting models. The evaluation indexes of the fitting models for milk fat and protein percentage based on the different algorithms are shown in Tables 5 and 6, and the fitting results and error curves are shown in Figures 9 and 10. In Tables 5 and 6, the fitting based on the RF model is the best: its MAE, RMSE, MAPE and SMAPE are smaller than those of the other two algorithm models, and its R² is the largest, close to 1. Therefore, the RF model is shown to be effective for fitting the electronic nose and DHI data. The fitting error curves of milk fat percentage and protein percentage show the fitting performance of the three models intuitively: the RF model has the smallest prediction error and the best fit, followed by the XGBoost and GBDT models. When the data change abruptly, linear regression and support vector machine models cannot make accurate predictions, whereas RF can still give an accurate judgment.
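A minimal sketch of the fat/protein estimation step, a random forest regressor scored with the metrics listed above, is given below. The data files, forest size and the SMAPE formula are assumptions.

```python
# Hedged sketch: random forest regression of milk-fat content from e-nose features,
# scored with MAE, RMSE, R2, MAPE and SMAPE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X = np.load("enose_features.npy")  # assumed (1000, 7) odor features
y = np.load("milk_fat.npy")        # assumed milk-fat percentage per sample

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = np.sqrt(mean_squared_error(y_te, pred))
r2 = r2_score(y_te, pred)
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
smape = np.mean(2 * np.abs(pred - y_te) / (np.abs(pred) + np.abs(y_te))) * 100
print(f"MAE={mae:.3f} RMSE={rmse:.3f} R2={r2:.3f} MAPE={mape:.2f}% SMAPE={smape:.2f}%")
```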
Conclusion
In this study, an electronic nose model based on seven gas sensors, an Arduino development board and a flow unit is proposed to realize the differentiation of different milk farms and the fitting of electronic nose data to DHI data. For the classification of cattle farms, LR, SVM and RF machine learning algorithms are used to build the models, with single DHI data, single electronic nose data and the combination of the two as model inputs; the accuracies are based on the 200 test samples. The results are as follows. (1) In the dimensionality-reduction processing, the LDA method outperforms the PCA method, and the classification accuracy with LDA is also higher than with PCA. (2) When the model input is the combination of DHI and electronic nose data after LDA dimensionality reduction, the classification performance is best, and the accuracies of the SVM and RF models reach 96%; when the electronic nose data after LDA dimensionality reduction are used as the model input, the SVM model has the highest classification accuracy, 91.5%. The results show that the SVM model can effectively distinguish farms using the electronic nose. For the fitting of electronic nose data to DHI data, the GBDT, XGBoost and RF algorithms are used to establish the fitting models, with the electronic nose data as input and milk fat percentage and protein percentage as outputs. The results are as follows. (1) The fitting performance of the RF model is the best: its MAE, RMSE, MAPE and SMAPE are lower than those of the other two algorithms, and its R² values are the highest, 0.9399 and 0.9301 respectively; in particular, it can still make accurate judgments when the variables change abruptly. (2) The experimental results show that the RF fitting model can effectively fit the electronic nose and DHI data, although the fitting of each individual feature still needs to be improved.
4,963
2022-01-01T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science" ]
Service Gaps of a Banking System: A Case Study on BASIC Bank
Financial liberalization has led to intense competitive pressures, and private banks dealing in retail banking are consequently directing their strategies towards raising service quality, which fosters customer satisfaction and loyalty. This article examines the influence of perceived service quality on customer satisfaction. In this paper, we use SERVQUAL as a technique to measure service quality and identify gaps at BASIC Bank. The results of this study show that there are service quality gaps between customers' expectations and their perceptions in six dimensions. In this context, paying attention to the factors affecting customers' expectations and their relationship with service quality is one of the important issues in the evaluation of service quality. For this purpose, the present research was performed on the basis of the gap analysis model, with the purpose of investigating the quality of banking services at the level of BASIC Bank. After determining the desirable services from the standpoint of the customers (investigating customers' expectations) and the factors affecting them, and also examining the current status of service quality (customers' perceptions), it was concluded that BASIC Bank responds to customers' expectations in all of the branches under investigation and that the perceived service quality has consistently exceeded the service quality expected by the customers.
Introduction
The economy of a country depends largely on its banking sector, and Bangladesh is no exception. BASIC Bank is playing a great role in developing the economy of Bangladesh, and client expectations of this bank are increasing day by day. That is why managers at BASIC Bank are under increasing pressure to demonstrate that their services are customer-focused and that continuous performance improvement is being delivered. In spite of resource constraints, the banks must make sure that customer expectations are properly understood and measured and that any gaps from the client's point of view are identified. This information then assists a manager in identifying cost-effective ways of closing service quality gaps and in prioritizing which gaps to focus on, a critical decision given scarce resources. One of the aims of this study is to use the service gap model to ascertain any actual or perceived gaps between customer expectations and perceptions of the service offered. Another aim of this paper is to point out how the management of these banks can close these gaps effectively. Parasuraman et al., Liljander, and Tore agreed that service quality is the difference between expectation and the performance of the service, or the perception of the customer. Liljander and Tore (1992) defined service quality as "the difference between what a service company should offer and what it actually does offer." In some earlier studies, service quality has been referred to as the extent to which a service meets customers' needs or expectations (Lewis & Mitchell, 1990; Dotchin & Oakland, 1994). Service quality is so important that companies have gone to great efforts to evaluate and keep records of service quality levels (Hauser & Clausing, 1988; Phillips et al., 1983; Zeithaml et al., 1990).
Many industries are paying greater attention to service quality and customer satisfaction, for reasons such as increased competition and deregulation (Reichheld & Sasser, 1990; Schlesinger & Heskett, 1991). The academic literature proposes that customer satisfaction is a function of the discrepancy between a consumer's prior expectation and his or her perception regarding the purchase (Churchill & Surprenant, 1982; Oliver, 1977). As reported in the relevant literature, high-quality service helps to generate customer satisfaction, customer loyalty, growth of market share by attracting new customers, and improved productivity and financial performance (Lewis, 1993; Anderson et al., 1994). Parasuraman and his colleagues developed the service quality measurement model known as SERVQUAL. This model is based on a comparison between the customer's expectations of the standard of service he or she will receive and his or her perception of the standard of service that is actually delivered. Furthermore, Parasuraman et al. see their service quality measurement model as one that has been shown to enjoy a high degree of validity and stability. The model attempts to show the salient activities of the service organization that influence the perception of quality; it also shows the interaction between these activities and identifies the linkages between the key activities of the service organization or marketer that are pertinent to the delivery of a satisfactory level of service quality. The links are described as gaps or discrepancies: that is to say, a gap represents a significant hurdle to achieving a satisfactory level of service quality (Ghobadian et al., 1994). Following Zeithaml and Berry (1985, 1998), the upper part of the model (Figure 1) includes phenomena tied to the consumer, while the lower part shows phenomena tied to the supplier of services. The primary thesis of this model is that the service quality shortfall (i.e., Gap 5, the gap between customer service expectations and perceptions) is the result of a series of shortfalls within the service provider's organization (i.e., Gaps 1-4). Thus, improving the quality of service experienced by customers (i.e., closing Gap 5) requires diagnosing the causes of, and correcting, the internal deficiencies (i.e., Gaps 1-4) (Parasuraman et al., 2004). Luk and Layton (2002) extended the traditional model of Parasuraman et al. (1998) by adding two more gaps, which reflect the differences in the understanding of consumer expectations between managers and front-line service providers, and the differences between consumer expectations and service providers' perceptions of those expectations. This model is illustrated in Figure 2 (Conceptual model of service quality gap; source: Parasuraman et al., 1985; Curry, 1999; Luk and Layton, 2002). These seven gaps are described briefly below.
GAP 1: Customers' expectations versus management perceptions. This gap occurs because of the lack of a marketing research orientation, inadequate upward communication and too many layers of management.
GAP 2: Management perceptions versus service specifications. It arises from inadequate commitment to service quality, a perception of unfeasibility, inadequate task standardization and an absence of goal setting.
GAP 3: Service specifications versus service delivery. The third gap arises from role ambiguity and conflict, poor employee-job fit and poor technology-job fit, inappropriate supervisory control systems, lack of perceived control and lack of teamwork.
GAP 4: Service delivery versus external communication. It arises from inadequate horizontal communication and a propensity to over-promise.
GAP 5: The discrepancy between customers' expectations and their perceptions of the service delivered. Because of the influences exerted from the customer side and the shortfalls (gaps) on the part of the service provider, this fifth gap, known as the customer gap, occurs.
GAP 6: The discrepancy between customers' expectations and employees' perceptions. This gap is created by differences in the understanding of customers' expectations by front-line service providers.
GAP 7: The discrepancy between employees' perceptions and management perceptions. The seventh gap arises from differences between managers and service providers in understanding customers' expectations.
According to Brown and Bond (1995), "the gap model is one of the best received and most valuable contributions to the services literature". The model identifies seven key discrepancies, or gaps, relating to managerial perceptions of service quality and the tasks associated with service delivery to customers. The first six gaps (Gap 1, Gap 2, Gap 3, Gap 4, Gap 6, and Gap 7) are identified as functions of the way service is delivered, whereas Gap 5 pertains to the customer and as such is considered to be the true measure of service quality.
Objectives of the Study
The main objective of the study is to measure the service gap of BASIC Bank with the SERVQUAL model. Besides this, there are some other major objectives:
- To find the most important dimension of service quality that affects customer satisfaction.
- To measure the satisfaction level of the current customers of this bank.
- To recommend some guidelines to ensure quality services.
Research Question and Hypothesis Development
The entire report tries to find out whether there is any service gap that customers feel while taking service from BASIC Bank. Thus, this report has focused on the following research question.
RQ: Is there any gap between expected service and perceived service by the account holders of BASIC Bank?
To answer the research question and to test whether the response is logical, the following hypotheses have been developed.
H0: There is a gap between the expected and perceived service.
H1: There is no gap between the expected and perceived service.
Type of Research and Data Sources
'Descriptive research' has been conducted to measure the extent of the problem, and the paired sample t-test has been used to show how the provider gaps affect the customer gap. This study covers two types of data: primary data (survey method, personal observation) and secondary data (web information, journals, published reports on the service quality of BASIC Bank, etc.).
Sampling and Sample Size
A stratified sampling technique has been used to collect data. The entire sampling frame is divided into four strata: students, service holders, business people and professionals, from which 32, 20, 20, and 28 respondents respectively are selected. Data have been collected from respondents through personal interviews and e-mail.
Questionnaire Development
A structured questionnaire has been used to collect data.
The questionnaire has been developed in a way that reveals the respondent's response to each of the independent variables. It is based on a 5-point Likert scale measuring the degree of perception of respondents on each variable: respondents were asked to rate statements based on their perception from 1 to 5, where 1 signifies strong disagreement and 5 indicates strong agreement.
Statistical Analysis and Tools
The study has been conducted on two groups. The first group contains the services expected by the account holders of BASIC Bank, while the second group contains the services perceived by those account holders. As there are two groups in the analysis and the gap between their responses about the service quality of BASIC Bank is to be found, a paired sample t-test for two samples has been conducted. Before testing the hypotheses, the mean values of the responses are examined to validate the findings. The statistical package SPSS 16.0 has been used to analyze the data.
Paired t-Test for Two Samples: At a Glance
A t-test is any statistical hypothesis test in which the test statistic follows a Student's t distribution if the null hypothesis is supported. It can be used to determine whether two sets of data are significantly different from each other, and it is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. A paired sample t-test is used to determine whether there is a significant difference between the average values of the same measurement made under two different conditions. Both measurements are made on each unit in a sample, and the test is based on the paired differences between these two values; the usual null hypothesis is that the difference in the mean values is zero. In statistics, a paired difference test is a type of location test used when comparing two sets of measurements to assess whether their population means differ. A paired difference test uses additional information about the sample that is not present in an ordinary unpaired testing situation, either to increase the statistical power or to reduce the effects of confounders.
Analysis and Findings
Since the first purpose of this study is to measure the provider gaps, it is necessary to track the difference between the responses based on the services customers expect and those they actually perceive. In doing so, the means and standard deviations of the responses have been compared. The average (mean) value of each sub-variable under the independent variables (provider gaps) has been calculated, and then the means of expected service and perceived service are defined. The neutral reference value on the 5-point scale is 3: any mean value over 3 indicates a favorable response, whereas any value below 3 denotes a dissatisfactory response. It has been assumed that the closer the mean values of expected and perceived service, the greater the customer satisfaction and the smaller the service gap.
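For reference, the pair-wise comparison described above can be reproduced with a paired-sample t-test as in the minimal sketch below. The ratings are randomly generated placeholders, not the study's data, and the critical value of about 1.984 corresponds to a two-tailed test at the 0.05 level with 99 degrees of freedom (100 respondents).

```python
# Hedged sketch: paired-sample t-test on expectation vs. perception Likert scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expected = rng.integers(3, 6, size=100).astype(float)   # illustrative expectation ratings
perceived = rng.integers(2, 6, size=100).astype(float)  # illustrative perception ratings

t_stat, p_value = stats.ttest_rel(expected, perceived)
gap = expected.mean() - perceived.mean()
print(f"mean gap = {gap:.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")
# Decision: compare |t| with the critical value (about 1.984 at alpha = 0.05, df = 99)
# or, equivalently, compare p with 0.05, following the decision rule used in the paper.
```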
Pair 1: The pair matching customers' expected service with their perceived service shows that the mean values are very close, 3.82 and 3.75 respectively. Both means fall in the satisfactory zone and are very close, indicating little gap between what customers expected and what they actually perceived. The standard deviations are also favorable, with both statements remaining in the favorable zone at the lower bound of the deviation.
Pair 3: The mean values are not very close, 3.55 and 3.93 respectively. Although both means fall in the satisfactory zone, they are not close, which indicates a gap between what customers expected and what they actually perceived; even at the lower bound of the standard deviation, the gap between expectation and perception persists.
Pair 4: The mean values are very close, 3.79 and 3.50 respectively. Both means fall in the satisfactory zone and are close, indicating little gap between expectation and perception. The standard deviations are also favorable, with both statements remaining in the favorable zone.
Pair 5: The mean values are very close, 3.95 and 3.55 respectively. Both means fall in the satisfactory zone and are close, indicating little gap between expectation and perception. The standard deviations are also favorable, with both statements remaining in the favorable zone.
Pair 6: The mean values are very close, 3.90 and 3.59 respectively. Both means fall in the satisfactory zone and are close, indicating little gap between expectation and perception. The standard deviations are also favorable, with both statements remaining in the favorable zone.
Pair 7: The mean values are very close, 3.91 and 3.54 respectively. Both means fall in the satisfactory zone and are close, indicating little gap between expectation and perception. The standard deviations are also favorable, with both statements remaining in the favorable zone.
Pair 8: The mean values are not very close, 4.09 and 3.37 respectively. Although both means fall in the satisfactory zone, they are not close, which indicates a gap between expectation and perception; even at the lower bound of the standard deviation, the gap persists.
Pair 9: The mean values are very close, 3.85 and 3.35 respectively. Both means fall in the satisfactory zone and are close, indicating little gap between expectation and perception. The standard deviations are also favorable, with both statements remaining in the favorable zone.
Pair 10: The mean values are not very close, 3.91 and 3.34 respectively.
Though both the means fall in the satisfactory zone and are not too close which indicates that there is a gap between what customers' expected and what they actually perceived. At the lower case of standard deviation, still the gap between expectation and perception sustain. Pair11: The pair between customers' expected service are matching with customers' perceived service shows that the mean values are not very close, 4.11 and 3.45, respectively. Though both the means fall in the satisfactory zone and are not too close which indicates that there is a gap between what customers' expected and what they actually perceived. At the lower case of standard deviation, still the gap between expectation and perception sustain. Pair12: The pair between customers' expected service are matching with customers' perceived service shows that the mean values are not very close, 4.13 and 3.48, respectively. Though both the means fall in the satisfactory zone and are not too close which indicates that there is a gap between what customers' expected and what they actually perceived. At the lower case of standard deviation, still the gap between expectation and perception sustain. Vol. 7, No. 5;2015 Pair13: The pair between customers' expected service are matching with customers' perceived service shows that the mean values are not very close, 4.09 and 3.53, respectively. Though both the means fall in the satisfactory zone and are not too close which indicates that there is a gap between what customers' expected and what they actually perceived. At the lower case of standard deviation, still the gap between expectation and perception sustain. Pair14: The pair between customers' expected service are matching with customers' perceived service shows that the mean values are not very close, 4.02 and 3.44, respectively. Though both the means fall in the satisfactory zone and are not too close which indicates that there is a gap between what customers' expected and what they actually perceived. At the lower case of standard deviation, still the gap between expectation and perception sustain. Paired t-test has been used to test the hypotheses that there is a gap between expected and perceived services for all the 14 pairs. For Pair 1: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is not greater than the table value and thus, null hypothesis cannot be rejected. Even, the significance value is 0.239 which is more than 0.05, it ascertains that the null hypothesis is not rejected. Step 5: Thus, it can be said that there is a gap between expected and perceived customer service of BASIC Bank. It denotes that there is still a way of improvement to satisfy customers. For Pair 2: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. 
Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 3: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. Vol. 7, No. 5;2015 As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 4: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.004 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there are very little gap between expected and perceived customer service of BASIC Bank. For Pair 5: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 6: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. 
Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.001 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there are very little gap between expected and perceived customer service of BASIC Bank. For Pair 7: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than www.ccsenet.org/ijms International Journal of Marketing Studies Vol. 7, No. 5;2015 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 8: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 9: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 10: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. 
Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 11: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The Vol. 7, No. 5;2015 degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 12: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 13: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. For Pair 14: Step 1: H0: There is a gap between expected customer service and perceived customer service H1: There is no gap between expected customer service and perceived customer service Step 2: Level of Significance is considered at 0.05 level. The table value for t-distribution under corresponding degrees of freedom is 1.9842. 
Step 3: The decision rule is, if t cal >t cri , H0 will be rejected or if the significance value for a variable is less than 0.05, the null hypothesis will be rejected. Step 4: Here, the calculated value 1.186 is greater than the table value and thus, null hypothesis can be rejected. As the significance value is 0.000 which is less than 0.05, it ascertains that the null hypothesis can be rejected. Step 5: Thus, it can be said that there is no gap or very little gap between expected and perceived customer service of BASIC Bank. Recommendations From this study, we can learn that there exist some gaps in service providing process. To remove this gap, the overall organizational system should be changed and improved. Following criteria are recommended from clients' viewpoint to reduce the service gaps and increase customer satisfaction. Vol. 7, No. 5;2015  The company mission should include a focus on customers.  Training and motivational programmers' should be introduced to improve the employee skill.  The bank should take steps to minimize the operation time.  The bank has to be more conscious of building customer relationship. To do this, the bank can consider "Customer Involvement Program."  The bank management should ensure that all employees posses skill and quality.  Employees have to express solidarity with the customer.  Employees have to be aware of customer expectations.  As this is the largest commercial bank of country, more employees have to be engaged here to serve the customers properly.  Every client should be focused individually.  There is a bureaucratic practice in this bank. This bureaucratic relationship between the management and front line personnel should be removed.  More modern equipments have to be introduced here to provide better service.  To provide better transaction facilities, new ATM booth should be established.  In this bank, the employees are always in a pressure. They may be provided some recreational facilities to offer them mental satisfaction.  E-banking facilities of this bank have to be more improved.  The bank should develop own ATM or CDM system to provide service quickly and comfortably.  The number of ATM booth should be increased  Often the ATM booths of the bank become disabled.  The concerned authority should remove this problem immediately. Conclusion BASIC Bank Limited is a state owned bank. It is committed to provide high quality financial services / products to contribute to the growth of GDP of the country through stimulating trade and commerce, accelerating the pace of industrialization, boosting up export, creating employment opportunity for the educated youth, poverty alleviation, raising standard of living of limited income group and overall sustainable socio-economic development of the country. BASIC Bank has made a strong position through its varies activities. Its number of clients, amount of deposit and investment money increases day by day. This bank already has shown impressive performance in investment. Consumer are more or less satisfied with the present services of the bank now should think to start new services and take different types of marketing strategy to get more customers in this competition market of banking. If they found gap could be removed from the process the level of satisfaction would be even greater. At last it can be said that BASIC Bank Ltd. is growing fast and its contribution in our economy is also considerable. 
I hope that BASIC Bank will widen its services by expanding its branches all over the country.
7,952
2015-10-02T00:00:00.000
[ "Business", "Economics" ]
A Regularization of the Backward Problem for a Nonlinear Parabolic Equation with Time-Dependent Coefficient We study the backward problem with a time-dependent coefficient, which is a severely ill-posed problem. We regularize this problem by combining the quasi-boundary value method and the quasi-reversibility method, and we obtain a sharp error estimate between the exact solution and the regularized solution. A numerical experiment is given in order to illustrate our results. Introduction We consider the inverse-time problem for the nonlinear parabolic equation
$$u_t(x,t) - a(t)\,u_{xx}(x,t) = f(x,t,u(x,t)), \qquad (x,t) \in (0,\pi)\times(0,T), \qquad (1.1)$$
subject to the boundary and final conditions (1.2)-(1.3), where $a(t)$ is the thermal conductivity function of (1.1), for which there exist $p, q > 0$ satisfying
$$0 < p \le a(t) \le q \qquad (1.4)$$
for all $t \in [0,T]$. In other words, from the temperature distribution at a particular time $t = T$ (the final data), we want to retrieve the temperature distribution at any earlier time $t < T$. This problem is called the backward heat problem (BHP), or final-value problem. As is well known, this problem is severely ill-posed in Hadamard's sense; that is, solutions do not always exist, and when they do exist they do not depend continuously on the given data. In practice, the datum $g$ is based on physical measurements. Hence there will be measurement errors, and we actually have a datum function $g^{\delta}$ such that $\|g^{\delta} - g\|_{L^2(0,\pi)} \le \delta$. Thus, from small errors contaminating the physical measurements, the solutions corresponding to the datum function $g^{\delta}$ may have large errors, which makes numerical calculations with perturbed data difficult. To our knowledge, there have been many papers on the linear homogeneous case of the backward problem, but only a few on the nonhomogeneous and nonlinear cases, such as [1-6]; in particular, the nonlinear case with a time-dependent thermal conductivity coefficient is very scarce. Moreover, the thermal conductivity describes a material's ability to conduct heat and is therefore not a constant in some cases. In this paper we extend the result of [7] to the case of a time-dependent thermal conductivity $a(t)$. In future work we will study the BHP for a time- and space-dependent thermal conductivity $a(x,t)$. In [8], the authors used the quasi-reversibility method to regularize a one-dimensional linear nonhomogeneous backward problem. Very recently, in [9], the methods of integral equations and of the Fourier transform were used to solve a one-dimensional problem in an unbounded region. For recent articles considering the nonlinear backward heat problem, we refer the reader to [10]. In [11], the authors used the quasi-boundary value method to regularize the latter problem. However, the error between the regularized solution and the exact solution obtained in [11] is of the form (1.5), and it is easy to see that the convergence of this error estimate is very slow when $t$ is in a neighborhood of zero. For this reason, a separate error estimate at the initial time is also derived. The exact solution of (1.1)-(1.3) satisfies the representation formula (1.8), in which $\lambda_t$ is defined as in (1.11). In this paper, we approximate (1.1)-(1.3) by the regularized problem (1.12), where $\alpha_k(\delta) = \delta k^2$. In [12] we considered the problem (1.1)-(1.3) for the homogeneous case $f \equiv 0$ in $\mathbb{R}$; here we extend the analysis to the nonlinear case $f(x,t,u)$ on the bounded region $(0,\pi)$, which is the main difference in this paper.
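To illustrate the severity of the ill-posedness described above, the sketch below shows, for the simplest constant-coefficient homogeneous heat equation rather than the paper's full nonlinear problem, how high-frequency noise in the final data is amplified when solving backward. All numerical values are illustrative assumptions.

```python
# Illustration of backward-heat ill-posedness for u_t = u_xx on (0, pi):
# a Fourier sine mode sin(kx) at time T maps back to e^{k^2 (T - t)} sin(kx) at time t,
# so measurement noise in mode k is amplified by e^{k^2 T} when recovering t = 0.
import numpy as np

T = 1.0
delta = 1e-3          # assumed noise level in the final data
for k in (1, 2, 5, 10):
    amplification = np.exp(k**2 * T)
    print(f"mode k={k:2d}: noise {delta:.0e} grows to ~{delta * amplification:.3e} at t = 0")
```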
The remainder of this paper is organized as follows. In Section 2, we regularize the ill-posed problem (1.1)-(1.3) and give the error estimate between the regularized solution and the exact solution. In Section 3, a numerical example is given. Regularization and Error Estimates For clarity of notation, we denote the solution of (1.1)-(1.3) by $u(x,t)$ and the solution of the regularized problem (1.12) by $u^{\delta}(x,t)$. Throughout this paper we write $\lambda_1 = \max\{1, \lambda_T\}$ and let $\delta$ be a positive number such that $0 < \delta < \lambda_T$. We first collect a number of inequalities needed for the error estimates. Lemma 2.1. Let $a(t)$ be a function satisfying (1.4), let $\alpha_k(\delta) = \delta k^2$, $B_\delta = \delta\left(\ln(\lambda_T/\delta)\right)^{-1}$, and let $\lambda_t$ be as in (1.11). Then the bounds stated in the lemma hold for $k > 0$ and $0 \le t \le s \le T$. Proof of Lemma 2.1. The proof of Lemma 2.1 can be found in [12]. Theorem 2.2 (existence and uniqueness). Problem (1.12) has a unique weak solution $u^{\delta} \in W$, a subspace of $C([0,T]; L^2(0,\pi))$, satisfying the integral equality (2.2), with the kernel quantities defined in (2.5). We claim that, for every $u, v \in C([0,T]; L^2(0,\pi))$ and $k \ge 1$, the contraction-type inequality (2.6) holds, where $C = \max\{1, T\}$ and $\|\cdot\|_{C([0,T];L^2(0,\pi))}$ is the supremum norm on $C([0,T]; L^2(0,\pi))$. We prove this inequality by induction. For $k = 1$, the estimate follows from Lemma 2.1, so (2.6) holds for $k = 1$. Supposing that (2.6) holds for $k = n$, the same argument shows that it holds for $k = n+1$. Therefore, by the induction principle, (2.6) holds for all $k \ge 1$. We now consider the map $F : C([0,T]; L^2(0,\pi)) \to C([0,T]; L^2(0,\pi))$. Since the contraction factor in (2.6) tends to zero as $k \to \infty$, there exists a positive integer $k_0$ such that $F^{k_0}$ is a contraction. By the uniqueness of the fixed point of $F^{k_0}$, one has $F(u^{\delta}) = u^{\delta}$; that is, the equation $F(u) = u$ has a unique solution $u^{\delta} \in C([0,T]; L^2(0,\pi))$. Step 2. If $u^{\delta} \in W$ satisfies (2.2), then $u^{\delta}$ is the solution of problem (1.12). For $0 \le t \le T$, one can verify directly that $u^{\delta} \in C([0,T]; L^2(0,\pi)) \cap L^2(0,T; H_0^1(0,\pi)) \cap C^1((0,T); H_0^1(0,\pi))$; in fact, $u^{\delta} \in C^{\infty}((0,T]; H_0^1(0,\pi))$. Hence $u^{\delta}$ is the solution of (1.12). Step 3. Setting $w^{\delta}$ to be the difference of two solutions and using the result of Lees and Protter [13], we obtain $w^{\delta}(\cdot,t) = 0$. This completes the proof of Step 3. Finally, by combining the three steps, we complete the proof of Theorem 2.2. Theorem 2.3 (stability of the modified method). Let $f$ be as in Theorem 2.2, $\alpha_k(\delta) = \delta k^2$, and let $g$ and $g^{\delta}$ in $L^2(0,\pi)$ satisfy $\|g^{\delta} - g\| \le \delta$. If $u^{\delta}$ and $v^{\delta}$ defined by (2.2) correspond to the final values $g$ and $g^{\delta}$ in $L^2(0,\pi)$, respectively, then the stability estimate (2.21) holds. Proof of Theorem 2.3. Using the inequality $(a+b)^2 \le 2(a^2 + b^2)$ and Lemma 2.1, we obtain the chain of estimates (2.22)-(2.25); applying Gronwall's inequality then yields (2.26)-(2.27). This completes the proof of Theorem 2.3.
Theorem 2.4 (convergence). Assume the regularity condition (2.28) holds for all $t \in [0,T]$. Letting $\alpha_k(\delta) = \delta k^2$ and $v^{\delta}(\cdot,t)$ be given by (2.2) corresponding to the perturbed data $g^{\delta}$, one has, for every $t \in [0,T]$, the convergence estimate (2.29). Proof of Theorem 2.4. From (1.1), we construct the regularized solutions corresponding to the exact data and to the perturbed data as in (2.31)-(2.33). From (1.8) and (2.32) we obtain (2.35). Applying the inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$ and using Lemma 2.1, we obtain the estimates (2.38)-(2.40) and hence (2.43). From (1.1) we obtain the bound (2.45), which involves the term $\int k^2 e^{k^2 \lambda_s}\left(u_s(x,s) - a(s)\,u_{xx}(x,s)\right)^2 dx\,ds$, and therefore the estimate (2.46). Let $v^{\delta}$ be the solution of (1.12) corresponding to the perturbed data $g^{\delta}$, and let $u^{\delta}$ be the solution of (1.12) corresponding to the exact data $g$. Combining Theorem 2.3 and (2.46) gives the stated estimate. This completes the proof of Theorem 2.4. Numerical Experiment Consider the nonlinear parabolic equation with time-dependent coefficient given in (3.1)-(3.2), whose nonlinear source involves the term $4e^t(\sin x + \cos x)/(2t+1)$ and whose final data are $g(x) = e(\sin x + \cos x)$. The exact solution of the equation is $u(x,t) = e^t(\sin x + \cos x)$ (3.3). Letting $t = 0$ in (3.3), we have $u(x,0) = \sin x + \cos x$ (3.4). Consider the measured data $g^{\delta}$ defined in (3.5); then $\|g^{\delta} - g\| \le \delta$. From (2.31) and (3.5), we obtain the regularized solution for the case $t = 0$ in the form of the iteration (3.6). Figure 1 shows the graphs of the regularized solutions $v^{\delta_i}_n(\cdot,t)$, $i = 1, 2, 3$, with $n = 10$. Figure 2 shows the regularized solutions for $i = 4, 5, 6$ with $n = 10$. Figure 3 shows the exact solution $u(\cdot,t)$ and the regularized solutions for $i = 7, 8$ with $n = 10$. Figure 4 represents the exact solution and the regularized solutions corresponding to $\delta_i$, $i = 1, \ldots, 8$, at the initial time $t = 0$. Notice that, in Figure 4, the curve labeled 0, representing the exact solution, is indistinguishable from the curves representing the regularized solutions corresponding to $\delta_i$ for $i = 6, 7, 8$. Since formula (3.9) cannot be evaluated exactly (it requires $f_k(w)(s)$, while $w$ is not yet known), we follow Theorem 2.2 and use the iteration (3.12), $w_n(\omega, 0)$, for (3.9) at the initial time $t = 0$. The resulting errors are reported in Table 2. The error $\|w^{\delta_i}_5(\cdot,0) - u(\cdot,0)\|_2$ is very large, which confirms that the problem is ill-posed and that a regularization is necessary.
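Since the regularization formula (2.31)/(3.6) is not reproduced above, the sketch below illustrates only the general flavour of a quasi-boundary-value-type filter for the linear, homogeneous backward heat problem with a time-dependent coefficient. The filter form, the choice a(t) = 1 + t, and the final data are assumptions made for illustration and are not the paper's actual scheme, which also includes a quasi-reversibility correction and the nonlinear source f.

```python
# Generic quasi-boundary-value-style filter for a linear backward heat problem
#   u_t = a(t) u_xx on (0, pi): recover u(., 0) from noisy final data g ~ u(., T).
# Illustrative sketch only; the paper's scheme differs in detail.
import numpy as np

np.random.seed(0)
N, T, delta = 256, 1.0, 1e-3
x = np.linspace(0, np.pi, N, endpoint=False)

a = lambda t: 1.0 + t                      # assumed time-dependent coefficient
lam_T = T + T**2 / 2                       # lambda(T) = integral of a(s) over [0, T]

# Synthetic exact solution u(x, t) = exp(-lambda(t)) sin(x) and noisy final data.
u0_exact = np.sin(x)
g = np.exp(-lam_T) * np.sin(x) + delta * np.random.randn(N)

# Sine-series coefficients of the final data on (0, pi).
k = np.arange(1, 40)
gk = np.array([2 / np.pi * np.trapz(g * np.sin(kk * x), x) for kk in k])

# Quasi-boundary-value filter: damp the unstable growth e^{k^2 lambda(T)} with a
# small penalty alpha_k = delta * k^2 (the paper's choice of alpha_k).
alpha_k = delta * k**2
u0_k = gk / (alpha_k + np.exp(-k**2 * lam_T))   # regularized coefficients at t = 0

u0_rec = (u0_k[:, None] * np.sin(np.outer(k, x))).sum(axis=0)
print("L2 error at t = 0:", np.sqrt(np.trapz((u0_rec - u0_exact)**2, x)))
```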
2,604
2012-10-17T00:00:00.000
[ "Mathematics" ]
Turbulent flows over porous lattices: alteration of near-wall turbulence and pore-flow amplitude modulation Abstract Turbulent flows over porous lattices consisting of rectangular cuboid pores are investigated using scale-resolving direct numerical simulations. Beyond a certain threshold which is primarily determined by the wall-normal Darcy permeability, ${{\mathsf{K}}_y}$, near-wall turbulence transitions from its canonical regime, marked by the presence of streak-like structures, to another marked by the presence of Kelvin–Helmholtz-like (K–H-like) spanwise-coherent structures. The threshold agrees well with that previously established in studies where permeable-wall boundary conditions had been used as surrogates for a porous substrate (Gómez-de Segura & García-Mayoral, J. Fluid Mech., vol. 875, 2019, pp. 124–172). In the smooth-wall-like regime, none of the investigated substrates demonstrate any reduction in drag relative to a smooth-wall flow. At the permeable surface, a notable component of the flow is that which adheres to the pore geometry and undergoes modulation by the turbulent scales of motion due to the interaction mechanism described by Abderrahaman-Elena et al. (J. Fluid Mech., vol. 865, 2019, pp. 1042–1071). Its resulting effect can be quantified in terms of an amplitude modulation (AM) using the approach of Mathis et al. (J. Fluid Mech., vol. 628, 2009, pp. 311–337). This pore-coherent flow component persists throughout the porous substrate, highlighting the importance of a given substrate's microstructure in the presence of an overlying turbulent flow. This geometry-related aspect of the flow is not accounted for when continuum-based models for a porous medium or effective representations of them, such as wall boundary conditions, are used. The intensity of the AM effect is enhanced in the K–H-like regime and becomes strengthened with larger permeability. As a result, structured porous materials may be designed to exploit or mitigate these flow features depending upon the intended application. Introduction The interaction between fluid flows and porous media has in recent years witnessed a notable increase in attention. This is owing to their wide-ranging practical applications from heat exchangers (Kuruneru, Vafai, Sauret & Gu 2020) to nuclear reactors (Hassan & Dominguez-Ontiveros 2008) as well as their pronounced presence in geophysical flows (Kazemifar, Blois, Aybar, Perez Calleja, Nerenberg, Sinha, Hardy, Best, Sambrook S. & Christensen 2021). They are also interesting from a purely flow physics perspective, particularly with regards to turbulent flows, as their permeable quality can lead to the alteration of turbulence. In a recent study (2023) it was demonstrated that such porous structures can affect the overlying flow. Further examination of anisotropic permeability was carried out in the DNS study by Gómez-de Segura & García-Mayoral (2019), where the effect of a permeable wall was incorporated using wall boundary conditions derived from the Darcy-Brinkman equation for modeling porous media flows. For highly streamwise-preferential configurations (K_x ≫ K_y), it was shown that the wall-normal permeability is the principal component responsible for the breakdown of the near-wall streak cycle and the emergence of the spanwise coherent structures associated with the Kelvin-Helmholtz-type instability observed by Breugem et al.
(2006). The threshold marking the onset of this K-H-like regime was estimated to be √(K_y⁺) ≈ 0.4, with the flow falling into the fully unstable regime beyond √(K_y⁺) ≈ 0.6. In this regime, drag performance became degraded compared to smooth-wall turbulence. Reductions in drag, however, were demonstrated for cases where the wall-normal permeability was √(K_y⁺) < 0.4. The purpose of this study is to more closely examine the interaction between porous substrates and turbulent flows, both in the vicinity of the permeable surface and inside the porous substrate. This is investigated numerically using DNS which resolves the scales of motion from the bulk flow down to the pore scale. The substrates examined span both the canonical wall turbulence and K-H-like regimes. This allows for determining which aspects of the flow are mainly due to the change in turbulence and whether there are flow features which persist across both flow regimes, particularly in the substrates. The effect of surface geometry is also investigated. The structure of the paper is as follows. In §2, the porous substrate geometries considered and their characteristics along with the numerical methods are introduced. How the overlying bulk turbulence becomes modified due to the presence of substrates and the resulting consequences in terms of drag are discussed in §3, where an assessment of outer-layer similarity and a characterization of the bulk turbulence regime with respect to permeability are also described. In §4, the surface flow is examined to highlight the differences that exist in terms of flow structure over the substrates falling into different turbulence regimes. The flow in the substrates is detailed in §5, and §6 discusses the amplitude modulation of this flow by the overlying turbulence. The results are summarized and discussed in §7. Numerical method The configuration used in this study is an open channel as depicted in figure 1. It is comprised of a bulk flow region of height δ and a porous substrate region of depth h. The direct numerical simulations using this set-up were conducted using the open-source solver PARIS simulator (Aniszewski et al. 2019) with in-house modifications. The code solves the incompressible Navier-Stokes equations (2.1a)-(2.1b), where u = (u, v, w) and p are the velocity vector and pressure, respectively. The bulk Reynolds number is Re_b = U_b δ/ν, with U_b the bulk mean velocity in 0 < y < δ and ν the kinematic viscosity.
The dimensions of the computational domain are (L_x, L_y, L_z) = (6.3δ, 1.3δ, 3.15δ) in the streamwise (x), wall-normal (y) and spanwise (z) directions. The boundaries are periodic in x and z, with symmetry and no-slip boundaries imposed at the upper and lower domain boundaries along y, respectively. The thickness of the porous substrate is h = 0.3δ. The simulations were conducted at a constant mass flow rate, which was adjusted to achieve a nominal friction Reynolds number Re_τ = u_τ δ/ν = 360 for most of the cases. Here, u_τ = √(τ_w/ρ) is the friction velocity at y = 0 (the substrate surface). Note that the total stress at the surface plane, τ_w, will have both viscous and Reynolds shear stress components due to the surface permeability, as expressed in (2.2). Throughout this paper, the superscript '+' indicates inner units, where quantities are normalized using the friction velocity and kinematic viscosity. The numerical grid has a resolution of (N_x, N_y, N_z) = (1620, 324, 810). The grid spacing is uniform along x and z while being non-uniform along y. The grid is stretched in the region above the substrate using a hyperbolic tangent function and is uniform within the substrate. The simulation results are grid independent at the chosen resolution. The grid independence was determined by conducting simulations at coarser and finer resolutions; the results of these simulations are gathered in appendix A. The equations are spatially discretized on a staggered Cartesian grid using central second-order finite differencing. The fractional-step method (Kim & Moin 1985) is used to solve the discretized incompressible Navier-Stokes equations. At each time step an intermediate non-divergence-free velocity field is first calculated. The Poisson equation obtained by imposing the incompressibility constraint is then solved in Fourier space using an FFT-based solver (Costa 2018) to obtain the pressure correction. The pressure correction is then used to project the velocity field onto a divergence-free vector space. The time integration uses a three-substep Runge-Kutta method where both the advective and diffusive terms are treated explicitly. The immersed boundary method of Breugem & Boersma (2005) is used to numerically realize the porous substrates. Unlike direct-forcing or penalization-based methods, it involves modifying the discretized advective and diffusive flux terms of the Navier-Stokes equations such that the no-slip and no-penetration conditions become exactly imposed for the regions of the numerical domain which are defined as solids. This makes it a more accurate method for realizing geometries which are Cartesian conforming, as demonstrated by Paravento, Pourquie & Boersma (2008). This IBM along with PARIS have also been used in the DNS study of turbulent flows over liquid-infused surfaces by Sundin, Zaleski & Bagheri (2021). In addition to the simulations including porous substrates, an open-channel flow over a smooth wall at Re_τ = 360 was also simulated to serve as a baseline for comparisons. Quantities normalized using the smooth-wall friction velocity and kinematic viscosity are indicated by a dedicated smooth-wall subscript.
Porous geometries The porous geometries investigated are lattices where the pores are repeating rectangular cuboids (figure 2). The solid skeleton is made of rods with a rectangular cross-section. The streamwise, wall-normal and spanwise spacings (pitch lengths) determine the cross-sections of the pores and their resulting void volume. The components of the resulting anisotropic permeability tensor, K_ij, were calculated by conducting Stokes flow simulations of representative volumes of each substrate with the same solver that was used for the turbulent flow simulations. The dimensions of the REVs were (0.3δ, 0.3δ, 0.3δ) and the grid resolution was similar to that of the turbulent simulations in terms of spacing. The components of the permeability tensor were then obtained by using Darcy's law. As similarly defined by Breugem et al. (2006), ⟨u⟩ and ∇⟨p⟩ are the superficially volume-averaged velocity and the intrinsically (fluid-phase) volume-averaged pressure, respectively. Intrinsic averages are linearly related to their superficial counterparts through the porosity, which represents the void fraction of a substrate's volume. Similar to the porous structures studied by Kuwata & Suga (2017), the permeability tensor becomes diagonal for the geometries under consideration here due to the symmetry along the x, y and z directions, giving the three permeability components K_x, K_y and K_z. The lengths √K_x, √K_y and √K_z then characterize each porous geometry. This choice of characterization is made to remain consistent with that of prior experimental and numerical studies (Manes et al. 2011; Kuwata & Suga 2017; Gómez-de Segura & García-Mayoral 2019). Additionally, since the Darcy permeabilities are defined in the Stokes regime, they depend only on the geometry and are therefore suitable parameters for encapsulating the geometrical features of different porous media. The main cases (LP1-LP3 and HP1-HP3, together with a transitional case) span a range of wall-normal permeabilities, K_y, to facilitate investigating how near-wall turbulence becomes altered due to a progressively weakening wall impedance. The simulation parameters along with the characteristics of their porous substrates are gathered in table 1. Representative elements of the two families of porous geometries examined are shown in figure 3. The substrates will first be assessed with regard to certain aspects of permeable-wall turbulence which have been reported in the literature. This includes the transition of turbulence from the canonical near-wall regime to the K-H-like regime (Gómez-de Segura & García-Mayoral 2019) and how streamwise-preferential anisotropy affects the drag (Rosti et al. 2018; Gómez-de Segura & García-Mayoral 2019). An important difference exists between pore-scale resolving simulations and other simulation approaches such as volume-averaged Navier-Stokes (VANS) simulations (e.g. Breugem et al. 2006) and permeable-wall boundary conditions (Jiménez et al. 2001; Gómez-de Segura & García-Mayoral 2019). In the latter approaches, the permeability and porosity are predefined numerical parameters that are independent of one another. For the cases considered in this work, the substrate's geometry determines both the permeability and the porosity. This imposes constraints on the permeabilities and the degree of anisotropy that can be obtained; for example, a change in one pitch length changes the pore cross-sections in the other directions, and with them the corresponding permeability components. The Reynolds numbers of DNS studies are also generally low relative to experiments. As such, achieving effective permeabilities which are large becomes challenging.
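As a concrete illustration of how the permeability components can be extracted from representative-volume Stokes simulations via Darcy's law, the sketch below post-processes an assumed volume-averaged velocity and mean pressure gradient; the viscosity, the pressure gradients and the averaged velocities are placeholders, not values from the paper.

```python
# Sketch: extracting diagonal Darcy permeabilities K_x, K_y, K_z from REV Stokes runs.
# One common form of Darcy's law (superficial average): <u_i> = -(K_i / mu) d<p>/dx_i,
# so K_i = -mu * <u_i> / (d<p>/dx_i). All inputs below are illustrative placeholders.
import numpy as np

mu = 1.0e-3                                   # dynamic viscosity used in the Stokes REV runs
dpdx = {"x": -1.0, "y": -1.0, "z": -1.0}      # imposed mean pressure gradient, one run per direction

# Superficially volume-averaged velocities obtained from the three REV runs (placeholders).
u_avg = {"x": 2.1e-4, "y": 0.9e-4, "z": 1.4e-4}

K = {d: -mu * u_avg[d] / dpdx[d] for d in ("x", "y", "z")}
for d, val in K.items():
    print(f"K_{d} = {val:.3e}   (sqrt(K_{d}) = {np.sqrt(val):.3e})")
```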
Surface region flow The changes in the overlying turbulent flow are first assessed qualitatively by examining the flow field above the permeable surface at y⁺ ≈ 5. Figure 4 shows instantaneous snapshots of the velocity and pressure fluctuations for HP1 and LP1. These two cases have the highest √(K_y⁺) of the HP and LP groups, respectively. This allows for a better assessment of the role wall impedance plays in causing changes in turbulence near the surface. Note that at the surface, the relevant permeability component is K_y, as the flow must first be able to penetrate into the substrate before becoming redirected into the horizontal directions, for which K_x and K_z are important. Beginning with observations for the streamwise velocity component, LP1 (figure 4b) exhibits regions of elongated positive and negative fluctuations which are similar to the streaky pattern observed over a smooth wall (figure 4c). This streamwise coherency is diminished in HP1 (figure 4a), where the aforementioned regions are instead more clump-like and also exhibit a degree of regularity along the spanwise dimension of the domain. This is indicative of the near-wall turbulence becoming altered. For the wall-normal velocity component, the similarity between LP1 (figure 4e) and the smooth wall (figure 4f) is reflective of the similarity seen in their streamwise velocity fields. LP1 has a greater degree of intensity but remains structurally similar to its smooth-wall counterpart. HP1 (figure 4d), however, shows not only a noticeably stronger intensity in wall-normal velocity fluctuations but also structural differences, with there seemingly being an emergent spanwise coherency. The differences in the pressure fields are consistent with the observations made for the velocity fields. HP1 (figure 4g) demonstrates a spanwise patch along x⁺ ≈ 1200 which is coincident with the spanwise coherent regions seen at the same position in its u and v fields. The pressure fluctuations of HP1 are also more intense compared to LP1 (figure 4h) and the smooth-wall case (figure 4i). Another distinctive quality of HP1 is the visible signature of the permeable surface in its flow field, particularly when examining the wall-normal velocity (figure 4d). This indicates that the surface granularity becomes perceived by the turbulent flow, such that it leaves a visible footprint in the flow field. This is similar to what has been observed in flows over roughness (Abderrahaman-Elena et al. 2019) and canopies (Sharma & García-Mayoral 2020), albeit at lower heights, and is attributed to the flow induced by the surface elements. That this footprint remains visible at y⁺ ≈ 5 suggests that the surface-induced flow above HP1 is of notable strength. Changes in mean flow, velocity fluctuations and Reynolds shear stress Surfaces which depart from a hydrodynamically smooth behavior change the overall level of momentum carried by the bulk flow, i.e., the amount of drag generated at the surface changes. As explained by Spalart & McLean (2011) and Chung et al.
(2021), a suitable metric for quantifying this is the shift in the logarithmic region of the mean velocity profile, ΔU⁺, which was introduced by Hama (1954) and Clauser (1954) and is more commonly known as the roughness function. For the substrates considered, this is examined in figure 5a, where the mean velocity profiles are shown. Table 2 lists the corresponding ΔU⁺ for each configuration. All of the porous substrates examined increase drag compared to the baseline smooth-wall case, although the lower-permeability cases do not impose a significant drag penalty. The structural differences observed in the flow over the different substrates are also reflected here, such that they become distinguishable into two overall groups. The substrates with lower wall-normal permeability (LP1, LP2, LP3 and the transitional case) do not deviate greatly from the smooth wall (−ΔU⁺ ⪅ 1), and some of these cases are almost indistinguishable from the smooth-wall case. The substrates with higher wall-normal permeability (HP1, HP2, HP2′, HP3, HP3′), on the other hand, result in ΔU⁺ values which are notable (−ΔU⁺ ⪆ 2). The same distinction can be made for the Reynolds shear stress (figure 5b), where the lower-permeability substrates result in slightly greater levels of turbulent activity close to the permeable surface whereas this activity is more pronounced for the higher-permeability substrates, with a gap emerging between the former and the latter substrates in terms of their surface-level −u′v′⁺ activity. Following the approach of MacDonald et al. (2016) and García-Mayoral et al. (2019), to assess the contributing factors to ΔU⁺, the mean momentum equation for the bulk flow region above the substrate is considered (3.1). Scaling (3.1) in inner units gives (3.2). Integrating (3.2) between y⁺ = 0 (the substrate surface) and a position within the logarithmic region results in (3.3). For a smooth-wall flow, there will be no mean velocity at y⁺ = 0 (no-slip). Taking the difference of (3.3) between a porous case and the smooth-wall case allows for quantifying the contributions of the interfacial slip velocity and of the changes in Reynolds stress to ΔU⁺, as expressed in (3.4). A further term quantifies the contributions from additional turbulence scales and emerges due to the small differences in friction Reynolds number between the porous and smooth-wall cases; however, it remained negligible for the simulations conducted here and is therefore omitted. In figure 6, it can be observed that the slip-velocity contribution to ΔU⁺ does not vary significantly across the different cases. The Reynolds shear stress contribution is drag degrading and hence always negative. Its magnitude decreases monotonically from HP1, which has the overall highest permeability, to the lower-permeability cases, which are similar in terms of ΔU⁺. The Reynolds-stress term is the dominant component across all cases and grows larger for substrates with greater wall-normal permeabilities; it is notably larger in magnitude than the slip contribution for the HP cases. These results are consistent with what was observed for the profiles of the mean velocity (figure 5a) and Reynolds shear stress (figure 5b). A jump in this contribution is also seen here when going from the LP to the HP cases, suggestive of additional contributions resulting from the structural changes in turbulence that were observed in figure 4.
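To make the decomposition of the velocity shift concrete, the sketch below evaluates the slip and Reynolds-stress contributions from inner-scaled profiles, following the approximate bookkeeping ΔU⁺ ≈ U_slip⁺ + ∫(⟨u′v′⟩⁺_porous − ⟨u′v′⟩⁺_smooth) dy⁺ up to a point in the logarithmic region. The profiles are synthetic placeholders, and the exact terms of the paper's equation (3.4) may differ in detail.

```python
# Sketch: estimating the contributions to the roughness function Delta U+ from
# (i) the interfacial slip velocity and (ii) the change in Reynolds shear stress,
# integrated from the substrate surface (y+ = 0) up to a point in the log region.
import numpy as np

# Synthetic inner-scaled profiles of <u'v'>+ on a common wall-normal grid (placeholders).
y_plus = np.linspace(0.0, 100.0, 400)
uv_smooth = -(1.0 - np.exp(-y_plus / 25.0)) * (1.0 - y_plus / 360.0)
uv_porous = -(1.0 - np.exp(-y_plus / 18.0)) * (1.0 - y_plus / 360.0)  # stronger near-wall stress
u_slip_plus = 1.2                                                     # mean slip velocity at y+ = 0

# Reynolds-stress contribution: integral of the stress difference up to y+ = y_log.
y_log = 80.0
mask = y_plus <= y_log
T_rs = np.trapz((uv_porous - uv_smooth)[mask], y_plus[mask])   # negative => drag increasing

delta_u_plus = u_slip_plus + T_rs
print(f"slip contribution            = {u_slip_plus:+.2f}")
print(f"Reynolds-stress contribution = {T_rs:+.2f}")
print(f"estimated Delta U+           = {delta_u_plus:+.2f}")
```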
The same trend holds for HP2′ and HP3′, which have streamwise-preferential anisotropy unlike their baseline counterparts HP2 and HP3; however, their increased anisotropy leads to increased drag. In the LP cases, on the other hand, as the anisotropy increases from LP3 to LP1 the drag becomes reduced, although the changes are small. Overall, none of the substrates allow for the development of any significant slip velocity at their surfaces. Therefore, the differences in ΔU⁺ are primarily due to the differences in −u′v′⁺ close to the permeable surface. Outer layer similarity Throughout the literature, assessments have been made about the applicability of the logarithmic law of the wall to permeable-wall turbulence, with inconsistent results. The logarithmic law of the wall is as introduced by Clauser (1954) for rough-wall turbulence and takes into account ΔU⁺, the argument being that the only difference between a smooth and a non-smooth turbulent flow is ΔU⁺ while the von Kármán constant, κ, remains the same. Commonly, when assessing the log law over rough or permeable walls, a displacement height is introduced and the wall coordinate is shifted so that the effective wall origin lies below the surface. Here, the approach of Orlandi & Leonardi (2006) is employed instead, where the mean velocity is measured relative to the slip velocity which exists at the permeable surface of the porous substrates, giving the modified law of the wall (3.6). In this manner, the introduction of a displacement height is avoided and the porous cases share a common definition with the smooth-wall case at the surface, since the relative mean velocity there becomes zero for all of them. The values of κ estimated by fitting the modified law of the wall (3.6) for the various cases are gathered in table 2. The estimated values fall within the range of those typically reported for smooth-wall turbulent boundary layers, and κ is approximately the same for all examined cases. This suggests that outer-layer similarity is retained for the cases investigated here. Examining the velocity fluctuations and Reynolds shear stress distributions in outer-scaled wall coordinates (figure 7) reinforces the existence of outer-layer similarity for the turbulent flow over all the porous substrates considered. This is in line with the DNS study of turbulent flows over acoustic liners by Shahzad et al. (2023), who obtained a constant κ value for both their acoustic liner and smooth-wall cases. Prior observations of κ differing from its typical smooth-wall value for turbulence over porous media could therefore be due to the absence of sufficient scale separation, such that the overlying flow is altered up to a significant distance away from the substrate region. This was also suggested by Shahzad et al. (2023) as the probable reason behind the discrepancy in κ values reported by Breugem et al. (2006) and Kuwata & Suga (2017). Manes et al. (2011) and Chen & García-Mayoral (2023) suggest that the fitting approach adopted by Breugem et al. (2006) and Kuwata & Suga (2017) for estimating κ could be a source of error.
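A minimal sketch of how κ could be estimated by least-squares fitting the slip-relative log law, U⁺ − U_slip⁺ = (1/κ) ln y⁺ + B, over a nominal logarithmic region is given below; the velocity profile and the fitting bounds are placeholders, and the paper's exact fitting range is not specified here.

```python
# Sketch: fit the von Karman constant kappa from a mean velocity profile by least
# squares on U+ - U_slip+ = (1/kappa) * ln(y+) + B over an assumed log region.
import numpy as np

# Synthetic profile that follows the log law with kappa = 0.39, B = 4.5 (placeholder).
rng = np.random.default_rng(0)
y_plus = np.logspace(0.5, 2.5, 200)
u_plus = (1 / 0.39) * np.log(y_plus) + 4.5 + 0.05 * rng.standard_normal(y_plus.size)
u_slip_plus = 0.0    # slip velocity already subtracted in this synthetic example

# Assumed fitting window within the logarithmic region.
mask = (y_plus > 50) & (y_plus < 250)
A = np.vstack([np.log(y_plus[mask]), np.ones(mask.sum())]).T
slope, intercept = np.linalg.lstsq(A, (u_plus - u_slip_plus)[mask], rcond=None)[0]

kappa = 1.0 / slope
print(f"fitted kappa = {kappa:.3f}, intercept B = {intercept:.2f}")
```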
Flow regime distinction with regards to permeability The separation of the substrates into two groups becomes more clearly distinguishable when plotting the drag change, ΔU⁺, against the wall-normal permeability (figure 8a). The leap in ΔU⁺ when going from the LP to the HP cases has been attributed to the near-wall turbulence dynamics undergoing a transition and departing from the canonical regime for a K-H-like one (Gómez-de Segura & García-Mayoral 2019). Using linear stability analysis with boundary conditions derived from the Darcy-Brinkman equation, Gómez-de Segura et al. (2018) proposed the relation (3.7) for quantifying the influence of a substrate's permeability on triggering this transition. For sufficiently deep substrates, the second hyperbolic tangent term becomes ≈ 1 and the relation becomes simplified, with the dominant term being the effective wall-normal permeability. Gómez-de Segura & García-Mayoral (2019) determined a value of approximately 0.4-0.6 (in inner units) for this effective permeability length scale as the threshold at which the onset of and transition to the K-H-like regime occurs. This agrees with the simulation data in this work, shown in figure 8c, where this measure reaches ≈ 0.31 for the last case before the transition, beyond which the K-H-like instability becomes triggered and subsequently grows in strength (shaded region in figure 8c). Cases HP2′ and HP3′, which differ in terms of their anisotropy from the other cases (figure 8b), also conform to (3.7). While (3.7) does seem suitably applicable to the porous substrates considered here, it was originally obtained under particular assumptions on the permeability components. In related work by Sharma et al. (2017), it was demonstrated that, under different assumptions, linear stability analysis leads to different results, expressed using (3.8), where the hyperbolic tangent terms again become ≈ 1. The results from applying (3.8) to the simulation data of the cases in table 1 are shown in figure 8d. They are similar to those in figure 8c obtained using (3.7). Anisotropy becomes reflected more prominently when characterising the substrates using (3.8), as HP2′ and HP3′ become more separated from HP2 and HP3 in figure 8d compared to figure 8c. Also, the change across the transition is captured with more abruptness when using (3.8). Overall, the criteria of Gómez-de Segura et al. (2018) and Sharma et al. (2017) are good predictors of when the turbulence dynamics over porous structures departs from its canonical nature. The first-order influence of K_y⁺ becomes evident when taking into account that the weakening of the wall-blocking effect at the surface is directly tied to this permeability component. Once permeability at the surface is present to a sufficiently large degree to permit the penetration of momentum into the substrate, the fluid moving below the surface must then contend with the horizontal blockage imposed by the substrate, which is characterized by K_x⁺ and K_z⁺, giving these permeability components second-order significance. The two criteria of (3.7) and (3.8) will probably exhibit more distinguishable results from one another for substrates with higher anisotropy, whereas here they are mostly similar to one another. Turbulence structure In figure 5b, it was shown that the HP cases are distinguished by higher Reynolds shear stresses in the region close to the permeable surface. Quadrant analysis of the velocity fluctuations can be leveraged to examine the change in turbulence intensities with respect to flow events, in particular the contribution from ejections (Q2, u′ < 0 and v′ > 0) and sweeps (Q4, u′ > 0 and v′ < 0). This is shown for the y⁺ ≈ 5 plane above the surface in figure 9.
Going from an impenetrable smooth wall (figure 9a) to the permeable case LP1 (figure 9b) and finally to the more permeable case HP1 (figure 9c), a tilting and expansion of the joint probability distributions of u′ and v′ is seen. As explained by Manes et al. (2011), the tilting is attributable to an increase in v′ activity, which is also evident when viewing the r.m.s. velocity fluctuations in figure 10, particularly when going from the LP to the HP cases. In terms of flow events, sweeps become increasingly dominant as the permeability increases, in both strength and number of occurrences, contributing to a greater generation of Reynolds shear stress in the near-surface region. This is a feature common to flows over permeable surfaces, be they canopies (Finnigan, Shaw & Patton 2009) or porous media (Manes et al. 2011). The experiments of Manes et al. (2011) also showed the distribution of points initially growing and then subsequently shrinking when going from their lowest permeability to their highest permeability case (refer to figure 15 of their manuscript). They attributed this behaviour to the near-surface flow mechanism changing to a mixing-layer type for the highest-permeability porous media they investigated, similar to what occurs for turbulence over canopies (Finnigan 2000). The highest permeability examined by Manes et al. (2011), for which they observed mixing-layer behavior, was √(K⁺) ≈ 17 at a Reynolds number of approximately 3848. This is considerably greater than the highest effective wall-normal permeability investigated here (√(K_y⁺) ≈ 3.4), suggesting that the cases in this study do not fall into the category of mixing-layer type behavior. The differences in turbulence structure become better established by examining the spectral energy densities of the velocity fluctuations. The spectra of cases HP1 and LP1 in figure 11 are different from one another, with HP1 exhibiting energetic scales at large spanwise wavelengths which are absent in LP1. For the transitional case, the onset of the instability can be inferred from the emergence of energetic scales at large spanwise wavelengths, but it is not yet intensified. This spanwise coherent component is attributable to the existence of spanwise rollers associated with a K-H-like instability over permeable boundaries (Gómez-de Segura & García-Mayoral 2019). The presence of these roller structures for the HP cases can be observed in the vortex visualization of figure 12. Additionally, by using spectral proper orthogonal decomposition (SPOD), modes are obtained for HP1 which capture the spanwise coherent rollers, but for LP1, and indeed all of the LP cases, no such modes are obtained. These SPOD modes for HP1 may be viewed in appendix B, but have been omitted from the main text for brevity. The emergence of these K-H-like structures is the cause behind the intensification of turbulent activity in the proximity of the substrate surface and the drag increase. Recalling the drag decomposition in figure 6, it was observed that increased streamwise-preferential anisotropy had a drag-reducing effect in the LP cases, albeit a small one. As it is clear now that these cases belong to the smooth-wall-like turbulence regime, this makes the observed drag change in line with the results of Gómez-de Segura & García-Mayoral (2019). In the smooth-wall-like regime, the changes in drag are due to the "virtual-origin" effect (Luchini et al. 1991; Jiménez 1994; Luchini 1996; García-Mayoral et al. 2019; Ibrahim et al. 2021; Habibi Khorasani et al.
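As an illustration of the quadrant analysis used above, the sketch below classifies instantaneous (u′, v′) samples into the four quadrants and sums their contributions to the Reynolds shear stress; the fluctuation samples are random placeholders rather than DNS data.

```python
# Sketch: quadrant analysis of Reynolds-shear-stress-producing events.
# Q2 (u'<0, v'>0) are ejections; Q4 (u'>0, v'<0) are sweeps.
import numpy as np

rng = np.random.default_rng(1)
# Placeholder fluctuation samples with a negative u'v' correlation, as near a wall.
u = rng.standard_normal(100_000)
v = -0.4 * u + 0.9 * rng.standard_normal(100_000)

uv = u * v
quadrants = {
    "Q1 (outward interaction)": (u > 0) & (v > 0),
    "Q2 (ejection)":            (u < 0) & (v > 0),
    "Q3 (inward interaction)":  (u < 0) & (v < 0),
    "Q4 (sweep)":               (u > 0) & (v < 0),
}
total = uv.mean()
for name, mask in quadrants.items():
    contribution = uv[mask].sum() / uv.size      # fractional contribution to <u'v'>
    print(f"{name:26s}: {contribution / total:5.2f} of <u'v'>, {mask.mean():6.1%} of samples")
```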
2022), where the Reynolds stress generating quasi-streamwise vortices become displaced farther away from the surface where a slip velocity is present, resulting in a net drag reduction.This linear mechanism of passive drag reduction is only achievable so long as the flow in the immediate proximity of the surface remains Stokes-like (Luchini 2015), and becomes negated beyond it. Surface flow The focus is now placed on the permeable surface of the substrates at + = 0, which is the region directly in contact with the overlying turbulent flow.Figure 13 shows the velocity and pressure fluctuations at + = 0; the differences that were observed in the flow-fields of 1 and 1 above the surface (figure 4) are also reflected here.It can be observed from figure 13a that 1 lacks the streaky patterns of 1, shown in figure 13b, but has more intense activity.The imprint of the surface geometry is also more clearly visible in the flow-field of 1.For the wall-normal velocity, spanwise coherent patterns are observable for 1 (figure 13c) whereas such coherency is not discernible for 1 (figure 13d). The spectral energy densities at the surface level in figure 14 reveal the signature of the K-H-like structures (indicated with the red lines) observed in figure 11 for 1.For 1, no such signature is visible and overall far less flow activity occurs at the permeable surface.Additionally, the spectra show energetic regions at wavelengths equal to the horizontal substrate spacings + and + (indicated using the green lines) along with their sub-harmonic wavelengths.These regions represent the pore-coherent flow which is modulated by the ambient turbulence, as similarly occurs over rough surfaces (Abderrahaman-Elena et al. 2019).The pore-coherent flow component forming along the spanwise direction repeats periodically along the streamwise direction in intervals of + = + .This flow component is modulated by the ambient turbulence and becomes amplified over a broad range of spanwise wavelengths as can be seen in figures 14a and 14b for the and spectra of 1.A similar effect takes place for the pore-coherent flow component forming along the streamwise direction.While the pore-coherent flow does exist for 1, it is significantly weaker compared to 1.Regarding the 2 ′ and 3 ′ cases, some delicate differences can be observed compared to 2 and 3, but the assessment of them is not done here and is instead gathered in appendix C for the interested reader. Ultimately, the amplified pore-coherent flow (areas enclosed by green lines in the spectra of figure 14) and its sub-harmonics for the cases can be attributed to the existence of energetically coherent ambient turbulent scales, particularly those associated with the K-H-type structures.This would explain why the cases, despite having streamwise and spanwise pitch-lengths of comparable size to those of the cases, do not exhibit similarly strong pore-coherent flows.Broadband excitation of any flow component induced by the geometry of the porous substrate is contingent upon the existence of broadband energetic turbulent structures.The observations made here have important implications for the sub- Figure 15: Mean velocity profiles inside the substrates for the cases of table 1. surface flow since the pore-coherent flow, which undergoes modulation by the ambient turbulence, factors into the scale-selection that takes place at the surface and therefore the scales of motion that occur inside the substrates. 
Sub-surface flow

Thus far, the effects due to the presence of a porous substrate have been examined for the overlying flow. Attention is now given to the sub-surface flow that develops inside the substrates.

Mean flow, fluctuating velocities and Reynolds shear stress

First, the mean velocity along with the fluctuations of the different velocity components are examined. Figure 15 demonstrates that the mean flow develops a shear layer beneath the surface, where the flow decays rapidly over a short distance in an exponential manner. This is mainly notable for the higher-permeability cases, since only a very weak mean flow develops inside the substrates for the lower-permeability cases. Within the shear layer, the mean flow exhibits flow reversal. Larger wall-normal pore spacings cause this region to become extended, and the position at which the flow undergoes reversal corresponds to the bottom of the first pore layer. The exponential decay exhibits similarity across the different cases, but seemingly requires both a large enough wall-normal pore spacing and a strong surface flow to develop, since the lower-permeability cases do not exhibit such behaviour. The Reynolds shear stress undergoes a similar pattern of sign change as the mean flow below the surface (figure 16). The higher-permeability cases once again develop a region of rapid change which the lower-permeability cases do not demonstrate; the former cases also have notably greater magnitude. As the first pore layer becomes deeper, this reversal region becomes extended. Cases HP2′ and HP3′ exhibit the same pattern as the other cases.

For the velocity fluctuations (figure 17), all of them gradually decay toward the floor of the porous substrates, where they become forcibly dampened due to the no-slip condition. However, the wall-parallel fluctuations undergo dampening at the bottom of each pore layer, as evidenced by their oscillatory patterns, whereas the wall-normal fluctuations largely demonstrate a monotonic decay. Consider the downward path from a surface pore opening: the wall-normal flow entering an opening will not come across any barriers on the way toward the substrate floor, since it moves through what is essentially a narrow duct. For wall-parallel flow, however, at the bottom of each pore layer the interconnected rods of the substrate's geometry will impede any in-plane motion. The overall magnitude of the spanwise velocity fluctuations is less than those of the streamwise and wall-normal velocity fluctuations, which follows from the spanwise velocity also being the least energetic velocity component at the surface of the substrates. HP2′ and HP3′ have stronger streamwise fluctuations compared to HP2 and HP3. This can primarily be attributed to their larger streamwise permeability, resulting in less impedance of streamwise momentum, but may also be attributable to the stronger turbulence at the surface (figure C.2), resulting in a greater modulation of the sub-surface flow, an aspect which will be examined in §6.

Some of the observations made here have been similarly reported for turbulent flows over engineered dense canopies (Sharma & García-Mayoral 2020), such as the gradual decay of the wall-normal fluctuations. Periodic dampening of the fluctuations was not reported for the canopy flows, but this is attributable to the porous substrates having layers of interconnected solid elements, whereas the canopy filaments were isolated from one another and do not place similar restrictions on in-plane fluid motion.
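The exponential decay of the sub-surface mean flow described above can be quantified by a simple least-squares fit of a decay length over the near-surface region. The sketch below is illustrative only: it assumes a mean-velocity profile sampled on a wall-normal grid with the surface at the origin, and the fitting window, initial guesses and variable names are not taken from the study.

```python
# Minimal sketch (assumed workflow, not the authors' code): estimate the decay length
# of the sub-surface mean flow by fitting U(y) ~ U_s * exp(y / delta) for y < 0.
import numpy as np
from scipy.optimize import curve_fit

def decay_model(y, U_s, delta):
    return U_s * np.exp(y / delta)        # y is negative below the surface

def fit_decay_length(y, U, y_min=-0.2, y_max=0.0):
    """y, U: mean-velocity profile inside the substrate (y = 0 at the surface).
    Fits only the near-surface region [y_min, y_max], before flow reversal sets in."""
    mask = (y >= y_min) & (y <= y_max) & (U > 0)
    popt, _ = curve_fit(decay_model, y[mask], U[mask], p0=(U[mask].max(), 0.05))
    return popt  # (U_s, delta)

# Example with synthetic data mimicking a rapid near-surface decay
y = np.linspace(-0.5, 0.0, 200)
U = 0.8 * np.exp(y / 0.03) + 1e-3 * np.random.randn(y.size)
U_s, delta = fit_decay_length(y, U)
print(f"surface velocity ~ {U_s:.3f}, decay length ~ {delta:.3f}")
```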
Flow structure and features

The surface flow was described in §4; bearing in mind the observations made there, the sub-surface flow is now examined by assessing the instantaneous fluctuations within the first pore layer, at a depth of approximately 15 wall units, in figure 18. The flow fields of both HP2 and HP3 show spanwise-elongated patterns in both their streamwise and wall-normal velocity fluctuations (figures 18a, 18e, 18b, 18f). The pressure fluctuations of both HP2 (figure 18i) and HP3 (figure 18j) reflect the patterns of their streamwise and wall-normal velocity fluctuations, suggesting that the velocity fluctuations inside the substrates are induced by the pressure fluctuations. Kuwata & Suga (2016) attributed the velocity fluctuations occurring within the porous substrate to the pressure fluctuations caused by the K-H-like instability at the surface. The observations made here seem to agree with this, as the turbulence in the near-surface region of both HP2 and HP3 falls into the K-H-like regime. The flow field of HP3′ (figures 18d, 18h, 18l) is similar to that of HP3 (figures 18b, 18f, 18j), with no discernible differences existing between them. The flow field of HP2′ in figures 18c, 18g, 18k shows a stronger spanwise coherency than HP2 in figures 18a, 18e, 18i (this greater coherency can also be observed in the surface flow field in figure C.1, and is attributed to the stronger K-H-like scales visible in the spectra of figure C.2).

More details are revealed by examining the spectra of the fluctuations within the first pore layer, at a depth of 15 wall units, shown in figure 19. The spectra of HP2 and HP3 (figures 19a-d and 19e-h) show that almost no ambient turbulence scales penetrate into the substrate. Only the pore-coherent flow remains energetically discernible, as seen by its spectral signature enclosed by the green lines of figure 19. The pore-coherent flow is also diminished compared to the surface region (figure C.2), but not as strongly as the ambient turbulence. For HP2′, its stronger K-H-like scales at the surface level lead to the survival of those scales down to this depth within the substrate (the regions enclosed by the red lines in figures 19i, 19j, 19l), although they are quite weak. Despite having the same wall-normal permeability as HP2, the streamwise-preferential anisotropy of HP2′ leads to stronger turbulent scales at the surface, which are then able to penetrate deeper into the substrate.
Owing to the fact that turbulence does not survive this deep into the porous substrates for HP2, HP3, HP2′ and HP3′, the coherent patterns observed in their flow fields (figure 18) must be attributed to the pore-coherent flow, which remains detectable at this depth. The broadband spanwise intensification of the pore-coherent flow, however, is imparted to it from the surface-level turbulence, which possesses spanwise-coherent energetic scales. The modulation persists throughout the substrate, which is why the spectra in figure 19 show long patches of spanwise energetic scales, particularly for the wall-normal velocity. Ultimately, for the porous substrates under consideration here, there exists a notable pore-coherent flow component below the surface, and in some cases weak scales of ambient turbulence related to the K-H-like instability. Similarly, in flows over canopies the fluctuations below the canopy tip-plane are attributed to the strong overlying cross-flow rollers that develop due to the existence of a perturbed mixing layer (Sharma & García-Mayoral 2020). As mentioned previously in §3.5, for a mixing layer to emerge over porous media a very high effective surface permeability (or permeability Reynolds number) is required (Manes et al. 2011). Outside of this mixing-layer regime, the flow below the surface seems to be mainly due to the pore-coherent flow, which undergoes modulation by the turbulence at the substrate's surface. Manes et al. (2011) examined whether the resulting eddy structures over their porous foams shared the same characteristics as those reported over canopies, which are associated with an inflectional instability of the mean velocity (White & Nepf 2007). They observed this not to be the case for low to intermediate ranges of permeability. This also applies to the porous substrates examined in this paper, and the analysis done to quantify this is gathered in appendix B.

Before proceeding further, as a final examination of whether the flow structure undergoes any notable change deeper inside the substrate, the instantaneous fluctuations as well as the spectra at a depth of approximately 55 wall units for case 1 alone are shown in figure 20. One can see that the patterns are overall similar to those observed at a depth of approximately 15 wall units for the rest of the cases, with the most notable scales of motion again being those of the pore-coherent flow, while a weak footprint of the K-H-like scales is also present. The existence of the latter at this depth is of course attributable to the stronger overlying K-H-like structures of case 1. In addition, case 1 also lacks interconnected rod layers inside the substrate, which would otherwise impede downward-directed flow. The explanation for the spanwise coherence of the flow field is similar to what was previously described for the flow at the shallower depth of 15 wall units. The spectral signature of the pore-coherent flow, which encompasses a range of streamwise scales, is repeated along the spanwise direction at intervals equal to the spanwise pitch, i.e. between two consecutive pores along this direction. In essence, the patches are collections of narrow fingers of streamwise velocity which are confined to the pores due to a micro-channelization effect. These flow elements appear as a spanwise-coherent region macroscopically due to being modulated in amplitude, which is what will be examined next.
Surface-flow induced amplitude modulation of sub-surface flow

The pore-coherent flow is subject to amplitude modulation (AM) by the scales of the overlying ambient turbulence, evidence of which was provided in the experimental investigation of Kim, Blois, Best & Christensen (2020). This phenomenon will now be examined for the substrates considered in this study.

The presence of any solid structure introduces spatial inhomogeneities within the flow field. A conventional method for isolating this effect is the triple decomposition of Reynolds & Hussain (1972),

u = U + u′,  (6.1)
u′ = ũ + u″,  (6.2)

where u is the total velocity, U the time- and space-averaged mean velocity and u′ the fluctuating velocity. The fluctuating component itself then consists of a spatially inhomogeneous, time-averaged component, ũ, called the dispersive velocity, and a turbulent component, u″. Amplitude modulation is a dynamic effect that is not reflected in the time-averaged dispersive velocity field. The fluctuating velocity, along with its different components from (6.2), is shown in figure 21 for case 1. The time-averaged dispersive velocity field is weak and free of irregularities, but, as is evident in the spectra of figure 20d, the pore-coherent flow which resides in u″ has a different spatial pattern. Abderrahaman-Elena et al. (2019) proposed a modified triple decomposition and used it to quantify the AM effect for rough-wall turbulence. However, for the porous substrates examined here the wavelengths of the pore-coherent flow and those of the ambient turbulence do not significantly overlap, and regular Fourier filtering can be used to isolate this effect. This is demonstrated in figure 22, where the high-frequency (short-wavelength), amplitude-modulated signal of the pore-coherent flow has been removed from the streamwise velocity signal at the surface using low-pass filtering. This recovers the low-frequency (long-wavelength) signal of the ambient turbulence. There does not seem to be a discernible AM effect above the surface, at about 5 wall units, as the pore-flow component (the undulations of the black line) does not undergo notable changes in amplitude. The AM effect is similarly demonstrated for the wall-normal velocity in figure 23. Note that this AM phenomenon is different from the AM observed in canonical turbulent flows between the inner and outer flow regions (Mathis, Hutchins & Marusic 2009). That effect is due to the existence of large-scale structures within the log layer, which emerge when the friction Reynolds number is sufficiently large (above approximately 1700).

The approach undertaken here to quantify AM follows that of Mathis, Hutchins & Marusic (2009), where the correlation between the low-pass filtered (large-scale) streamwise velocity fluctuations and the long-wavelength envelope of the high-pass filtered (small-scale) velocity fluctuations, taken at two different fixed positions (one for the large-scale signal and one for the small-scale signal), quantifies the degree of AM (note that the small-scale signal can be any of the velocity components),

R_AM = ⟨u′_L E_L(q′_S)⟩ / ( ⟨u′_L²⟩^{1/2} ⟨E_L(q′_S)²⟩^{1/2} ).  (6.3)

In (6.3), E_L denotes the long-wavelength envelope of a signal, which is acquired using the Hilbert transform. The Hilbert transform of a real-valued function x(t) produces another real-valued function x̃(t). Together, x(t) and x̃(t) form a harmonic conjugate pair and define the complex analytic signal of x(t),

z(t) = x(t) + i x̃(t) = A(t) e^{iφ(t)}.  (6.4)

This provides the instantaneous envelope, A(t), and phase, φ(t), allowing for the demodulation of the original modulated signal x(t). More details regarding the Hilbert transform may be found in Mathis et al. (2009) and the references contained therein.
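As a concrete illustration of the envelope extraction and correlation in (6.3) and (6.4), the following minimal sketch computes an AM coefficient for one-dimensional signals with SciPy. It assumes uniformly sampled, periodic signals and a single sharp cutoff wavelength separating large and small scales, and it uses synthetic inputs; it is not the code used for this study.

```python
# Minimal sketch of the Hilbert-transform-based AM coefficient (illustrative only).
import numpy as np
from scipy.signal import hilbert

def fourier_filter(signal, dx, cutoff_wavelength, keep="long"):
    """Sharp Fourier low-pass ('long') or high-pass ('short') filter of a periodic signal."""
    k = np.fft.rfftfreq(signal.size, d=dx)             # spatial frequency 1/lambda
    s_hat = np.fft.rfft(signal)
    long_waves = k <= 1.0 / cutoff_wavelength
    mask = long_waves if keep == "long" else ~long_waves
    return np.fft.irfft(s_hat * mask, n=signal.size)

def am_coefficient(u_large, q_small, dx, cutoff_wavelength):
    """Correlation between the large-scale signal and the long-wavelength envelope
    of the small-scale signal, following the procedure described in the text."""
    envelope = np.abs(hilbert(q_small))                       # instantaneous envelope A
    env_L = fourier_filter(envelope, dx, cutoff_wavelength)   # keep its long-wavelength content
    env_L -= env_L.mean()
    uL = u_large - u_large.mean()
    return np.mean(uL * env_L) / (uL.std() * env_L.std())

# Example: a small-scale carrier whose amplitude is modulated by a large-scale signal
n, dx = 4096, 0.01
x = np.arange(n) * dx
u_L = np.sin(2 * np.pi * x / 10.0)                            # large-scale fluctuation
q_S = (1.0 + 0.5 * u_L) * np.sin(2 * np.pi * x / 0.2)         # amplitude-modulated small scales
u_S = fourier_filter(q_S, dx, cutoff_wavelength=1.0, keep="short")
print("R_AM ~", am_coefficient(u_L, u_S, dx, cutoff_wavelength=1.0))
```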
Other approaches can also be used to assess modulation effects, such as the wavelet analysis used by Baars et al. (2015) to quantify AM and FM effects. The Hilbert-based approach, however, remains robust. The long-wavelength envelope of the small-scale velocity, obtained after taking the Hilbert transform of the high-pass filtered velocity time-signal, is then filtered to retain only its long-wavelength content, i.e. the slowly varying modulation of the small-scale velocity signal. Unlike in experimental measurements, the velocity signals here are not single-point measurements. Instead, spanwise-averaged one-dimensional velocity signals at different wall-normal planes are used: instantaneous correlations are first obtained and then ensemble-averaged over all temporal samples to obtain a single correlation coefficient.

The plot at the top of figure 24 shows how the small-scale wall-normal velocity fluctuations at the permeable surface are modulated in amplitude by the large-scale streamwise velocity fluctuations of the ambient turbulence above it. The envelope of the small-scale wall-normal velocity rises and falls along with the variations in the amplitude of the large-scale streamwise velocity. The high degree of correlation becomes clearer when considering the bottom plot, where the envelope is phase-shifted by π, making it overlap to a significant extent with the large-scale streamwise velocity signal. This also demonstrates that the streamwise and wall-normal velocity events are almost always in anti-phase with respect to one another close to the surface. Such a modulation effect is not observed between the large-scale streamwise velocity and the small-scale spanwise velocity in figure 25. The lack of AM from the outer to the inner region is also demonstrated using this approach in appendix D.

The instantaneous frequencies of the velocities can be calculated from their analytic signals obtained using the Hilbert transform. These can then be used to calculate the instantaneous phase difference between the streamwise and wall-normal velocities. A probability density histogram of instantaneous phase differences is shown in figure 26b for case 1, demonstrating that the streamwise and wall-normal fluctuations are predominantly in anti-phase. The probability density histogram for the AM correlation coefficient (6.3) is shown in figure 26a and demonstrates the persistent presence of AM between the streamwise and wall-normal fluctuations as they evolve in time. Similar effects are not observed for the spanwise fluctuations in figures 26c and 26d. Examination of the flow inside the substrates (figures 20 and 21) demonstrated that AM is present within them. As such, it is of interest to see how deep the effect persists and also how its strength differs between the various substrates. Figure 27 displays the AM effect on the wall-normal velocity at different depths for the substrates of table 1. AM remains quite strong half-way down into the substrates for the higher-permeability cases. It is much weaker overall for the lower-permeability cases, highlighting how the energetic surface-level dynamics of the cases that fall into the K-H-like regime enhance this effect.

Summary and discussion

Direct numerical simulations of turbulent flows in an open-channel geometry, where the wall side of the channel is covered by a porous substrate, have been carried out in this work. Anisotropic porous substrates with permeability components of different values were first assessed in terms of how they cause changes in the overlying turbulent flow. When the wall-impedance condition becomes weakened, near-wall turbulence undergoes a transition away from its canonical structure, characterized by the presence of streaks and quasi-streamwise vortices, to one where spanwise-coherent structures reminiscent of the K-H instability emerge.
The primary permeability component of significance in determining wall impedance is the wall-normal permeability. An analysis using the permeability criteria of Sharma et al. (2017) and Gómez-de Segura et al. (2018) for predicting when turbulence transitions to a K-H-like regime remains robust for the DNS data in this study. The condition was obtained using linear stability analysis of permeable-wall boundary conditions derived from the Darcy-Brinkman equation, but it remains applicable to the pore-scale-resolved DNS data, indicating that the microstructure details of the porous substrates do not have a leading-order impact on the instability. This agrees with the argument made by White & Nepf (2007), who assessed that only the overall resistance of the porous layer is important, and not the details of its geometry, in bringing about and sustaining the instability.

Past results in the literature, using continuum-based approaches to representing a porous region or using permeable-wall boundary conditions, have suggested that a reduction in drag is perhaps attainable for certain combinations of permeability, particularly for streamwise-preferential anisotropy (Gómez-de Segura & García-Mayoral 2019). However, none of the porous substrates examined here resulted in drag reduction (figure 5a). Drag reduction in a passive manner can be obtained if a surface can simultaneously weaken viscous dissipation while impeding turbulent mixing from taking place in its vicinity, an effect which is quantified in the "virtual-origin" framework (Luchini 1996; Ibrahim et al. 2021). The weakening of viscous dissipation is typically quantified in terms of a slip velocity, which is negligible for the porous substrates tested in this study. Turbulent activity, however, increases in the vicinity of the surface, such that the net effect becomes one of drag increase (figure 6). Streamwise-preferential anisotropy does not lead to a drag-reducing effect here, since such an effect can only occur in the canonical smooth-wall-like regime of near-wall turbulence, to which only the lower-permeability cases belong. Those cases do, however, exhibit a trend of decreasing drag with increased streamwise-preferential anisotropy. Ultimately, drag reduction using porous media must be assessed in terms of the slip velocity (or slip length) they can produce at the surface.

At the surface of the porous media, spectral analysis reveals the existence of flow signatures conforming to the geometry of the surface and with amplified levels of energy (figure 14). Inside the porous substrates, surviving turbulent scales become rapidly dampened and the energetically dominant flow component is the pore-coherent flow (figure 19). The pore-coherent flow is geometry dependent, which makes the microstructure of the porous medium, particularly at the surface, an important aspect of its design.

The aforementioned pore-coherent flow undergoes significant amplitude modulation by the ambient turbulent motion present near the surface of the porous media. This AM effect extends deep into the porous media, perturbing the flow there and becoming a principal means of inducing flow activity inside it. Stronger ambient turbulence at the surface strengthens this effect, such that it becomes more pronounced for the cases that fall into the K-H-like regime (figure 26). This is because the K-H-like structures lead to greater momentum exchange between the surface and sub-surface flow. These flow features are conceptually illustrated in figure 28, where going from left to right corresponds to increasing permeability.
Knowledge of the regimes illustrated in figure 28, and of the scale interaction which occurs between the porous media and the turbulent flow, can be leveraged in applications involving heat and mass transfer. Unlike flow momentum, heat transfer stands to benefit from more intense turbulent activity in the vicinity of the porous medium, as this will lead to greater thermal convection. To what degree this can be exploited is one example of an interesting line of inquiry that can be pursued in relation to turbulence and porous media.

The permeabilities with the subscript have a grid resolution of 20 grid points per rod thickness and those without a subscript 10 points per rod thickness; the last three columns list the differences in the permeabilities obtained using the two resolutions.

While the resolution requirements for regular channel flow simulations are well established throughout the literature (Kim & Moin 1985; Lee & Moser 2015), problems involving fluid-solid interactions need to be assessed on a case-by-case basis. The baseline grid has a resolution of 1620 × 324 × 810 points. This grid more than suffices for resolving the bulk flow region, but it must be determined whether or not the wall-parallel resolution is sufficient for resolving the solid phase of the porous substrates. With this baseline configuration, the number of wall-parallel grid points per substrate rod thickness becomes 10. This was chosen based on the grid study results of Sharma & García-Mayoral (2020), which was conducted for turbulent flows over canopies.

A similar frequency characterization was performed for case 1, which has the highest wall-normal permeability (approximately 3.4 in wall units) out of all the cases in table 1. First, spectral proper orthogonal decomposition (SPOD) (Towne et al. 2018) is applied to the surface flow of case 1. Examination of the SPOD eigenvalues in figure B.2 reveals a peak in the leading SPOD mode at a normalized frequency of about 0.22 for both the streamwise and wall-normal velocity components. This is close to the value of approximately 0.22 reported by Manes et al. (2011) for their metal foam, which had a permeability of about 3.2 in wall units. Observing the first SPOD mode for both the streamwise and wall-normal velocity in figure B.1 reveals recurrent spanwise-elongated patterns. Such patterns are not recovered in the SPOD modes for any of the lower-permeability cases (not shown). This further demonstrates the regime distinction that was described in §3.4 and §3.5, and it can be concluded that for the porous substrates of table 1 the mixing-layer analogy does not hold, in agreement with the conclusion made by Manes et al. (2011).

Figure 1: Sketch of the computational domain.

Figure 2: (a) Schematic of general substrate geometry along with (b, c) its nomenclature.

Figure 5: (a) Mean velocity and (b) Reynolds shear-stress distributions of the bulk flow overlying the porous substrates. Symbols and colors follow the descriptions in table 1. The black line is a reference smooth-wall solution at a friction Reynolds number of 360.

Figure 7: (a) Reynolds shear stress and (b) root-mean-square velocity fluctuations above the substrates in outer-scaled wall coordinates.

Figure 10: Distributions of the root-mean-square velocity fluctuations for the bulk turbulent flow overlying the porous substrates.
Figure 12: Vortex visualization using the Q-criterion for substrate 1. The vortex core (light color) is a region of positive Q (high vorticity) and is surrounded by a sheet (dark color) of negative Q (high shear). The hot and cold regions below the surface represent positive and negative wall-normal velocity fluctuations, respectively. Flow direction is from left to right.

Figure 13: Instantaneous fluctuations of (a, b) streamwise velocity, (c, d) wall-normal velocity, and (e, f) pressure at the surface plane. First column, case 1; second column, case 1. Flow direction is from left to right. The white regions are due to the presence of the porous substrate's rods.

Figure 16: Reynolds shear stress profiles inside the substrates for the cases of table 1.

Figure 19: Pre-multiplied spectral densities of the velocity and pressure fluctuations at a depth of approximately 15 wall units. First row, HP2; second row, HP3; third row, HP2′; fourth row, HP3′. Note the differences in the overall magnitude of the contours for the different cases. The green lines enclose the most energetically significant parts of the pore-coherent flow and the red lines those of the K-H-like rollers.

Figure 20: Instantaneous velocity fluctuations and pre-multiplied spectral energy densities of case 1 at a depth of approximately 55 wall units: (a) streamwise fluctuations, (b) wall-normal fluctuations, (c) pressure fluctuations; (e)-(h) the corresponding spectra. The green lines enclose the most energetically significant parts of the pore-coherent flow and the red lines the surviving turbulence belonging to the K-H-like scales.

Figure 24: Amplitude modulation in action. The top plot shows large-scale streamwise velocity fluctuations at about 5 wall units above the surface and the long-wavelength envelope of small-scale wall-normal velocity fluctuations at the surface of case 2; the bottom plot shows the same, only with the envelope now phase-shifted by π.

Figure 25: Lack of amplitude modulation. The plots are similar to those in figure 24, but with the envelope of the spanwise velocity instead of the wall-normal velocity.

Figure 26: Probability density histograms of the AM correlation coefficient and the instantaneous phase difference: (a, b) between the streamwise and wall-normal velocities; (c, d) between the streamwise and spanwise velocities.

Figure 27: Degree of amplitude modulation for the different substrates, from the surface-adjacent region down to the impenetrable floor of the substrates. (a) Modulation for the wall-normal velocity, and (b) modulation for the spanwise velocity. Light to darker shades correspond to depths of approximately 5, 50 and 110 wall units, respectively.

Figure 28: Conceptual schematic showing the evolution of turbulence over porous substrates and the resulting flow phenomena.

Table 1: DNS cases of open-channel turbulent flow over porous substrates. The porosity and the pore spacings are given for each substrate. HP designates higher-permeability substrates (√K⁺ > 2), LP designates lower-permeability substrates (√K⁺ < 1), and a moderate-permeability case resides between the other two groups (1 < √K⁺ < 2). Representative elements of the substrates in table 1 are shown in a companion figure; cases 2′ and 3′ (not shown) are rotated versions of 2 and 3 about the wall-normal axis by π/2.
Cases 2′ and 3′, where the wall-parallel spacings of 2 and 3 have been swapped, retain the wall-normal permeability of 2 and 3 but have increased streamwise permeability and hence anisotropy. The quantities √K_x⁺, √K_y⁺ and √K_z⁺ are the effective permeabilities, which are analogous to the permeability Reynolds number. The ratio of streamwise to wall-normal permeability is denoted Φ. The rod or filament thickness of the solid matrix is 0.039 channel heights, or 14 wall units, for all cases. Labels, colors and symbols remain consistent throughout the manuscript.

Table 2: The von Kármán constant, κ, and log-law intercept, B, resulting from fitting the law of the wall (3.6) to the mean velocity profiles of the different cases in table 1, along with their respective values of the velocity shift ΔU⁺. The last two columns report the root-mean-square errors and coefficients of determination, respectively.

• • • •, grid resolution of 5 points per rod thickness. Table A.1: Darcy permeability estimates for the substrates of table 1 using Stokes-flow simulations of REVs (such as those shown in the accompanying figure).
Auto-detection of the coronavirus disease by using deep convolutional neural networks and X-ray photographs

The most widely used method for detecting Coronavirus Disease 2019 (COVID-19) is real-time polymerase chain reaction. However, this method has several drawbacks, including high cost, lengthy turnaround time for results, and the potential for false-negative results due to limited sensitivity. To address these issues, additional technologies such as computed tomography (CT) or X-rays have been employed for diagnosing the disease. Chest X-rays are more commonly used than CT scans due to the widespread availability of X-ray machines, lower ionizing radiation, and lower cost of equipment. COVID-19 presents certain radiological biomarkers that can be observed through chest X-rays, making it necessary for radiologists to manually search for these biomarkers. However, this process is time-consuming and prone to errors. Therefore, there is a critical need to develop an automated system for evaluating chest X-rays. Deep learning techniques can be employed to expedite this process. In this study, a deep learning-based method called Custom Convolutional Neural Network (Custom-CNN) is proposed for identifying COVID-19 infection in chest X-rays. The Custom-CNN model consists of eight weighted layers and utilizes strategies like dropout and batch normalization to enhance performance and reduce overfitting. The proposed approach achieved a classification accuracy of 98.19% and aims to accurately classify COVID-19, normal, and pneumonia samples.

Coronaviruses circulate in animals and can be transmitted to humans, leading to zoonotic diseases. Middle East respiratory syndrome coronavirus (MERS-CoV) and severe acute respiratory syndrome coronavirus (SARS-CoV) are two examples of coronaviruses causing severe respiratory diseases in humans 7. As of April 24, 2023, the global tally of COVID-19 cases stood at 686,553,714, with 6,860,023 reported fatalities and 659,100,556 recoveries. Currently, there are 20,593,135 active cases, with 99.8% exhibiting mild symptoms and 0.2% classified as severe or critical 8.

COVID-19 is a recent respiratory illness caused by the coronavirus that can significantly impact individuals unexpectedly. Common symptoms of the disease include fever, cough, difficulty in breathing, and sore throat 9,10. Some patients may also experience symptoms such as nasal blockage, body aches, fatigue, and loss of taste 11. The incubation period, or the time between infection and the onset of the earliest symptoms, is typically around 14 days 12.
Real-time reverse transcription-polymerase chain reaction (RT-PCR) testing is the most widely used strategy for identifying and diagnosing COVID-19. It is considered the primary method for detecting the coronavirus infection 13. In addition to RT-PCR, computed tomography (CT) scans and chest X-rays play a crucial role in the timely detection and management of contagious infections 14. When an RT-PCR test yields a negative result, patients may undergo additional verification through radiological imaging to confirm or rule out the presence of the virus. This is necessary because RT-PCR testing has a relatively low sensitivity, ranging between 60 and 70% 15,16. CT scans serve as an important screening tool alongside RT-PCR for identifying COVID-19, particularly in the early phase of the disease (around 0-2 days) when CT findings are more reliable than RT-PCR results 17,18. Studies have shown that CT scans of patients who have recovered from COVID-19 pneumonia can reveal significant lung disease around 10 days after the onset of symptoms 19.

COVID-19 presents certain radiological signatures that can be observed in chest X-rays, making it crucial for radiologists to carefully examine these images. However, the process of manual chest X-ray analysis can be time-consuming and may not always be accurate. Therefore, there is a need for automated methods to analyze chest X-rays 12. The goal of the present study is to develop a computerized approach based on deep learning techniques for detecting COVID-19 cases using X-ray images 20.

In recent years, machine learning (ML) has gained popularity in the field of medicine and has become a complementary tool for doctors 21. Deep learning, a subfield of artificial intelligence (AI), is particularly well-suited for creating end-to-end models that can produce accurate results from input data without the need for manual feature extraction 22,23. Deep learning techniques have been successfully applied to various medical tasks, such as identifying arrhythmia, classifying skin cancer, and diagnosing pneumonia using chest X-ray images 24-26. While radiologists play a crucial role in medical diagnosis, AI technology can assist them in making accurate and efficient diagnoses 27. Additionally, AI approaches can help address challenges related to the scarcity of RT-PCR test kits, testing costs, and result turnaround time 28-30.

The COVID-19 pandemic initially presented challenges due to the ambiguity surrounding its diagnosis, mode of infection, and appropriate treatment. Given the large number of infections, it became necessary to leverage modern technology, such as artificial intelligence, to quickly identify the disease using chest X-rays. Timely diagnosis is crucial as any delay could result in patient fatalities.
The proposed approach in this study involves the development of a deep learning-based algorithm called a Custom Convolutional Neural Network (Custom-CNN), specifically designed for diagnosing COVID-19. Swift detection is essential due to the potential severity of COVID-19 if diagnosed late. Preprocessing of raw images plays a vital role in deep learning, and in this model all X-ray images are resized to a standardized size of 224 × 224 pixels. The Custom-CNN model is constructed using network blocks and consists of eight weighted layers. Techniques like dropout and batch normalization are employed to enhance the algorithm's performance and reduce overfitting. The proposed model effectively addresses challenges such as vanishing and exploding gradients during the learning process. Stochastic gradient descent is utilized to train the model, with a batch size of 32 and a total of 30 training epochs.

The main contributions of this study are as follows: 1. We introduced a novel CNN model, Custom-CNN, for COVID-19 detection using chest X-ray images. To optimize the proposed network, several tests were conducted on various network hyperparameters, including split ratio, batch size, learning rate, and optimizer, which can impact the performance of the network. 2. A comparative study was performed using two public datasets to evaluate the proposed model against several state-of-the-art models, such as VGG16, VGG19, and others. The results demonstrated the superiority of the proposed algorithm over the other algorithms.

The remainder of the paper is organized as follows. Related studies are reviewed in section "Related works". The datasets used and the proposed deep-learning approach are described in the subsequent sections. The experimental design, the collected results, and the discussion are presented in section "Findings and interpretation". Section "Conclusion" concludes the article and provides directions for future work.

Related works

Given the rapid spread of COVID-19 and its significant impact on public health and the global economy, there is a pressing need to develop effective tools for assessing the presence of the disease. Recently, artificial intelligence (AI) techniques in conjunction with radiological technologies have been adopted to automatically diagnose COVID-19 in affected individuals.

Deep learning techniques have been particularly useful in analyzing chest X-rays quickly, as X-rays offer advantages such as low ionizing radiation exposure and portability compared to chest CT scans 31,32. Ozturk et al. 33 proposed a deep learning model with an end-to-end architecture that directly utilizes raw chest X-ray data for COVID-19 diagnosis, eliminating the need for manual feature extraction. This model was trained using a dataset of 125 chest X-ray images, highlighting the need for more precise diagnostic techniques. One challenge in interpreting chest radiographs is the early detection of COVID-19 infection, as ground glass opacity (GGO), a common finding in COVID-19 cases, may have low sensitivity. However, well-trained deep learning models can focus on details that may be imperceptible to the human eye, potentially addressing this limitation.
Hemdan et al. 34 introduced the COVIDX-Net framework for automatic COVID-19 detection from chest X-ray images, while the model of Ozturk et al. 33 achieved high accuracy (98.08%) for both multi-class classification (COVID vs. No-Findings vs. Pneumonia) and binary classification (COVID vs. No-Findings) using raw chest X-ray images. The latter study used the YOLO real-time object detection system, employing the DarkNet model with 17 convolutional layers, each having a separate filter 33.

Narayan Das et al. 37 utilized chest X-ray images to develop a new deep-transfer-learning-based technique for automatic detection of coronavirus disease. They suggested that these techniques can be used to leverage the strengths of networks trained on large datasets and modify the parameters of already trained networks on small datasets. However, there are limitations on how these techniques can be applied to X-rays.

Apostolopoulos and Mpesiana 38 also employed transfer learning with pre-trained CNNs for this task. In a similar context, Nishio et al. 39 employed transfer learning with CNN models pre-trained on large datasets to enhance the reliability and robustness of models trained on smaller datasets. The models they used included ResNet-50, VGG16, MobileNet, EfficientNet, and DenseNet-121. They specifically utilized the VGG16 model as the deep learning model for their proposed approach. Various data augmentation techniques, such as shifting, flipping, mixing up, rotating, random image cropping, and patching, were employed to compensate for the limited amount of data available and improve the model's performance. The method achieved a sensitivity of 90% for COVID-19 pneumonia and an accuracy of 83.6% when compared to non-COVID-19 pneumonia cases and healthy individuals.

Li and Zhu 40 developed the COVID-Xpert technology, which leveraged chest X-ray radiography imaging properties from a larger dataset of pneumonia and normal cases, refined with a small number of COVID-19 patients, to identify coronavirus cases using CNN models. They utilized the DenseNet-121 deep neural network architecture for pre-training their models, addressing the lack of COVID-19 cases and improving the model's effectiveness. Instead of using a more general dataset like ImageNet, they trained the DenseNet-121 model on closely related datasets, specifically chest X-ray photographs with 108,948 samples. They tested the proposed model using 555 chest X-ray images categorized into three classes: 185 normal, 185 pneumonia, and 185 COVID-19 images. Their classification accuracy of 88.9% was accompanied by an area under the ROC curve of 0.973.

Oh et al. 41 tackled the issue of the absence of specialized COVID-19 chest X-ray images by developing a patch-based CNN approach for coronavirus assessment with a manageable number of trainable parameters. Their suggested model included a pre-processing step to normalize data heterogeneities and bias, a segmentation network to extract the lung region, and a classification network for patch-by-patch training and inference. The model achieved sensitivities of 90%, 93%, and 100% for normal, pneumonia, and COVID-19 images, respectively, with corresponding precision values of 95.7%, 90.3%, and 76.9%.

To address the similarities between pneumonia and COVID-19 findings in chest X-rays, Khuzani et al. 43 employed a dimensionality reduction method with a neural network classifier for chest X-ray (CXR) images. The Kernel-Principal Component Analysis (PCA) technique was used to decrease the dimension of the feature space, and a total of 420 images (120 normal, 120 coronavirus, and 120 non-coronavirus pneumonia images) were collected to create the classifier.
Gour 44 utilized X-ray and CT images to develop an automated COVID-19 detection system using layered ensemble convolutional neural networks. Multiple layered convolutional neural network sub-models were employed to diagnose COVID-19 based on these images. A softmax classifier was used to stack the sub-models from the Xception and VGG19 models. To demonstrate the discriminating power of the stacked CNN model, 4645 CT scans from 65 patients were collected. Out of these, 2249 images were found to have COVID-19, while 2396 were assessed as being in excellent health. The stacked CNN model achieved a true positive rate of 97.62% for multi-class classification.

For the categorization of X-ray images in diagnosing COVID-19, Karac 45 utilized pre-trained VGGCOV19-NET, VGG19, and deep CNN models, together with a cascade model based on the YOLOv3 detection technique. The accuracy of the models was evaluated using metrics such as the confusion matrix, ROC, precision, specificity, and F1-score, along with a fivefold cross-validation technique. The Cascade VGGCOV19-NET model achieved the better overall classification performance. Researchers in 56 used deep learning algorithms, VGG16 and ResNet50, to extract features from chest X-ray images and classify them into viral pneumonia, normal, and COVID-19 categories. The models achieved average accuracies of 89.34% (VGG16) and 91.39% (ResNet50) for COVID-19 detection. Larger datasets are beneficial for improving accuracy when using deep learning. The recommended system involves dataset creation, preprocessing, CNN implementation, output classification, loss calculation, parameter adjustment, and repetition for all datasets and epochs. VGG16 and ResNet50 models were effective for COVID-19 classification, with ResNet50 performing better.

Several machine learning (ML) models have been trained and used in the literature for COVID-19 detection. Transfer learning has been employed using various pre-trained models, including COVIDX-Net, ResNet-50, MobileNetv2, DarkNet, Inception, Xception, Inception ResNet v2, VGG16, MobileNet, DenseNet-121, Cascade VGGCOV19-NET, EfficientNet, DeTraC, NASNet, and CycleGAN. These pre-trained models have demonstrated accuracy levels ranging from 79 to 93%. Additionally, some authors have developed their own models, such as the 2dCNN-BiCuDNNLSTM and BiCuDNNLSTM models, which have shown higher performance results, reaching an accuracy of 96.71%. It is worth noting that the accuracy of the models tends to decrease when applied to a larger set of X-ray images, compared to achieving high accuracy with a small number of photos. Binary classifiers that performed exceptionally well and achieved accuracy levels surpassing 99% in many earlier works showed lower overall accuracy when classifying three groups (coronavirus, healthy, and pneumonia patients).

Dataset characterization

Two chest X-ray datasets were downloaded from free resources such as Kaggle to test and train the intended model. It is crucial to properly validate the performance of the suggested models using samples from the same category under assessment. The first dataset, referred to as dataset_1, is presented in Fig. 1 and consists of three categories: normal, coronavirus-positive, and viral pneumonia. The distribution of each class is illustrated in Fig. 2.
Dataset_1 was developed by a group of scholars from Malaysia, Bangladesh, Pakistan, and Qatar and obtained from Kaggle 57. It includes a total of 15,153 chest X-ray images, with 3,616 coronavirus-positive images, 1,345 viral pneumonia images, and 10,192 normal images. Figure 4 shows an example of dataset_1, depicting the three classifications: COVID-19, non-COVID-19 (normal), and viral pneumonia 58.

The second dataset, labeled dataset_2, is represented in Fig. 3 and consists of two primary classes: normal and coronavirus-positive. Dataset_2 includes a total of 340 chest X-rays, evenly distributed between normal and coronavirus images. This dataset can be found on GitHub 59, and each class contains 170 images after equal balancing. For training the suggested Custom-CNN model, 80% of the total chest X-ray images were used, while 20% were reserved for testing. Table 1 provides a detailed description of the normal (non-coronavirus), coronavirus, and viral pneumonia categories, along with the percentages of the dataset division.

Pre-processing

Preprocessing is a crucial stage in deep learning techniques. It is an essential requirement for developing a model that yields good performance in the convolutional neural network system used for COVID-19 detection. The input images have varying sizes in terms of width and length, necessitating resizing of the input images. In this study, the two datasets consist of images with different dimensions (width × length). Therefore, the images were resized to the same dimensions for both datasets (224 × 224 pixels). A classification task involving two and three categories was conducted and evaluated in this research study (Fig. 4).

Convolutional neural network (Custom-CNN)

To handle complex real-world scenarios while maintaining sufficient accuracy, numerous modifications have been made to CNN structures 61. This section, which examines the structure of the proposed solution, presents the main argument of this research report. The CNN architecture of the proposed solution stands out due to the combination of methods used to construct this multi-level complex network. The development of the network and the arrangement of its building elements, including pooling, convolution, flattening, and fully connected layers, are collectively referred to as the "mix" in this context. In order for this algorithm to identify whether X-ray images of the patients under investigation depict health or disease, it requires access to the underlying features hidden within the X-ray images.

As shown in Fig. 5, our suggested Custom-CNN model comprises eight weighted layers, with the first three being convolutional and the remaining five being fully connected. The initial convolutional layer filters the input image, which is 224 × 224 pixels, using 32 kernels of size 3 × 3, with a stride of one pixel and "valid" padding. Each convolutional layer in the sequence is followed by a max-pooling layer of size 2 × 2.
However, the input size to the second and third convolutional layers differs from that of the first layer. The second and third convolutional layers each utilize 64 kernels of size 3 × 3, with a stride of one pixel and "valid" padding. Consequently, the input size for the second layer changes to 111 × 111 × 64, and for the third layer to 36 × 36 × 64. All three layers apply the ReLU activation function to introduce nonlinearity to their outputs. The output of the third convolutional layer, with a size of 17 × 17 × 64, is flattened into a one-dimensional array of size 1 × 18,496.

The remaining levels of the Custom-CNN model consist of fully connected layers. The first fully connected layer has 512 neurons, the second has 256 neurons, the third has 128 neurons, and the fourth has 64 neurons. The ReLU activation function is utilized in these fully connected layers to nonlinearize their outputs. The output is then fed into a three-way softmax function, which generates probabilities for the three class labels.

Because the proposed algorithm has approximately ten million trainable parameters, the issue of overfitting arises, where the model performs better on the training data than on the test data. To address this problem, various well-known strategies were employed, including data augmentation, ℓ1 and ℓ2 regularization, batch normalization, early stopping, and dropout. Among these strategies, dropout and batch normalization proved effective in improving the algorithm's performance and reducing overfitting. However, data augmentation, ℓ1 and ℓ2 regularization, and early stopping had limited impact in the conducted studies.

Dropout is a technique where each neuron has a probability of being temporarily "dropped out" during training, excluding the input and output neurons. This means that the neuron's contribution is temporarily ignored during training but can be effective in subsequent steps. In this study, the initial dropout was set to a probability of 0.25 after the first fully connected layer, followed by subsequent dropouts with probabilities of 0.4, 0.3, and 0.5 after the second, third, and fourth fully connected layers, respectively.

Batch normalization is a method used to normalize input values, or to bring numerical data to the same scale without altering its structure. It greatly reduces the number of training epochs required for training deep networks and stabilizes the learning process. In the proposed network, batch normalization was applied to the inputs of the second convolutional layer, the second fully connected layer, and the fourth fully connected layer.

It is worth noting that the learning process of the suggested network mitigated the effects of well-known issues such as vanishing gradients and exploding gradients. Exploding gradients can cause exponential growth, resulting in significant weight updates in multiple layers and causing the algorithm to diverge. Vanishing gradients occur when the algorithm descends to lower layers and the gradients become extremely small. These problems are well recognized, and there are established weight-initialization methods, such as those of Glorot and Bengio and of He et al., which were utilized in all layers of the proposed network. Additionally, the batch size was set to 32 examples, and the model was trained using stochastic gradient descent for a total of 30 epochs. The summarized details of the proposed Custom-CNN model can be found in Table 2.
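For orientation, a structural sketch of the Custom-CNN in Keras is given below. It is not the authors' implementation: the width of the first dense layer (512) is taken to be consistent with the reported number of trainable parameters, a three-channel input is assumed, and the pooling configuration shown does not exactly reproduce the quoted feature-map sizes.

```python
# Minimal Keras sketch approximating the Custom-CNN described above (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(224, 224, 3), n_classes=3):
    model = models.Sequential([
        tf.keras.Input(shape=input_shape),
        # Three convolutional blocks: 32, 64, 64 kernels of size 3x3, 'valid' padding,
        # ReLU activations, each followed by 2x2 max pooling
        layers.Conv2D(32, 3, padding="valid", activation="relu"),
        layers.MaxPooling2D(2),
        layers.BatchNormalization(),                 # before the second convolution
        layers.Conv2D(64, 3, padding="valid", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="valid", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        # Fully connected head with dropout rates 0.25, 0.4, 0.3 and 0.5 as described
        layers.Dense(512, activation="relu"),        # width inferred, not quoted verbatim
        layers.Dropout(0.25),
        layers.BatchNormalization(),                 # before the second dense layer
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.4),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.BatchNormalization(),                 # before the fourth dense layer
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    # The paper reports training with a batch size of 32 for 30 epochs; SGD is described
    # here, while Adam with a learning rate of 0.001 is found best in the tuning results.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_custom_cnn()
model.summary()
# model.fit(x_train, y_train, batch_size=32, epochs=30, validation_split=0.2)
```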
Findings and interpretation

This section demonstrates the efficiency of the suggested Custom-CNN model in classifying COVID-19, pneumonia, and normal chest X-ray images for dataset_1 and dataset_2. Following the training process, the performance parameters based on the confusion matrix, including accuracy, precision, recall/sensitivity, F1-score, and test loss, are reported using the terms true positive (TP), true negative (TN), false positive (FP), and false negative (FN). The expected and actual classifications of the chest X-ray images (i.e., pneumonia, normal, and coronavirus) are presented in Table 3 as a confusion matrix. This provides a detailed representation of the pre-processing and evaluation metrics for the two datasets. Section "Evaluation of the Custom-CNN using dataset_1" discusses dataset_1, while section "Evaluation of the Custom-CNN using dataset_2" focuses on dataset_2.

The effectiveness of the Custom-CNN method can be evaluated using various metrics. In this study, the proposed model was assessed using the following metrics, determined from the confusion matrix: accuracy, precision, recall/sensitivity, F1-score, and test loss.

Accuracy refers to the overall performance measurement, specifically the proportion of correct predictions made:
Accuracy = (TP + TN) / (TP + TN + FP + FN).
Precision refers to the proportion of correctly predicted positive observations out of the total predicted positive observations:
Precision = TP / (TP + FP).
Recall (sensitivity) refers to the proportion of correctly predicted positive observations to all observations in the actual class:
Recall (sensitivity) = TP / (TP + FN).
The F1-score provides a single value that balances both precision and recall:
F1-score = 2 × (Precision × Recall) / (Precision + Recall).

Based on the previously specified criteria, the classification method assesses the effectiveness of the suggested strategy. The results of applying the proposed procedures to dataset_1 and dataset_2 are described in the following subsections.

Evaluation of the Custom-CNN using dataset_1

Based on various hyperparameter adjustments, we investigated the performance of the proposed Custom-CNN model on COVID-19 images. For instance, we examined the model's performance with respect to batch sizes, learning rate, and pre-trained network designs. In the first set of experiments, we evaluated the effectiveness of the suggested model in comparison to pre-trained CNN networks.

Table 4 and Fig. 6 present the results for three split ratios: 80/20, 70/30, and 60/40. It was observed that the 80/20 split ratio consistently yielded better results for accuracy, precision, recall, F1-score, and test loss, with values of 0.9819, 0.9767, 0.9833, 0.9733, and 0.073, respectively, compared to the 70/30 and 60/40 split ratios. The acquired data demonstrated that, based on all the performance indicators, an 80/20 split ratio produced the best outcomes.

The effectiveness of the suggested Custom-CNN model was further examined in a second series of tests, focusing on various batch sizes.
Table 5 and Fig. 7 present the results for three applicable batch sizes: 32, 64, and 128. It was observed that a batch size of 32 consistently yielded better classification results for accuracy, precision, recall, F1-score, and test loss compared to batch sizes of 64 and 128. The classification scores for accuracy, precision, recall, and F1-score were 0.9819, 0.9767, 0.9833, and 0.9733, respectively, for a batch size of 32. The collected data provided evidence that a batch size of 32 produced the best results across all performance indicators.

The effectiveness of the suggested Custom-CNN model was further examined experimentally by considering different learning rate values. Table 6 displays 10 learning rate values (0.001, 0.002, 0.003, 0.004, 0.005, 0.0001, 0.0002, 0.0003, 0.0004, and 0.0005) and reveals that the best classification results of 0.9819, 0.9767, 0.9833, 0.9733, and 0.073 were achieved with a learning rate of 0.001 for accuracy, precision, recall, F1-score, and test loss, respectively, compared to the other learning rate values. These results confirm that a learning rate of 0.001 consistently yields the best performance across all the measured criteria.

The effectiveness of the proposed Custom-CNN model was further investigated experimentally by considering various CNN optimizers. Table 7 and Fig. 8 present the results of experiments conducted using eight CNN optimizers, namely Adam, Nadam, RMSprop, AdaGrad, SGD, Adadelta, Adamax, and Ftrl. The experimental results indicate that the best classification results of 0.9819, 0.9767, 0.9833, 0.9733, and 0.073 were achieved with the Adam optimizer for accuracy, precision, recall, F1-score, and test loss, respectively, surpassing the classification performance obtained by the other optimizers. The obtained data demonstrate that the Adam optimizer consistently delivered the best outcomes across all performance measures.

Also, the proposed model was evaluated using binary classification of COVID-19 X-ray images and a three-class setting consisting of coronavirus, normal, and viral pneumonia patients. The objective of this study was to assess the effectiveness of the Custom-CNN model in examining various relationships, including coronavirus and viral pneumonia, normal and viral pneumonia, and coronavirus and normal, as well as the associations among the three classes of coronavirus, normal, and viral pneumonia. In the experimental setup, a total of 1,345 chest X-ray images of pneumonia patients, 10,192 normal cases, and 3,616 coronavirus-infected chest X-ray images were utilized. The outcomes were evaluated using various performance metrics such as accuracy, precision, recall/sensitivity, F1-score, and test loss, as shown in Table 8. The experimental results demonstrated that the proposed method achieved optimal classification results for the three classes, with accuracy, precision, recall/sensitivity, F1-score, and test loss values of 98.19, 97.67, 98.33, 97.33, and 0.073, respectively. These results indicate that the Custom-CNN effectively handled the datasets, even in the case of imbalanced data, and achieved optimal outcomes for multi-class problems. Specifically, the proposed method exhibited superior classification results for COVID and normal images, with accuracy, precision, recall/sensitivity, F1-score, and test loss values of 98.55, 98.5, 98, 98, and 0.0441, respectively. For COVID and viral pneumonia images, the suggested method yielded accuracy, precision, recall/sensitivity, F1-score, and test loss values of 99.50, 99, 99.5, 99.5, and 0.0306, respectively. Similarly, the proposed method achieved higher classification results for normal and viral pneumonia images, with accuracy, precision,
recall/sensitivity, F1-score, and test loss values of 99.35, 99.5, 97, 98.5, and 0.0562, respectively. In conclusion, the findings of this study demonstrate that the Custom-CNN model accurately and rapidly identifies COVID-19 from chest X-ray images. To mitigate the risk of bias, a large dataset of COVID-19 cases was employed, and extensive preprocessing techniques were applied to ensure appropriate inputs to the CNN architecture. Figure 9 illustrates the training accuracy, validation accuracy, and validation loss for the different classes. In this figure, one may observe certain sudden small changes (peaks) in the validation accuracy and validation loss. Such occurrences are common when there is a mismatch between the distribution or characteristics of the training and validation images. This mismatch is a result of the random selection process used for training and validation.

In this part, after determining the optimal parameter values for the Custom-CNN, the effectiveness of the proposed model in detecting COVID-19 images was compared against two deep learning techniques, namely VGG16 and VGG19. The results demonstrated that the suggested model outperformed the other two approaches. Table 9 presents the outcomes for dataset_1 using the various deep learning algorithms. Both Table 9 and Fig. 10 illustrate that the Custom-CNN achieved the highest classification accuracy of 0.9819, while VGG16 and VGG19 achieved accuracies of only 0.9159 and 0.88, respectively. Accuracy measures the proportion of samples that a model classifies correctly. Additionally, Table 9 reveals that the Custom-CNN achieved the highest precision value of 0.9767, surpassing the precision values of VGG16 (0.9253) and VGG19 (0.9067). The Custom-CNN also attained the highest recall/sensitivity value of 0.9833, while VGG16 and VGG19 achieved recall/sensitivity values of 0.8612 and 0.8367, respectively. The F1-score is a suitable metric when seeking a technique that strikes a balance between precision and recall, and it provides a better measure of misclassified instances than accuracy. According to Table 9, the Custom-CNN obtained the highest F1-score of 0.9733, while VGG16 and VGG19 achieved scores of 0.8929 and 0.86, respectively. This F1-score result indicates that the Custom-CNN model performed better even when dealing with imbalanced class distributions, which is a common characteristic of real-world medical datasets. Furthermore, we conducted a comparison between our proposed model and VGG16 and VGG19 in terms of the required training time. As shown in Table 10, our model required a significantly shorter time of 440 s, compared to 3715 s for VGG16 and 2899 s for VGG19. It is worth noting that our proposed model has a smaller number of parameters, with 9,701,571, in contrast to 14,714,688 for VGG16 and 20,024,384 for VGG19. When considering this ratio, our proposed model exhibits higher efficiency, as it requires less time relative to its size.
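For reference, the confusion-matrix metrics used throughout this section (accuracy, precision, recall/sensitivity and F1-score) can be computed as in the short sketch below. The example matrix values are illustrative only and are not results from the paper.

```python
# Minimal sketch of the confusion-matrix metrics, computed per class and macro-averaged
# for the three-class problem (illustrative values, not the paper's results).
import numpy as np

def metrics_from_confusion(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    accuracy = tp.sum() / cm.sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Illustrative 3x3 confusion matrix (rows/cols: COVID-19, normal, viral pneumonia)
cm = [[710, 8, 5],
      [10, 2020, 8],
      [4, 6, 260]]
acc, prec, rec, f1 = metrics_from_confusion(cm)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} F1={f1:.4f}")
```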
To determine whether the suggested model is superior to the others, we compared our findings with those of other studies in the literature in this section's final paragraph. Table 11 compares the metrics of the suggested approach with results reported in the literature and shows that our results exceed those of the other studies. Some of the compared studies used the same dataset as ours, while others did not. Of course, a strictly fair comparison cannot be achieved with studies that used a dataset different from ours, but it is still a good indicator of the performance level of our proposed algorithm in this research. As observed from the results shown in Table 11, none of them outperformed our proposed algorithm's results, whether the dataset used was the same as ours, as in 29,[51][52][53][54][55][56] (highlighted in bold font in terms of the number of images), or different, as in the referenced studies 33,[38][39][40][41][42][43][44][45][46][47][48]62.

Evaluation of the Custom-CNN using dataset_2

In this section, after determining the optimal parameters for the proposed Custom-CNN model in the previous section, our objective was to further validate the effectiveness of the model by applying it to analyze dataset_2, a new set of images. Figure 11 depicts the progression of the proposed model during the training phase on dataset_2. The outcomes of the suggested model, compared to the latest findings, are presented in Table 12, where the data is divided into training and testing sets with proportions of 80% and 20%, respectively. It should be noted that some of the compared results utilized the same dataset as ours, as mentioned in 59, while others employed different datasets, as referenced in 28,29,[50][51][52][53][54][55][56]62,63. Although a fair comparison cannot be made with those who used different datasets, this comparison serves as a valuable indicator for evaluating the performance of our proposed algorithm in this research. Upon examining the results displayed in Table 12, our proposed model achieved outstanding results with a classification accuracy of 99.8%, precision of 99.9%, recall/sensitivity of 99.7%, F1-score of 99.8%, and a test loss of 0.0710, surpassing other state-of-the-art competitors.

Conclusion

Chest X-rays were utilized in this study to diagnose COVID-19 and detect the presence of the coronavirus, aiming to address the issues related to the accuracy and time requirements of RT-PCR. Due to their lower cost compared to CT scans, chest X-rays were given more consideration in this study. Additionally, CT scans involve a higher level of ionizing radiation than X-rays. The proposed Custom-CNN model, which features an end-to-end structure and full automation, eliminates the need for manual feature extraction. This approach can be particularly beneficial for countries heavily affected by COVID-19, as it addresses the shortage of radiologists. The comprehensive assessment conducted revealed that the analyzed chest X-ray images exhibited distinct patterns and bilateral alterations. However, the manual approach to COVID-19 detection using X-rays is challenging. Therefore, this study employed a deep learning-based methodology to automatically analyze chest X-rays.
The performance of the process was evaluated through a thorough comparative analysis, with accuracy as the primary evaluation criterion.

Table 11. The proposed Custom-CNN model is compared to many state-of-the-art deep learning models constructed using X-ray images to identify COVID-19 using three classes. The bold font highlights the number of images, indicating the usage of the same dataset.

Figure 2. Illustration of the percentage of each class.
Figure 6. Results of a Custom-CNN model with various splitting ratio percentages.
Figure 7. Results of a Custom-CNN model using various batch sizes.
Figure 9. Outcomes of training accuracy and validation accuracy (left), as well as training loss and validation loss (right) on dataset_1.
Figure 11. Outcomes of training accuracy and validation accuracy on dataset_2.

Hemdan et al. introduced COVIDX-Net, an AI model capable of automatically detecting COVID-19 positivity in patients based on chest X-ray images. It achieved a classification accuracy of 91% when tested on a dataset of 75 individuals, with 25 confirmed positive cases and 50 negative cases. Sethy and Behera 35 utilized a pre-trained deep network, while another study employed transfer learning to overcome the lack of images typically required to build a reliable CNN model. Two datasets were used to support those findings. The first dataset consisted of 1,427 X-ray images, including 224 COVID-19 cases, 700 cases of common bacterial pneumonia, and 504 normal cases. The second dataset comprised 1,442 images, with 504 normal cases, 714 cases of viral and bacterial pneumonia, and 224 confirmed COVID-19 cases. Comparative analysis of various CNN models, including Xception, VGG19, Inception, MobileNet v2, and Inception ResNet v2, was carried out; comparing MobileNet v2 and VGG19, the best accuracy was 98.75% for the 2-class classification and 93.48% for the 3-class classification, with sensitivity and specificity values of 92.85% and 98.75%, respectively.

Table 1. Dataset descriptions for the proposed model training and testing (80% and 20%).
Table 2. Summary of the Custom-CNN model.
Table 4. Results of a Custom-CNN model with various splitting ratio percentages. Significant values are in bold.
Table 5. Results of a Custom-CNN model using various batch sizes. Significant values are in bold.
Table 6. Results of the Custom-CNN model with various learning rates. Significant values are in bold.
Table 7. Results of the Custom-CNN model with different optimizers. Significant values are in bold.
Table 8. Results of the Custom-CNN model with different classes.
Table 9. Results of applying the different deep learning models on dataset_1. Significant values are in bold.
Measures for the different deep learning methods on dataset_1 (accuracy, precision, recall/sensitivity, F1-score, and test loss).
Table 10. Differences in deep learning model training times using dataset_1.
7,628.6
2024-01-04T00:00:00.000
[ "Computer Science", "Medicine" ]
Effect of 6 Wt.% Particle (B4C + SiC) Reinforcement on Mechanical Properties of AA6061 Aluminum Hybrid MMC

Aluminum-based hybrid metal matrix composites with more than two particle reinforcements are very popular for heavy-duty applications, and the proportion of these particle reinforcements can be controlled to achieve the desired mechanical properties (strength and wear resistance). AA 6061 alloys, popularly used in aircraft and automobile applications, tend to have inferior tribological properties, and therefore particle reinforcements are added to strengthen the matrix. The prime objective of this investigation is to study the effect of varying the wt.% of the individual reinforcements (SiC and B4C) on the mechanical properties of a particular composition (6 wt.%) of AA 6061 hybrid composite. The present investigation evaluates the dependence of the strength and elongation behaviour of the hybrid composite on the hard particle reinforcements. Hardness measurement and uniaxial loading techniques were used to characterize the mechanical properties of the as-cast hybrid composites, whereas OM, XRD and SEM analyses were done to study the distribution of reinforcement within the base (AA 6061) metal matrix phase. The improvements in mechanical properties, such as Vickers hardness, UTS, yield strength and elongation, are presented and explained using various hypotheses proposed by previous studies. The role of clustering theory and the effect of the binary eutectic Mg2Si phase were found to be key to the enhanced mechanical properties of the hybrid composites. Addition of an alkaline earth metal (Mg) during the synthesis process has led to an increase in the elongation of the hybrid composite with the increase in wt.% of reinforcement, which is analogous to the effect of alkali metal ('Na' and 'Li') additions that help in refining the Mg2Si eutectic phase.

Introduction

Al based Metal Matrix Composites (MMCs) are extensively used in automotive applications due to their light weight and excellent mechanical properties. Despite having such outstanding properties, continuous efforts are being made to improve their strength and stiffness, and as a result, researchers have tried to add numerous particle reinforcements to the base metal [1]. As far as reinforcements are concerned, a variety of filler materials, ranging from macro- to nano-size particles in both polymer and metal matrix composites, fiber-type filler materials for laminated composites and some cryo-treated particle-hardened filler materials, are commonly practiced for the synthesis of composite materials [1][2][3][4]. Out of these reinforcements, aluminum MMCs with particulate reinforcement showed promising results in the form of improved strength and high stiffness, which are more desirable for the automotive and aircraft industries. Regarding particulate reinforcement, researchers from all over the world have worked with ceramic-based hard particles (SiC, Al2O3, MgO, WC, and B4C) to strengthen Al based composites [5]. With developments in production technology, a new trend was adopted of preparing composites with two or more reinforcements to impart high specific strength, high toughness and better ductility compared to conventional single-reinforcement composites [6][7][8]. As far as the use of SiC as a reinforcement for the base material (AA 6061) is concerned, it substantially increased both the mechanical and tribological properties of the composite due to its high hardness [9][10][11].
In addition to the conventional liquid metallurgy route, researchers have also tried the powder metallurgy route to produce in-situ hybrid composites of AA6061, SiC and graphite particles [12]. A group of researchers showed a remarkable improvement in the tribological properties of AA6061/SiC hybrid composites by adding a fixed proportion of boron carbide (B4C) to the aluminum metal matrix [13]. There are instances where researchers have reported an increase in the hardness and wear properties of hybrid composites with increasing SiC particle content [14,15]. The role of boron carbide (B4C) is found to be similar to that of silicon carbide (SiC) particles, and it has also improved the tribological properties significantly [16]. The improvement in tribological properties due to boron carbide (B4C) addition is attributed to its interfacial bonding with the Al matrix in comparison with SiC and Al2O3 particles [17]. Silicon carbide-based hybrid composites have also been studied with other Al alloys such as A356, and they also show promising results [18,19]. A comparison of the individual properties of AA6061 aluminum alloy, B4C and SiC is given in Table 1; the density of the Al base alloy is almost equal to that of boron carbide, whereas SiC is relatively dense. The hardness of boron carbide and silicon carbide is much higher than that of the Al base alloy, and the presence of B4C (the hardest among all) may affect the hardness of the Al hybrid metal matrix composite (AHMMC). Many compositions of AA 6061 based hybrid composites with varying proportions of SiC/Al2O3/B4C/Gr have been tried by researchers to obtain enhanced properties; however, very little effort has been made to study compositions that maintain a fixed proportion of total reinforcement with the base metal [41][42][43][44]. In this investigation, efforts were made to study a unique set of compositions (AA6061/SiC/B4C) in which the weight fractions of SiC and B4C are varied in such a pattern that the total reinforcement is restricted to 6 wt.% only. The effect of increasing B4C addition was studied with decreasing SiC addition, maintaining a fixed total reinforcement with the base metal. The as-cast hybrid composites were characterized, and their strength, hardness and elongation were compared with the base AA6061 alloy.

Materials and Method

In this present investigation, Al 6061 rectangular blocks were cut out of the as-cast ingots for the preparation of the hybrid aluminum composite using the stir casting technique. The composition of the as-received Al alloy was confirmed by Optical Emission Spectroscopy (OES), and the elemental composition of the alloy is given in Table 2. The emission spectroscopy technique used for elemental composition analysis of bulk samples is very reliable and economical compared to other conventional spectroscopy techniques [22]. The synthetic ceramic particles (SiC and B4C) of size 15-60 μm used for the preparation of the hybrid composite were procured from Alfa Aesar.

Preparation of AA6061 Hybrid Composite

The four samples (S1, S2, S3 and S4), including the as-cast AA6061 base alloy, with varying compositions of SiC (2 wt.%, 3 wt.%, 4 wt.%) and B4C (4 wt.%, 3 wt.%, 2 wt.%), were prepared using the liquid metallurgy technique (stir casting) (Table 3). The compositions were chosen based on a study conducted by Halili et al., 2019, where the total reinforcement was fixed (12 vol.%) by rationally adjusting the individual particle reinforcements [45].
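As a quick illustration of how batch charges for such fixed-total-reinforcement compositions can be worked out, the sketch below computes the powder and matrix masses for an assumed 500 g melt; the batch mass and the helper name are illustrative only and are not values taken from the study.

```python
# Illustrative calculation of powder charges for a fixed 6 wt.% (SiC + B4C)
# hybrid composite. The 500 g batch mass is an assumed example value.

def powder_charge(total_mass_g, sic_wt_pct, b4c_wt_pct):
    """Return masses (g) of AA6061 matrix, SiC and B4C for the requested wt.% split."""
    sic = total_mass_g * sic_wt_pct / 100.0
    b4c = total_mass_g * b4c_wt_pct / 100.0
    matrix = total_mass_g - sic - b4c
    return matrix, sic, b4c

# The three reinforced splits described in the text (total reinforcement = 6 wt.%).
for sic_pct, b4c_pct in [(4, 2), (3, 3), (2, 4)]:
    al, sic, b4c = powder_charge(500.0, sic_pct, b4c_pct)
    print(f"{sic_pct}% SiC / {b4c_pct}% B4C: "
          f"AA6061 = {al:.1f} g, SiC = {sic:.1f} g, B4C = {b4c:.1f} g")
```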
The stir casting method is considered the most economical route by which homogeneous mixing of the reinforcement in the metal matrix can be ensured [23,24]. The entire experimental setup for the synthesis is shown in Fig. 1. The stir casting parameters were chosen based on standard processing parameters practiced in the literature (Bhandare et al., 2013) and by conducting a few trials to obtain composites with less porosity [46]. Prior to casting, the rectangular aluminum blocks were cut into smaller pieces to be accommodated in the graphite crucible and melted in an electric arc furnace at a temperature of 750°C to ensure complete liquefaction of the aluminum. The ceramic particle reinforcements (SiC and B4C) were pre-heated in an oven (at 250°C) to remove the moisture content present in the particles. The pre-heated SiC and B4C particles were added to the molten metal after the complete liquefaction of the AA6061 alloy present in the graphite crucible, and stirring was done in the range of 400-500 rpm for 4-6 min to produce a homogeneous mixture of composite material [25,26]. To improve the wettability of the ceramic particle reinforcement and its miscibility with the molten metal, a thickening agent (0.5 wt.% of Mg) was added at the slurry stage. A few degassing tablets (C2Cl6: solid hexachloroethane) weighing ~3 g were added to the vortex of the whirling molten pool during stirring to reduce porosity in the hybrid composite. After the completion of stirring, the hot molten metal mixture (~700°C) was poured into the pre-heated metal mould cavity (150 mm × 15 mm × 15 mm).

The cast samples of the different compositions were cut into small pieces (15 mm × 20 mm × 10 mm) and cold mounted for microstructural analysis. The mounted samples were polished to a mirror surface finish using emery sheets (400, 600, 800 and 1000 grit) followed by alumina polishing. The polished samples were etched with Keller's reagent, and micrographs were taken using a LECO Olympus BX53M microscope [27]. The micrographs of all the samples with varying composition were studied for phase analysis and particle distribution. In the as-cast base AA6061 alloy (S1), a few Mg2Si phases (shown in Fig. 2a) were detected in the matrix, whereas the samples (S2, S3 and S4) tend to have a uniform distribution (shown in Fig. 2b, c and d) throughout the matrix. The microstructure analysis reveals that there is no agglomeration of SiC and B4C particles, and the reinforcements were evenly distributed throughout the hybrid composite matrix.

SEM and XRD Study of Hybrid Composite

Prior to electron microscopy (SEM) and X-ray diffraction analysis, the samples were polished to a mirror-finish surface using emery sheets. High-magnification SE images of the polished hybrid composite samples were taken using a Jeol J-6000 Plus Scanning Electron Microscope (SEM) to study the ceramic particle (SiC and B4C) reinforcement. The formation of the Mg2Si eutectic phase can be confirmed from both the optical and SEM images shown in Figs. 3a and 4a, respectively. To support this claim, additional experiments such as XRD analysis were done on the polished samples using a PANalytical X'Pert system (2θ = 20°-120°; scan rate = 2° per min). The phases appearing in the hybrid composite are shown in the XRD pattern (Fig. 3c), marked with symbols ('+': Mg2Si, '#': SiC, '*': B4C). The most intense peaks of the Al base matrix phase are indexed as (1 1 1), (2 0 0), (2 2 0) and (3 1 1).
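For orientation, the indexed Al matrix reflections can be cross-checked against the Bragg angles expected for FCC aluminum. The sketch below uses the textbook Cu K-alpha wavelength and Al lattice parameter; both are standard literature values assumed here, since the radiation used in the study is not stated in this excerpt.

```python
import math

# Expected 2-theta positions of the indexed Al reflections, assuming
# Cu K-alpha radiation (lambda ~ 1.5406 Å) and a_Al ~ 4.0495 Å.
# Both values are standard handbook numbers, not taken from this study.
WAVELENGTH = 1.5406   # Å
A_AL = 4.0495         # Å, FCC aluminum lattice parameter

for h, k, l in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    d = A_AL / math.sqrt(h**2 + k**2 + l**2)                       # interplanar spacing
    two_theta = 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))  # Bragg's law
    print(f"({h} {k} {l}): d = {d:.3f} Å, 2-theta ~ {two_theta:.1f} deg")
```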
In addition to the particle reinforcement, some other features such as micro-pores were also seen in the matrix phase of all the SEM images shown in Fig. 4. It is evident from these images that the B4C particles have an irregular shape with an average diameter close to 100 μm, and similar features were also observed for the SiC particles, whose size is relatively smaller than that of the boron carbide particles. The size of the boron carbide particles appears uniform in most of the composite, and no sign of clustering/agglomeration is observed at the micron level. There is clear evidence of the formation of micron-level pores, ranging from 1 to 10 μm, throughout the aluminum matrix.

Indentation Test Results of AA6061 Hybrid Composites

The bulk hardness of the mounted samples (S1-S4) was experimentally measured on the Brinell hardness scale. The test results (shown in Fig. 5a) reveal a proportional improvement of hardness with the increase in B4C wt.% in the hybrid composites. Conventionally, the effect of boron carbide (B4C wt.%) was found to be predominant in the increase in hardness of AA6061/B4C and AA6082/B4C composites (shown in Fig. 6b), whereas there is a dearth of literature that can justify an increase in hardness with increasing SiC wt.% for AA6061 hybrid composites produced via the liquid metallurgy route. However, a study on an AA6061/SiC composite has justified the increase in hardness (HV) and compressive strength (shown in Fig. 6d) with increasing SiC wt.% [28]. In the present investigation, the SiC wt.% for the samples (S2-S4) was progressively replaced with B4C wt.% to maintain a fixed total reinforcement, and the addition of the hard B4C particles has helped compensate for the effect of SiC that is responsible for the increase in hardness of the majority of hybrid composites (AA6061/SiC). Except for one recent study (shown in Fig. 6b) by a group of researchers led by Hynes et al., 2020 [29], almost all reported work showed an increase in hardness with B4C addition.

Uniaxial Tensile Test Results of AA6061 Hybrid Composites

The tensile tests were conducted on cylindrical as-cast AA6061 hybrid composite specimens based on ASTM E8M specifications using an INSTRON 8801 servo-hydraulic tensile tester [32]. Prior to the uniaxial loading, the gauge section was polished using fine-grade emery sheet to eliminate any pre-existing cracks from machining [33]. The UTS and yield strength of all the samples (S1-S4) are shown in Fig. 5b, and the strengths of the reinforced composites (S2 and S3) are higher than that of the base alloy. But the composition (S4) with 2 wt.% SiC and 4 wt.% B4C has shown a reduction in both yield and tensile strength. However, the elongation (Fig. 5c) of the hybrid composites (S2-S4) showed continuous improvement compared to the base alloy (AA6061). The ultimate tensile strength of AA6061/B4C composites generally increases with the increase in both the B4C wt.% (shown in Fig. 6a) [10,21,30,31,34] and the B4C vol.% (shown in Fig. 6c) [21,35,36]. But in the case of Hynes et al., 2020 [29], the strength keeps decreasing with the increase in B4C wt.%. Such an exceptional reduction in strength might be due to two reasons: (i) improper mixing of the reinforcement particles/agglomeration of particles during mixing; (ii) the tensile sample preparation (some pre-existing cracks during machining of the gauge section). The work carried out by Sharma et al., 2019 [21] showed that the strength increases with B4C addition and then decreases.
The present set of results related to strength is analogous to the results produced by Sharma et al., 2019 [21]. It can be noted that the 6 wt.% reinforcement, which contains both B4C and SiC particles ranging from only 2-4 wt.% each in the Al hybrid composite, is able to achieve strength in the range of 250-270 MPa, whereas previous work on either SiC or B4C alone achieved strengths of more than 220 MPa only with B4C contents above 7 wt.% [21,34]. The present investigation has created a scope for studying how to achieve the best mechanical properties with an optimized particle reinforcement of the Al base metal, because it is difficult to avoid the deleterious effect of excessive particle reinforcement on the strength of hybrid composites. This can be explained through the "theory of clustering", which Hong et al., 2003 used by comparing the theoretically calculated strength (given by Eq. 1) with the experimentally investigated strength [37]. The experimental strength value drops after a saturation level of reinforcement is reached, whereas the calculation shows an increasing trend. Therefore, the variation in strength is attributed to the formation of clusters, and the modified theoretical strength is given by Eq. 2.

Effect of Mg Addition on Elongation of AA6061/B4C/SiC Hybrid Composite

When the UTS and elongation of AA 6061/B4C/SiC in the present investigation were compared with the study done by Poovazhagan et al., 2013 [38], very interesting facts were revealed. As far as compositions are concerned, the net reinforcement of their hybrid composite samples (C1-C4) does not differ much from the compositions (S1-S4) of the composite with 6 wt.% (SiC + B4C). Increasing the SiC vol.% while keeping the B4C vol.% constant in Poovazhagan et al., 2013 showed a continuous decrease (Fig. 7a) in the elongation (%) of the hybrid composite samples (C1-C4), whereas in the present investigation, increasing the B4C wt.% while proportionately decreasing the SiC wt.% has led to an increase (Fig. 7b) in the elongation (%) of the hybrid composites. This means that the B4C addition certainly has some effect on the improvement in elongation, but there is not enough direct evidence for this; a related explanation can be drawn from a study on Al-Mg2Si composites [39]. In that study, 'Na' addition moved the binary eutectic point towards the Mg2Si-rich direction, which changed the Mg2Si phase distribution (making it more uniform) and its size/morphology. This change increased (Fig. 7c) the UTS and elongation of the composite for a certain range of 'Na' wt.% addition; however, the reason for such an increase is not yet understood. A similar study was conducted by Hadian et al., 2008 on the Al-15 wt.% Mg2Si composite, where 'Li' addition improved the UTS and elongation of the as-cast composite [40]. The hypothesis was that 'Li' might have shifted the eutectic point to the Mg2Si-rich side of the diagram by changing the surface energy of the Mg2Si phase. In the present investigation, 0.5 wt.% 'Mg' was added as a thickening agent during the synthesis of the hybrid composite, and this has led to a uniform and fine distribution of the Mg2Si network throughout the matrix. These changes in the microstructure might have led to an increase in the elongation of the composites even with increased B4C content. To prove this hypothesis, more studies need to be done on such compositions.
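Because the comparison above mixes compositions quoted in wt.% (this study) and vol.% (Poovazhagan et al., 2013), a small conversion helper makes them easier to relate. The handbook densities used below are assumed typical room-temperature values, not measurements from this work.

```python
# Convert reinforcement weight fractions to approximate volume fractions.
# Densities (g/cm^3) are typical handbook values, assumed for illustration:
RHO = {"AA6061": 2.70, "SiC": 3.21, "B4C": 2.52}

def wt_to_vol(wt_pct):
    """wt_pct: dict of phase -> wt.% (should sum to 100). Returns vol.% per phase."""
    vol = {phase: w / RHO[phase] for phase, w in wt_pct.items()}
    total = sum(vol.values())
    return {phase: 100.0 * v / total for phase, v in vol.items()}

# Example: the equal-fraction composition (3 wt.% SiC + 3 wt.% B4C, balance AA6061).
mix = {"AA6061": 94.0, "SiC": 3.0, "B4C": 3.0}
for phase, v in wt_to_vol(mix).items():
    print(f"{phase}: {v:.2f} vol.%")
```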
Conclusions

The mechanical properties of AA6061/SiC/B4C hybrid composites produced by the stir casting method were explicitly studied, and the significant outcomes of the investigation are presented as follows:
(i) The optical microscopy (OM) results reveal the homogeneous distribution of the dual particles (SiC and B4C) within the AA6061 matrix. Besides the OM results, other characterization techniques such as SEM and XRD analysis were conducted on the hybrid composites to ascertain the presence and uniform distribution of the dual particles within the matrix.
(ii) The presence of the ceramic particles (SiC and B4C) was confirmed from the XRD peaks along with the major (indexed) peaks from the base AA6061 alloy. Some additional features (casting defects/micro-pores) were discovered within the as-cast composite from the SEM study.
(iii) As far as mechanical properties are concerned, the hardness (BHN) value of the hybrid composite (AA6061 + 4% B4C + 2% SiC) shows a 60% improvement when compared with the AA6061 base alloy. Such enhancement in hardness is due to the presence of hard B4C particles within the matrix.
(iv) However, a similar improvement in tensile strength (UTS) and yield strength (YS) was not reflected in the case of the composite with 4% B4C and 2% SiC reinforcement. Rather, the composite with an equal fraction of reinforcement (3% B4C and 3% SiC) showed the highest UTS and YS values compared to the other compositions and the base alloy. The reduction in UTS and YS for the composite with 4% B4C might be because of the clustering effect (strength decreases after reaching an optimum reinforcement level within the matrix).
(v) Regarding the elongation results of the as-cast hybrid composites, the composition with 2% SiC and 4% B4C showed the highest elongation compared to the other compositions, including the base alloy. This appears to be a contradictory result; however, such an improvement in elongation might be due to the addition of the alkaline earth metal (0.5 wt.% Mg). The Mg addition has led to the refinement of the Mg2Si phase throughout the matrix, which helped in improving the elongation of the hybrid composite.
4,492.6
2021-06-20T00:00:00.000
[ "Materials Science" ]
Edge-Disjoint Paths in Eulerian Digraphs

Disjoint paths problems are among the most prominent problems in combinatorial optimization. The edge- as well as the vertex-disjoint paths problems are NP-complete on directed and undirected graphs. But on undirected graphs, Robertson and Seymour (Graph Minors XIII) developed an algorithm for the vertex- and the edge-disjoint paths problem that runs in cubic time for every fixed number $p$ of terminal pairs, i.e. they proved that the problem is fixed-parameter tractable on undirected graphs. On directed graphs, Fortune, Hopcroft, and Wyllie proved that both problems are NP-complete already for $p=2$ terminal pairs. In this paper, we study the edge-disjoint paths problem (EDPP) on Eulerian digraphs, a problem that has received significant attention in the literature. Marx (Marx 2004) proved that the Eulerian EDPP is NP-complete even on structurally very simple Eulerian digraphs. On the positive side, polynomial time algorithms are known only for very restricted cases, such as $p\leq 3$ or where the demand graph is a union of two stars (see e.g. Ibaraki, Poljak 1991; Frank 1988; Frank, Ibaraki, Nagamochi 1995). The question for which values of $p$ the edge-disjoint paths problem can be solved in polynomial time on Eulerian digraphs has already been raised by Frank, Ibaraki, and Nagamochi (1995) almost 30 years ago. But despite considerable effort, the complexity of the problem is still wide open and is considered to be the main open problem in this area (see Chapter 4 of Bang-Jensen, Gutin 2018 for a recent survey). In this paper, we solve this long-open problem by showing that the Edge-Disjoint Paths Problem is fixed-parameter tractable on Eulerian digraphs in general (parameterized by the number of terminal pairs). The algorithm itself is reasonably simple but the proof of its correctness requires a deep structural analysis of Eulerian digraphs.

INTRODUCTION

The p-disjoint paths problem, that is, the problem of deciding for a given (directed) graph G and a set S := {s_1, ..., s_p} of sources and T := {t_1, ..., t_p} of targets, whether there is a set of mutually edge- or vertex-disjoint paths (depending on whether we are talking about the Vertex- or the Edge-Disjoint Paths problem) connecting the sources to the targets, is one of the most fundamental problems in the area of graph algorithms. By Menger's theorem, or network flow algorithms, this problem can be solved in polynomial time on undirected and directed graphs if we are only interested in a set of disjoint paths each having one end in S and the other end in T. But the situation changes completely if we require that the paths connect each source s_i to its corresponding target t_i: this problem is NP-complete for edge-disjoint and vertex-disjoint paths, on directed and undirected graphs.

On undirected graphs, Robertson and Seymour developed an algorithm for the p-Vertex-Disjoint-Paths and the p-Edge-Disjoint-Paths problem which runs in time O(n^3) for any fixed number p of terminal pairs [23], where n = |V(G)|. Rephrased in the terminology of parameterized complexity, they showed that the problem is fixed-parameter tractable parameterized by the number of terminal pairs, i.e., it runs in fpt-time, witnessed by a running time of the form f(p) · n^c for some constant c ∈ N and some computable function f : N → N.
The complexity has subsequently been improved to quadratic time in [18]. Robertson and Seymour developed the algorithm as part of their celebrated series of papers on graph minors. While the correctness proof is still long and difficult, relying on large parts of the graph minors series, the algorithm itself is beautifully concise and essentially facilitates a reduction rule that reduces any input instance to an equivalent instance of bounded treewidth, which in turn can be solved using standard dynamic programming techniques.

On directed graphs (henceforth called digraphs) the p-Vertex- and p-Edge-Disjoint-Paths problems are considerably more difficult. As shown by Fortune, Hopcroft, and Wyllie [8], both problems are NP-complete already for p = 2 terminal pairs. This implies that they are not fixed-parameter tractable and not even in the class XP (under the usual complexity theoretical assumptions that we tacitly assume throughout the introduction), i.e., they are widely believed to be unsolvable in polynomial time for any fixed p ≥ 2. Furthermore, Slivkins [27] showed that the problems remain W[1]-hard, and therefore (presumably) not fixed-parameter tractable, already on acyclic digraphs, that is, directed graphs not admitting any cycles. On the positive side, Cygan, Marx, Pilipczuk, and Pilipczuk [6] proved that the p-Vertex-Disjoint Paths problem is fixed-parameter tractable on planar digraphs. Interestingly, Chitnis proved that the edge-disjoint version remains W[1]-hard [4], refuting its fixed-parameter tractability under the aforementioned standard assumptions.

Eulerian Digraphs. A well-studied class of digraphs whose complexity often turns out to be somewhere between undirected and general directed graphs is the class of Eulerian digraphs. A digraph D = (V, E) is Eulerian if the in-degree of each vertex equals its out-degree or, equivalently, if it is the union of a set of edge-disjoint cycles. See [2, Chapter 4] for a recent survey on Eulerian digraphs. It has been observed in [17] that the correctness proof of the algorithm for the p-Edge-Disjoint Paths problem can be simplified for undirected Eulerian graphs. This already suggests that the 'Eulerian' property could make a difference for the p-Edge-Disjoint-Paths problem also on digraphs. Indeed, Johnson [15] pioneered the structural analysis of Eulerian digraphs with emphasis on solving the p-Edge-Disjoint Paths problem in his dissertation. He proved a structure theorem for internally 6-connected Eulerian digraphs (a notion that will not be of further relevance to this exposition) in the same flavour as the undirected structure theorem proved by Robertson and Seymour, following the same line of argumentation as their proof. Unfortunately, his results have never been published.

In the literature on Eulerian digraphs the Edge-Disjoint-Paths problem is often studied in the following formulation.

Definition 1.1 (Edge-Disjoint Paths problem). The Edge-Disjoint Paths problem is the problem to decide, given two digraphs G and H with V(H) ⊆ V(G) and E(G) ∩ E(H) = ∅ as input, whether G contains a set L of pairwise edge-disjoint paths which contains, for each edge (t, s) ∈ E(H), an s-t path P ∈ L that is edge-disjoint from H.

When fixing the number of terminal pairs we are interested in, i.e., for fixed p := |E(H)|, we refer to the problem as the p-Edge-Disjoint Paths problem. An equivalent formulation is to decide, given G and H as above, whether G + H contains a set of pairwise edge-disjoint cycles each containing exactly one edge of H, where G + H := (V(G), E(G) ∪ E(H)).
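As a small, self-contained illustration of the Eulerian condition used throughout (every vertex of G + H has equal in- and out-degree), the sketch below checks it for a toy supply/demand pair; the edge lists and the plain-dictionary representation are assumptions made purely for the example.

```python
# Check whether G + H is Eulerian, i.e. every vertex has equal in- and out-degree.
# Edge lists are toy examples; a demand edge (t, s) closes each routed s->t path into a cycle.
from collections import Counter

def is_eulerian(edges):
    out_deg, in_deg = Counter(), Counter()
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    vertices = set(out_deg) | set(in_deg)
    return all(out_deg[v] == in_deg[v] for v in vertices)

G_edges = [("s1", "a"), ("a", "t1"), ("s2", "a"), ("a", "t2")]  # supply digraph G
H_edges = [("t1", "s1"), ("t2", "s2")]                          # demand digraph H

print(is_eulerian(G_edges + H_edges))  # True: every vertex balances in G + H
```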
We call the supply and the demand digraph.The vertices incident to an edge in are called terminals.It is easily seen that this formulation is (qualitatively) equivalent to the specication of the disjoint paths problem by a single digraph and pairs ( 1 , 1 ), . . ., ( , ) of terminals.One advantage of the presentation with separate demand and supply graphs is that this makes it possible to classify the complexity of the problem relative to the structure of the demand graph. Unfortunately, the Edge-Disjoint-Paths problem remains NPcomplete on Eulerian digraphs.In fact, Marx [22] proved that the problem is already NP-complete if is an acyclic directed grid graph and + is Eulerian. On the positive side, Slivkins [27] proved that the problem is xed parameter tractable in the case that is acyclic and + is Eulerian.Further, Frank [9] showed that the problem can be solved in polynomial time if + is Eulerian and consists of two sets of parallel edges or is the union of two stars.Moreover, he showed that in these cases the directed cut criterion is su cient for the existence of a solution; that is, the problem can be solved by deciding whether there exists a set of vertices ⊂ ( ) such that the number of edges in with a head in and tail in ¯ ≔ ( ) \ is less than the number of edges in having a tail in and head in ¯ . Polynomial time algorithms for a few other special cases (for ≤ 3) if + is Eulerian have been developed in [10,14,28], but in the last nearly 30 years no signi cant progress on determining the complexity of the general -Edge-Disjoint-Paths problem on Eulerian digraphs has been made.However, Johnson [15] proved in his dissertation that given an Eulerian digraph (Euler-)embedded in some surface Σ-we will make this precise shortly-such that there exists a disc Δ ⊂ Σ containing many concentric edge-disjoint cycles of alternating orientation (with respect to the orientation of the disc), then the most deeply nested cycles are irrelevant to the instance.That is, one may delete any such cycle from the graph without altering the outcome of the instance.Unfortunately, since Johnson's work has never been published said result has not been peer-reviewed thus far. As That is, there is a computable function and an algorithm with running time ( ) • O (1) , which, given an -vertex digraph and a 2 -vertex digraph with ( ) ⊂ ( ) with parameter ≔ | ( )| such that + is Eulerian, decides correctly whether or not contains a set of pairwise edge-disjoint paths which contains an −path for each edge ( , ) ∈ ( ). We start with recalling some concepts, notation and results relevant for the exposition and continue with a high level overview of the algorithm and its correctness proof in section 3. 
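To make the supply/demand formulation concrete on a small example, here is a naive brute-force search that tries to route every demand edge as a path in the supply graph with all paths pairwise edge-disjoint. This is only an exponential-time illustration of the problem statement, under the assumed convention that demand edges run from target to source; it is not the fixed-parameter algorithm developed in this paper.

```python
# Naive backtracking search: route every demand (t, s) as an s->t path in G,
# with all routed paths pairwise edge-disjoint. Exponential time; toy sizes only.
def edge_disjoint_paths(g_edges, demands):
    def paths(u, goal, free, used):
        if u == goal:
            yield used
            return
        for e in list(free):
            if e[0] == u:
                yield from paths(e[1], goal, free - {e}, used + [e])

    def solve(i, free):
        if i == len(demands):
            return True
        t, s = demands[i]                      # demand edge oriented target <- source
        for p in paths(s, t, free, []):
            if solve(i + 1, free - set(p)):
                return True
        return False

    return solve(0, frozenset(g_edges))

G = [("s1", "a"), ("a", "t1"), ("s2", "a"), ("a", "t2")]
print(edge_disjoint_paths(G, [("t1", "s1"), ("t2", "s2")]))  # True
```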
PRELIMINARIES AND NOTATION Throughout this exposition we use standard graph theoretic notation as in [2,7] and assume the reader to be familiar with common graph theoretic concepts and notation.For example, given a subset ⊂ ( ) we write [ ] to denote the subgraph induced by .Given a directed graph = ( , ), we call the graph resulting from when forgetting the edge directions its underlying undirected graph.Further we call a graph pseudo-Eulerian (of order ∈ N) if there exists a graph satisfying | ( )| = such that + is Eulerian.Although our main theorem talks about Eulerian digraphs, most of the results solely rely on the fact that is pseudo-Eulerian, a notion turning out to be central to our results (although not apparent from this exposition).Eulerian graphs are commonly known for the existence of Eulerian cycles, where we de ne cycles slightly di erent than the standard literature. Note that, by de nition, vertices may be visited several times in a cycle; what we call a cycle is usually referred to as a closed walk in the standard literature.A cycle is Eulerian if it visits all of the edges in ( ). Further notions of great importance to the paper are induced cuts. Beautifully, the order of induced cuts is always even for Eulerian digraphs, and given any such the number of edges in ( , ¯ ) with a head in equals the number of edges with a tail in , revealing a nice symmetrical property for induced cuts in Eulerian digraphs. We further assume the readers to be familiar with general graph embedding concepts such as planarity.We say that is a plane graph if we assume to be given together with a planar embedding.Throughout this exposition we will frequently talk about digraphs embedded on a xed surface Σ, where surfaces in our setting are compact 2-dimensional manifolds possibly with boundary.In these cases we will often work with a speci c type of graph-embedding which we call Euler-embedding.Let + be Eulerian of degree at most four.Then is called Euler-embedded (in some surface Σ) if the embedding contains no strongly planar vertex.A vertex ∈ ( ) of the embedded digraph is called strongly planar if it is of degree four and we can draw a simple closed curve in the surface Σ around the vertex such that intersects exactly all edges adjacent toexactly once and only these-such that visits rst both in-edges and then both out-edges (up to a cyclic rotation) at . Given , such that + is Eulerian and is a demand graph with | ( )| = for some ∈ N we say that + encodes an instance of the Eulerian Edge-Disjoint Paths problem.That is, given and we are to decide whether there exist edgedisjoint paths 1 , . . ., -we call the collection L = { 1 , . . ., } a -linkage-such that connects to for some ( , ) ∈ ( ) for every 1 ≤ ≤ .(Note that a priori = , = and = are possible for any 1 ≤ , ≤ .)Given such an instance, if the respective edge-disjoint paths exist we call it a YES-instance, otherwise we call it a NO-instance. Moreover, we assume the readers to be familiar with the notion of undirected treewidth and will at times talk about directed treewidth (see [16,19] for de nitions and results) although the exact de nitions of either notions will not be of importance.The reason why we will not need the directed treewidth is that Eulerian digraphs of bounded degree admit 'high' undirected treewidth if and only if they admit 'high' directed treewidth and thus both notions are qualitatively the same (this is not true in general directed graphs). 
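Since the algorithm described later needs an upper bound on the undirected treewidth of the underlying graph, the following sketch shows one way to obtain such a bound with networkx's approximation heuristics; the toy digraph and the use of `treewidth_min_degree` are assumptions made for illustration, not part of the paper's construction.

```python
# Upper-bound the undirected treewidth of the underlying graph of an Eulerian digraph.
# The min-degree heuristic gives an upper bound only; exact treewidth is harder to compute.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

D = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"),      # one directed cycle
                ("a", "c"), ("c", "b"), ("b", "a")])     # and its reversal: Eulerian

U = nx.Graph(D.to_undirected())          # underlying undirected graph
width, decomposition = treewidth_min_degree(U)
print("treewidth upper bound:", width)
```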
Theorem 2.3 (Theorem 2.2 in [16]).Let + be Eulerian and let be its directed treewidth.Let be the undirected treewidth of Figure 1: A cylindrical wall of order 4. The perimeter of the wall is depicted using thick edges. An important result proved in [19] that is central to our arguments is that high directed treewidth guarantees the existence of a large cylindrical wall W. See g. 1 for a de nition by picture.We refer to the paths marked 1 , . . ., as wall-cycles and to the paths 1 , . . ., 2 as the horizontal paths.We say that the wall-we omit 'cylindrical' whenever we talk about walls in digraphs-is of order or simply a × -wall with a natural extension to ×walls (note that a -wall contains 2 horizontal paths).The vertices in the intersection of horizontal paths and wall-cycles are called coordinate vertices.We may write W = ( 1 , . . ., ; 1 , . . ., 2 ) to mean a × -wall with a speci ed ordering of the wall-cycles and horizontal paths in a xed plane drawing as in g. 1.Thus, tying to the above, whenever we will say 'high treewidth' this implies a 'large cylindrical wall as a subgraph'. Further, it was shown in [3] that computing such a wall is xedparameter tractable parameterized by the treewidth.In particular, this means that for Eulerian directed graphs we can nd a large cylindrical wall in fpt-time parameterized by the undirected treewidth.Summarising we get the following.Theorem 2.4.There is a computable function : N → N such that every Eulerian digraph + of directed treewidth at least ( ) for ≔ | ( )| contains a cylindrical wall W of order with (W)∩ ( ) = ∅ as a subgraph that can be found in fpt-time on . In order to use theorem 2.3 and facilitate many of the arguments made in the paper we rst reduce the Edge-Disjoint Paths problem to the class of Eulerian digraphs of maximum degree four via an easy reduction.Lemma 2.5.Let + be Eulerian where is the demand graph with | ( )| = ∈ N. Then + can be reduced in polynomial time to a new instance ′ , ′ such that ′ + ′ is Eulerian and in which each non-terminal vertex has degree 4, all terminals have degree 2, no terminal vertex is part of two edges in ( ) and + is a YES-instance if, and only if, ′ + ′ is a YES-instance. Throughout the rest of the exposition we will tacitly assume our graphs + to be Eulerian and such that every vertex ∈ ( ) \ ( ) is of degree four and every vertex in ( ) is of degree two unless stated otherwise. STRUCTURE OF THE PROOF We continue with the algorithm proving the main theorem 1.2 before giving a high level description of the proof of its correctness.Let + be an Eulerian directed graph of maximum degree four, where is the demand graph encoding an instance of the Edge-Disjoint Paths problem with | ( )| = ∈ N.That is, we want to decide whether there exists a -linkage L = { 1 , . . ., } where connects to for ( , ) ∈ ( ) for every 1 ≤ ≤ . The Main Idea of the algorithm is to keep reducing the instance using the irrelevant vertex technique-in our case irrelevant cycle technique-until the graph has bounded undirected treewidth whence the instance can be solved in fpt-time using standard techniques (e.g. 
using Courcelle's Theorem [5]; more precisely an adaptation of it for directed graphs [1]).In a nutshell the irrelevant cycle technique works as follows: as long as the treewidth of is not bounded by a function in , we are able to locate a cycle in the graph whose deletion does not change the existence of a solutionby deleting a cycle we mean the graph − ≔ ( ( ), ( ) \ ( )) after removing possibly isolated vertices.We can therefore repeat this process until, eventually, the graph is of bounded treewidth and proceed as discussed. Our goal in this paper is to show that the Edge-Disjoint Paths problem on Eulerian digraphs is xed-parameter tractable and not to optimise the running-time of our algorithm.In our correctness proof we establish several results that can be used more explicitly in the algorithm to get a better running-time performance and sometimes we deliberately chose to bloat up the numbers at the cost of a worse overall running time when granting slicker and conciser proofs, circumventing case distinctions or ddling with structural details. Readers familiar with the work of Robertson and Seymour [23,25,26] on Graph Minors will see the similarities between our algorithm and our approach to prove its correctness and the line of argumentation presented in their papers.To bring the readers not familiar with their work onboard, we will brie y summarise the broad line of argumentation behind the arguments that have relevance for this exposition. The Undirected Case.There are two classes of graphs central to the work of Robertson and Seymour: • Cliques on ∈ N vertices denoted by : a graph on vertices where every pair of vertices is connected by an edge, • Grids of order : a graph consisting of disjoint paths = ( 1 , . . ., ) of length ( − 1) such that is connected to +1 for every 1 ≤ ≤ − 1, 1 ≤ ≤ . Robertson and Seymour have shown that given any graph of high treewidth, the graph contains a large grid as a minor-taking a subgraph and contracting edges in that subgraph results in a large grid-and in particular a large wall as a subgraph (the underlying undirected graph of g. 1 after deleting the bent edges is an example of an undirected wall).They then continue to analyse how the rest of the graph is attached to that wall, using the wall as a kind of skeleton for the rest of the graph; like drawing a graph on squared paper, using the corners of the squares as vertices (and adding any missing vertices to the drawing of course).A central result being that, either the remainder of the graph has many attachments to the wall connecting many di erent parts 'reasonably far apart' from each other, in which case they were able to nd a large clique minor in the graph, or, if we cannot nd such a clique minor, then a large part of the wall is 'almost' planar embeddable; it is what they call at. To understand the intuition behind the de nition of atness, it is important to rst think about how one can construct a clique-minor starting from a wall.The crucial concept to the existence of large clique-minors is the existence of many disjoint 'non-planarities', i.e., crosses.Of course, when working with drawings, the exact edges that cross in the drawing depend on the drawing at hand.This leads to the following de nition of crosses. 
Intuitively the idea here is, that when trying to draw in a disc with the vertices Ω drawn on the boundary of the disc in its prescribed order, the paths 1 , 2 cross in the drawing in the common sense.Thus, when trying to nd a clique-minor, one tries to locate many distinct crosses spread all over a plane wall, using the wall to connect them.However, it is not hard to see that given a pair ( , Ω) as above it may well be the case that there is no Ω-cross in despite being highly non-planar.To see this, think of a plane wall, Ω being its four corners in clockwise order, where we attach a large to a single vertex somewhere inside the wall.This example can be strengthened as we see next. Flat Walls.The duality between nding a large clique-minor or a at wall is one of the many cornerstone results in the graph minor series, a theorem that was later on named the Flat Wall Theorem [20].In a nutshell, a wall W ⊆ can be thought of as at if its perimeter (the outer cycle when drawing it in the plane) is separating in -marking the outside and inside of the wall in a sense-such that the inside is 'cross-free'.More precisely, deleting ( ) from let ⊂ \ ( ) be the unique component with (W) ∩ ( ) ≠ ∅.Then we say that W is at in if there exists no Ω-cross in the 'inside of the wall', that is for ( [ ∪ ], Ω) where Ω contains the four corners of the wall in clockwise order.The graph [ ∪ ] is formally referred to as the compass of W; note that W ⊆ [ ∪ ].It turns out that being at has rather strong impacts on the topological structure of the compass as we brie y discuss next. (Non)-Planarity and the Two-Paths Theorem.Planar graphs behave very rigidly when it comes to routing problems: if a path traverses an undirected wall completely (it has both its ends on the perimeter), it intuitively cuts the wall into two disjoint parts since no other path is allowed to cross the rst one when we require the paths to be vertex-disjoint.Non-planarities allow for edges that can 'hop over' the rst path making it possible for pairs and sets of paths to cross: take a wall and add two crossing edges connecting diagonally opposite corners for each face, then the resulting graph is highly non-planar, it contains large clique-minors, and allows to cross paths easily using said non-planarities.In fact, the relation between planarity and rigidness in routing can be measured by what is called the Two-Paths Theorem which sees a proof in the graph-minors series due to Robertson and Seymour.Essentially, the Two-Paths Theorem proves that given a pair ( , Ω), either we can 'nicely embed' into a disc drawing Ω on its boundary respecting its order, or admits an Ω-cross.By a 'nice embedding' we mean that can be planar embedded in a disc up to ≤ 3-separations, which are replaced by a respective clique on ≤ 3 vertices in the embedding. 
A very important implication of the Two-Paths Theorem is that at walls can be nicely embedded xing Ω to be the corners of the wall as above.That is, given a wall with corners 1 , 2 , 1 , 2 in clock-wise order and perimeter , the Two-Paths Theorem implies that either we nd two 'crossing paths' connecting 1 to 1 and 2 to 2 in the compass of the wall refuting its atness, or we can embed the compass (after replacing ≤ 3-separations by respective cliques on the ≤ 3-vertices) into a disc with the corners of the wall embedded on the boundary of the disc.The fact that it is 3separations crucially helps with solving the Disjoint Paths problem: no two vertex-disjoint paths starting and ending outside of the at wall can enter the wall and both use (enter and leave) a part attached via a ≤ 3-separation to the wall, for there are not enough vertices to disjointly enter and leave said part. The Structure Theorem.With the Flat-Wall Theorem in hand, Robertson and Seymour proved that given an integer ∈ N and a graph that does not admit a -minor, then we can decompose the graph into chunks that can be glued together at vertices in a tree-like fashion such that two parts overlap in only a few vertices with respect to -a tree-decomposition of low adhesion-such that every chunk can be 'almost' embedded-up to a bounded number of apices and vortices which we will introduce shortly-in a surface Σ of genus bounded in , where every chunk 'uses up' the surface.By 'using up the surface' we mean that the graph cannot be embedded in a lower surface unless we delete an large (not boundable in ) number of vertices-the embedding has high representativity.That is, given such a chunk ′ , after deleting a few special vertices called apices (think of them as vertices connected to all other vertices of the graph, hence introducing a lot of non-planarities but they can be used by at most a single path), we can embed the graph in a surface of genus bounded in up to ≤ 3-separations and a few highly local non-planar regions called vortices that may be harder to disconnect.One may think of a vortex as an 'untamed' subgraph of that given the above embedding is drawn in a disc Δ ⊂ Σ (although not planar) where one usually by cuts a hole into Σ along the boundary of Δ and pushes , and thus the local non-planarities, into the the hole such that is attached to the boundary of the hole (drawing its attachment vertices on the boundary of the hole).Further the above embedding guarantees the vortices to be of bounded depth, i.e., given any such vortex there is no large linkage between any two halves of the boundary of the hole (being Δ) no matter how we choose the halves; so the boundary is rather loosely connected through the hole.In order to get to the above structure theorem one can start by embedding a at wall using the Flat-Wall Theorem and then extend the embedding from there by introducing handles, cross-caps, apices and vortices (see also [21]). 
Routing Paths Disjointly.Given the above structure theorem, the fpt-algorithm for nding vertex-disjoint paths works in three major steps: Either the graph has a -minor, in that case Robertson and Seymour locate a vertex of the -minor that is irrelevant to the problem (think of a large clique and trying to route two paths.Then it seems intuitive that the paths do not need all of the vertices of the clique, for every vertex is connected to every vertex and thus no path needs to enter the clique twice).Thus we may assume that the graph has no large clique-minor left after deleting said vertices, in which case we can nd the above described embedding for the graph in fpt-time. In a next step Robertson and Seymour proved that, given the 'quasi-embedding' from the structure theorem above, one can locate a vertex deeply nested inside the at wall that is irrelevant to the instance and simply delete it (this proof is very technical and far from easy; in the planar case it seems intuitive that it should be true, for no path should need to use much of a wall as one cannot cross any paths in it).The proof works via several inductions again using three major steps.First it is shown that the theorem holds true for planar graphs containing a large wall (which is clearly at).This is then leveraged to graphs embeddable on xed surfaces by induction on the genus of the surface.In a last step the proof is extended to the 'quasi-embeddings' by proving that solutions do not enter and leave vortices too often, reducing it to the embedded case, in a sense 'killing vortices'.(We omit a discussion about apices as we will not encounter that problem ourselves). Finally one repeats both steps above, deleting vertices until the graph has no large wall left and is thus of low treewidth; use Courcelle's theorem from here. 
Back to Eulerian Digraphs While many of the arguments and techniques we use are highly inspired and follow the same line of reasoning as Robertson and Seymour's, we note that our proofs, the respective constructions, and ideas behind them are in no way easily derivable from the results presented in [23,25,26].For example the standard graph-minor structure theorem due to Robertson and Seymour [24] is of no direct use to us.One reason being that there is no straightforward argument how undirected clique-minors help in routing directed edge-disjoint paths; in particular edge-contractions do change the instance.Also, knowing that there are no undirected clique-minors left is no real help either, as given a drawing of + in the plane in no way yields enough structure to forbid certain edge-disjoint linkages in the graph.To see this note that strongly planar vertices may still help in crossing paths in the same spirit as discussed above-we will make this more precise shortly-a phenomenon that in the (undirected) vertex-disjoint case only appears if the graph itself is non-planar (as given by the Two-Paths Theorem).But taking any drawing of a non-planar graph and adding vertices at points of crossing edges results in a planar graph which has at least the possible edge-disjoint paths as the non-planar graph and at most an undirected 4 -minor.However, vertices of degree four that are not strongly planar lose said intuition and do not allow to cross paths in the same spirit; both strongly planar vertices and non-strongly planar vertices cannot be distinguished in the underlying undirected graph.Hence it is not obvious how to leverage the undirected graph-minor structure theorem-and in fact we do not in this paper-neither is it obvious how to use clique-minors (not even directed clique minors) for routing, which is why we do not.Note further that many results in the graph-minor structure theory rely on inductive reasoning, separating the graphs into smaller graphs, deleting parts of the vertices and edges, splitting vertices, contracting subgraphs; arguments that cannot easily be transferred to the Eulerian setting, for they may destroy the Eulerianness of the graph or, in the latter case, augment the set of solutions by creating new ways to route the paths edge-disjointly.This problem required us to develop new techniques that are tailored towards Eulerian digraphs and the edge-disjoint case, bringing to light a deeper structural understanding of both. 
More generally, the fact that the graphs in question are directed makes algorithmic problems often harder, and especially in the eld of structural graph theory the direction of edges has turned out to be a major nuisance in the past; for example there is no directed analogue of the aforementioned Two-Paths Theorem which in the undirected setting, as elaborated above, is used in [23] to prove nice embedding properties for the at wall given the absence of large clique-minors.Also, perhaps somewhat counter-intuitively at rst, and certainly in contrast to the undirected case, given a cylindrical wall with many disjoint non-planarities (that is, crosses) that are pairwise 'far apart' on the wall does in general not yield a directed clique-minor [12]; whereas on undirected graphs it does, an observation that lies at the core of the undirected Flat-Wall Theorem [20,23].In particular, topological obstructions do not as easily infer the existence of large clique minors.Fortunately, most nuisances seem to disappear when focusing on Eulerian digraphs and there are other routing devices that help with routing paths (edge-)disjointly. Revisiting Notation We start with introducing the most relevant notions needed throughout the remainder of this exposition. In our setting the irrelevant cycles found by the algorithm are either cycles of some large Router-a collection of edge-disjoint cycles that pairwise intersect-or cycles deeply nested inside an Euler-embedded at swirl-a collection of edge-disjoint concentric cycles that alternatingly change their orientation (with respect to the orientation of the plane they are embedded in). De nition 3.2 (Routers and Swirls is a graph consisting of edgedisjoint cycles with 1 ≤ ≤ such that they pairwise intersect. and two consecutive cycles and +1 have di erent orientation with respect to a given orientation of the plane for 1 ≤ < ≤ . Let be an Eulerian digraph and W a large cylindrical wall in .A -tile ⊂ W is a connected subgraph whose underlying undirected graph is an undirected × 2 -wall (see g. 3).We say that an -swirl S = 1 ∪ . . ., ∪ is induced by W if there is antile ⊂ W such that ⊂ S; see g. 2 for an example.Intuitively, routers and induced swirls will be to Eulerian digraphs what cliques and walls are to undirected graphs. Next we de ne the notions of crosses relevant to the exposition, the rst of which are wall-usable crosses, i.e., crosses that are only readily usable when going with the ow of the cylindrical wall.De nition 3.3 (Wall-usable crosses).Let ∈ N and let W = ( 1 , . . ., ; 1 , . . ., 2 ) be a plane cylindrical -wall.Let ⊂ W be some tile of the wall with corners 1 , 2 , 1 , 2 visited in clockwise order such that 1 , 2 lie on a common wall-cycle ℓ and 2 , 1 lie on a common wall-cycle for some 1 ≤ ℓ < ≤ .Then we say that there is a wall-usable -cross if there exist edge-disjoint paths 1 , 2 ⊆ − (W − ) such that connects to for = 1, 2. When investigating swirls induced by walls, they do not have any apparent ow direction.This leads to the following notion of swirl-usable crosses which, in turn, comes with a natural de nition of at swirls.De nition 3.4 (Swirl-usable crosses and at swirls.).Let be Eulerian and S = ( 1 ∪ . . .∪ ) ⊆ be an -swirl induced by an -tile ⊂ with denoting the outer-cycle of S. We dene S [ ] ≔ ∪ where is the unique component of − containing 1 .Let 1 , 2 , 1 , 2 be the four corners of appearing in clockwise order given a plane embedding of S. 
Then S admits a swirl-usable T-cross if there exist two disjoint paths Q_1, Q_2 ⊆ S[T] such that Q_i connects c_i to d_i or vice versa for i = 1, 2. We call S flat if it does not admit a swirl-usable T-cross.

In particular, the above reveals that a plane graph may contain swirl-usable crosses if it contains strongly planar vertices. It turns out that if it does not contain (well-connected) strongly planar vertices, then there is no swirl-usable cross. This is a directed version of the Two-Paths Theorem for Eulerian digraphs, as we will clarify later.

The Algorithm

Our algorithm exploits that, given a large router in our graph G, that router contains some cycle whose deletion does not change the instance. In particular, we present an algorithmic approach that starts with a cylindrical wall W and either finds said router grasped by W (think of this as the router being well connected to the wall) or a flat swirl induced by W; this is given as the Flat-Swirl Theorem (see section 3.3 for more details). Then, after at most |E(G)| steps there is no router left and either the treewidth of the graph is low (bounded in k) or the treewidth of the graph remains high. If the treewidth of the graph is low (with respect to k) the problem can be solved using a variant of Courcelle's theorem adapted to directed graphs as pursued in [1]. Thus, assume the treewidth is still high. The Flat-Swirl theorem 3.5 then implies the existence of (as well as an algorithmic way to find) a large flat swirl S in G. Given S we show that there exists an irrelevant cycle deeply nested inside the flat swirl. This is presented as theorem 3.12, the proof of which needs a lot of preparation and machinery. In either case, router or flat swirl, we are able to inductively reduce the input instance to an equivalent instance of low treewidth in fpt-time on k, which in turn is an instance that we can solve in fpt-time as discussed. We proceed by giving the algorithm, proving the main theorem 1.2 of this paper, referring to theorems we will only introduce and discuss subsequently.

Proof of theorem 1.2. Let G + H be an instance of the directed Eulerian Edge-Disjoint Paths problem such that G + H is of maximum degree four and every terminal vertex is of degree two. Let k ≔ |E(H)|. Let r_1(k) ≔ r_{3.9}(k) and r_2(k) ≔ 2h_{3.12}(k). Given r_1, r_2, define w_1(k) ≔ w_{3.5}(k; r_1, r_2). And finally let t(k) ≔ t_{2.4}(w_1(k)). The following algorithm decides the instance in fpt-time on k.

1. Determine whether tw(G + H) ≤ 6 · t(k), which can be done in fpt-time on k. If this is the case, then we can solve the instance using a version of Courcelle's theorem for directed graphs [1] in fpt-time on k. Otherwise continue with Step 2.

2. Since tw(G + H) ≥ 6 · t(k), theorem 2.3 implies that dtw(G + H) ≥ t(k). Using theorem 2.4 we deduce that there is a w_1(k) × w_1(k)-wall W in G (away from H) which can be found in fpt-time on k. Using the Flat-Swirl theorem 3.5 we deduce that either we can find a flat r_2(k)-swirl S induced by a same-sized tile T ⊂ W or an r_1(k)-router R grasped by W in G away from V(H), in fpt-time on k. If we find a router go to Step 3; else proceed with Step 4.

3. If we have found an r_1(k)-router R, use the irrelevant cycle theorem 3.9 for routers to find and delete a cycle C ⊂ R that is irrelevant to the instance in fpt-time on k. That is, the graph (G − C) + H is Eulerian and an equivalent instance to G + H. After having successfully reduced the instance, go back to Step 1 and start over.
4. If we have found a flat r_2(k)-swirl S induced by some same-sized tile T, use theorem 3.8 to construct an equivalent instance G′ + H with |V(G′) ∪ E(G′)| ≤ |V(G) ∪ E(G)|, together with a separation (A, B) with A ∪ B = G′, in polynomial time, such that G′[A] contains a flat r_2(k)-swirl S′ with V(H) ∩ G′[A] = ∅ and G′[A] can be Euler-embedded in a disc. Finally, using the embedding of G′[A], the irrelevant cycle theorem 3.12 for flat swirls yields an irrelevant cycle C ⊂ G′ to the instance, nested deeply inside S′, in fpt-time on k. Thus (G′ − C) + H is a reduced and equivalent instance; go back to Step 1 and start over (a schematic code sketch of this reduction loop is given further below).

We continue with a dissection of the above proof, providing further (high-level) details concerning each of the steps; the first of which is self-explanatory and rather trivial. It is noteworthy that for general directed graphs high undirected treewidth does not imply high directed treewidth in any meaningful way; e.g., acyclic grids have directed treewidth 1 but contain a large underlying undirected grid and hence have high undirected treewidth. Thus, the Eulerianness (as well as the assumption on the bounded degree) is crucial for the first step of the algorithm. Note further that there is no direct analogue to Courcelle's Theorem [5] for directed treewidth, again highlighting that the Eulerianness is crucial for our algorithm.

The Flat-Swirl Theorem

The Flat-Swirl Theorem is in the same spirit as the undirected Flat-Wall Theorem as seen in [23] or the directed Flat-Wall Theorem as pursued in [12], and the many more grid-like theorems lying at the heart of graph structure theorems [11,13]. In a nutshell, the Flat-Swirl Theorem states that given high (un)directed treewidth in an Eulerian digraph of maximum degree four, we either find a large router or a large flat swirl. The proof of the theorem has three major steps. In a first instance, given high directed treewidth, we find a large cylindrical wall W in fpt-time on k using theorem 2.4. We continue with analysing how the rest of G attaches to W.

Finding and Untangling a Swirl. Given an Eulerian graph G containing an r-wall W = (C_1, ..., C_r; P_1, ..., P_{2r}), we first prove that either we find a large swirl induced by the wall (not necessarily flat), or we find a large router grasped by the wall. The topological gadgets of interest in this step are the wall-usable crosses. The swirl we construct in a first instance will be what we call tangled (not to be confused with the notion of tangles introduced by Robertson and Seymour, but to be taken in the figurative sense); that is, the swirl, when taken as a subgraph, may still contain (wall- or swirl-) usable crosses.

To find said swirl, note that the coordinate vertices of the wall W ⊂ G have degree 3 in W, while said vertices have degree 4 in G; there is an in-edge or out-edge missing for each coordinate vertex, highlighted in red in fig. 3. Using this insight we analyse how the remaining paths in G − W that start (or end) in wall-coordinates attach to the wall. Given an embedding of G we may assume that the cycles of the wall run collectively in the same direction, say clockwise, that C_i lies 'left of' C_{i+1} and, similarly, that the horizontal path P_j lies 'above' P_{j+1}. Finally we denote by v_{i,j} ∈ V(C_i) ∩ V(P_j) the coordinate-vertices, i.e., vertices of degree 3; see fig. 1 and fig. 3.
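Before continuing the analysis of how the remaining paths attach to the wall, the following minimal sketch recaps the overall reduction loop from the proof of theorem 1.2 above as Python-style pseudocode. All subroutine names (the directed Courcelle solver, the wall, router, and flat-swirl finders, the irrelevant-cycle finders) and the threshold function are hypothetical placeholders standing in for the cited theorems; the sketch only illustrates the control flow, not the actual constructions.

```python
def solve_eulerian_edp(G, H, k, oracles, threshold):
    """Schematic reduction loop for the Eulerian Edge-Disjoint Paths problem.

    `oracles` bundles hypothetical subroutines standing in for the cited
    theorems; `threshold(k)` plays the role of 6*t(k) from Step 1.
    This is an illustrative control-flow sketch, not the actual algorithm.
    """
    while True:
        # Step 1: bounded treewidth -> solve directly (directed Courcelle variant).
        if oracles.treewidth(G, H) <= threshold(k):
            return oracles.solve_bounded_treewidth(G, H)

        # Step 2: high treewidth -> find a large cylindrical wall, then either
        # a router grasped by it or a flat swirl induced by it.
        wall = oracles.find_cylindrical_wall(G, H)
        kind, obj = oracles.flat_swirl_or_router(G, H, wall)

        if kind == "router":
            # Step 3: delete an irrelevant cycle of the router.
            cycle = oracles.irrelevant_cycle_in_router(G, H, obj)
            G = G.delete_edges(cycle)  # (G - C) + H stays Eulerian
        else:
            # Step 4: reduce around the flat swirl, embed it in a disc,
            # then delete a deeply nested irrelevant cycle.
            G, swirl = oracles.embed_flat_swirl(G, H, obj)
            cycle = oracles.irrelevant_cycle_in_flat_swirl(G, H, swirl)
            G = G.delete_edges(cycle)
        # Each pass through Step 3 or Step 4 strictly shrinks G, so the loop terminates.
```

With this overall control flow in mind, we return to the analysis of how the paths outside the wall attach to it.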
Suppose a path Q starts in v_{i,j}; then either the path Q ends in a vertex 'above' v_{i,j} and close to it, given the embedding, say in v_{i,j−1} ∈ V(C_i) ∩ V(P_{j−1}), in which case we call Q an up-path, or it ends somewhere further away or below, say in v_{i+2,j+1} ∈ V(C_{i+2}) ∩ V(P_{j+1}), in which case Q is what we will call a jump; see fig. 4 for a schematic representation of both. Carefully analysing the possible types of jumps (there are Type 0 jumps and two further types), we then deduce that having 'a lot' of edge-disjoint jumps (whichever type) witnesses the existence of routers, while the absence of jumps implies the existence of many up-paths which in turn witness the existence of a swirl (compare fig. 2). While the types of the jumps have no immediate meaning for this exposition, we encourage the reader to analyse fig. 4 and ponder why Type 0 jumps immediately yield wall-usable crosses, while a single jump of either of the other two types does, in itself, not witness the existence of such a cross. Leaving out some details, it turns out that jumps of the latter two types never come alone, and thus their existence witnesses the existence of a wall-usable cross in a local area nonetheless. This relies on the notions of jump-sequences and jump-cycles: starting a path with any jump, then threading it down along the wall-cycle the jump ended at to the next coordinate-vertex, we are guaranteed to find another jump along which we can extend the path; whence, repeating this construction using Eulerianness, we obtain a cycle alternating between jumps on the wall and sub-paths of wall-cycles. This way we either find a router (in which case we are done) or some swirl S. As mentioned above, the swirl we find may in itself still contain crosses: for example, the up-paths could pairwise intersect in vertices while still being edge-disjoint. We call such swirls tangled.

To get rid of this nuisance we proceed with untangling the swirl as an intermediary step towards the Flat-Swirl Theorem, resulting in a swirl S where the swirl itself is cross-less, i.e., it contains no wall-usable cross. This last step turns out to be rather easy: if the starting wall is large we either find many tangled swirls far apart, or a router, by simply applying the previous result to different tiles of the wall. Then, either one of the swirls is cross-less, or each of the swirls contains such a wall-usable cross, where many of the swirls and their crosses are pairwise edge-disjoint. This witnesses many disjoint wall-usable crosses spread over W that can again be used to build a router in the same spirit as when starting with jumps. The general scheme to build a large router using disjoint non-planarities is as follows. We try to locate 2r disjoint 'large' bands (see fig. 3) in the wall that each cover a wall-usable cross, witnessing the existence of 2r wall-usable crosses in different bands: a cross-column. Then the router can be built from the wall-cycles by threading in the direction prescribed by the wall-cycles and pairwise intersecting them in the 2r distinct bands, resulting in 2r pairwise intersecting cycles; we skip the details for they are standard in the area of graph structure theory.
Flattening a Swirl. In a second step, we refine the above analysis, proving that either we find a large router on top of the untangled swirl S, or we find a large flat sub-swirl. To this end we repeat the analysis of the previous step (we analyse the different paths attaching to S), but with the additional information that we already have a swirl, which provides even more structure to build routers. In contrast to the last section, the topological gadgets of interest in this step are the swirl-usable crosses. Compared to walls, swirls come with a richer structure: it turns out that almost any path starting and ending in a swirl, and otherwise edge-disjoint from it, witnesses the existence of a swirl-usable cross. This way we are able to guarantee that either we find a router attached to the swirl S, or the swirl contains some sub-swirl that is flat. The proof here heavily relies on the Eulerianness of the graph and in particular the Eulerianness of S[T] − S, which consists of the attachments to S that are relevant to finding the crosses. The techniques used share analogies with the techniques we introduce to find the (untangled) swirl but are far more elegant, as we are working on a swirl.

Finally, as a last step, we prove that we can reduce the instance to an equivalent instance such that S[T] can be Euler-embedded; until here there could be 2-cuts or 4-cuts in our graph that may attach highly non-planar graphs to the flat swirl, which can neither be embedded nor be used to cross paths starting and ending in the swirl, similar to ≤3-separations in the undirected setting. The reduction relies on a theorem due to Frank, Ibaraki, and Nagamochi [10], proving a version of the aforementioned Two-Paths Theorem tailored to Eulerian digraphs. To state the theorem we first need the following reductions introduced in [10, Section 3].

1. Let X ⊆ V(G) induce a 2-cut containing no terminal vertex. Let u be the tail of the edge entering X and v the head of the edge leaving X. Then delete X and (if u ≠ v) add the edge (u, v).

Using the above reductions, Frank, Ibaraki, and Nagamochi [10] define minimal instances as follows: Let G be an Eulerian digraph and (G, T_1, T_2) an instance of the unordered Eulerian Edge-Disjoint Paths problem; that is, the problem asking whether there exist two edge-disjoint paths connecting s_i to t_i or vice versa for T_i = {s_i, t_i} and i = 1, 2. Then we say that (G, T_1, T_2) is a minimal instance if none of the reductions from definition 3.6 are applicable to G. The same authors showed that applying any of the reductions in definition 3.6 results in an equivalent instance. The following is the aforementioned version of the Two-Paths Theorem.

Leveraging the above reductions in our setting in order to reduce our instance G + H by killing 2-cuts and 4-cuts, we prove the following. Note here that in his dissertation Johnson [15] implicitly proved the existence of such a flat swirl in the case that G + H is internally 6-edge-connected, removing the problem of loosely connected attachments and thus circumventing the need for a Two-Paths Theorem. The techniques we use are, however, very different from his and have more of an algorithmic flavour, analysing the attachments to swirls and walls 'by hand', revealing how a wall (stemming from high treewidth) helps in finding either swirls or routers. Further, the results established in order to find said (flat) swirl are of independent interest for future work we are pursuing.

Irrelevant Cycles in Routers

The third step of the algorithm relies on a single theorem, the irrelevant cycle theorem for routers, reading as follows.
Theorem 3.9 (Irrelevant Cycle Theorem for Routers). For every function f(k) there exists a function r(k) ≔ r(k; f) satisfying the following. Let G + H be an instance of the Eulerian Edge-Disjoint Paths problem with |E(H)| = k ∈ N. Let R ≔ C_1 ∪ ... ∪ C_{r(k)} ⊂ G be an r(k)-router in G. Then there exists a sub-router R′ ⊆ R of size f(k) ≤ r(k), which can be found in fpt-time, such that each router-cycle C ⊂ R′ is irrelevant to the instance, i.e., (G − C) + H is an equivalent instance.

In a nutshell the theorem states that, given a large enough router R in G, there is a cycle C ⊂ R in the router that is irrelevant to the instance, and it can be located in fpt-time on k. The proof of this relies on a theorem due to Frank [9] which states that the Edge-Disjoint Paths problem for Eulerian digraphs can be solved in polynomial time if the demand graph consists of two directed stars; a directed star is a graph where all the edges have the same vertex as a head (or as a tail).

Theorem 3.10 (Theorem 2.3 in [9]). If G + H is an Eulerian digraph, and H is the union of two stars, then the directed cut criterion is necessary and sufficient for the solvability of the directed Edge-Disjoint Paths problem. (In particular it can be checked in polynomial time on the instance.)

The graph constructed this way (connecting the terminals to the router, compare fig. 5) remains Eulerian by construction. We can decide the resulting instance in fpt-time using Frank's theorem 3.10 (and algorithm) and find the respective paths solving the instance (if it is a YES-instance) in fpt-time. In a nutshell, this instance asks whether there exist edge-disjoint paths routing from s_1, ..., s_k to t_1, ..., t_k that use the router cycles. If the answer is NO, then this means that not all paths can simultaneously use the router, which in turn can be used to apply inductive reasoning. If the instance is a YES-instance, then we are able to show that it remains a YES-instance after deleting part of the router R, that is, after deleting a large sub-router R′ ∪ C* ⊂ R, where C* is a designated cycle. Finally, we prove that the deleted sub-router R′ can be used in G to reroute the solution in R omitting R′ and C*, thus producing a solution in G omitting a cycle C* ⊂ R; this is proven as the following lemma.

Lemma 3.11. There exists a function r(k) satisfying the following. Let G be a graph with designated vertices S, T ⊂ V(G) such that |S| = |T| = k ∈ N and such that any matching M of edges connecting vertices in S to vertices in T results in an Eulerian graph G + M. Let R ⊂ G be an r(k)-router in G. Further assume that every pushed router-cut of (G, S, T, R) (think of this as a cut being chosen as close to R as possible) is of order 2k. Then there exists some cycle C* ∈ R such that G + H is a YES-instance if and only if (G − C*) + H is a YES-instance for every possible demand graph H. In particular we can find C* in fpt-time parameterized by k.

Note that the exact knowledge of H is not needed, i.e., it suffices for G to be pseudo-Eulerian. While the above may sound easy as described here, the proof does in no way follow immediately from theorem 3.10 and requires some fiddling and clever choices of sub-routers.
Irrelevant Cycles in Flat-Swirls

The fourth step of the algorithm is the most elaborate part, again relying on a single result, namely the irrelevant cycle theorem for flat swirls, which, in a nutshell, proves that given a graph G + H and a large Euler-embedded swirl S in a disc, the most deeply nested cycle of S is irrelevant to the instance. We prove a slightly stronger theorem, proving that cycles deeply nested in a large insulation are irrelevant to the instance. An h-insulation in a graph Γ Euler-embedded in some surface Σ is a collection I = C_1 ∪ ... ∪ C_{2h} ⊆ Γ of concentric edge-disjoint cycles embedded in a disc Δ ⊂ Σ such that C_i and C_{i+1} have alternating orientation for every 1 ≤ i < 2h, and such that Δ_1 ⊆ Δ_2 ⊆ ... ⊆ Δ_{2h}, where Δ_i ⊂ Δ is a disc bounded by the outline of C_i in Σ, containing C_i but no edge of C_{i+1}. Let Γ′ ⊂ Γ be a subgraph embedded in Σ away from Δ. Then an Eulerian subgraph H′ ⊆ Γ[Δ_1] is called h-insulated from Γ′. Given a flat swirl we may use theorem 3.8 to get an equivalent instance with the flat swirl embedded in a disc, providing us with the setting needed for theorem 3.12. We continue by elaborating, in smaller steps, on the main ideas behind the proof of theorem 3.12, which takes up over 70 pages in the full paper.

Shifting the Paradigm. Readers (now) familiar with the Graph Minor Structure Theorem due to Robertson and Seymour [24] and the main ideas behind the proof that the undirected (vertex-)disjoint paths problem can be solved in fpt-time [23] (which heavily relies on the irrelevant vertex technique [25]) will see the inspiration in our line of argumentation, but also the differences in the obstacles we had to overcome, stemming from the edge directions and the fact that we need to keep our graphs Eulerian. The general line of reasoning uses similar arguments as the respective proof of the undirected version. However, many of our proofs differ from the proofs given in [23,25,26], for many of the arguments simply do not transfer to our setting.
The arguments and ideas we provide heavily exploit the Eulerianness of the graphs and the fact that we are solving the Edge-Disjoint Paths problem rather than the Vertex-Disjoint Paths problem. The latter turns out to be far more impactful than anticipated: one of the most notable differences is that we shift the paradigm from graphs seen as a set of vertices with edges being relations on vertices, to graphs seen as incidence structures. That is, vertices and edges are equals and may (in theory) exist without the need of each other. Of course we were not the first to view graphs in that way, and by no means do we think we are pioneers in that sense: we call it shifting the paradigm because it deviates from the standard way of viewing graphs and was a very insightful and fruitful step to take. That is, an edge e ∈ E will be an element in itself, where each edge in a (non-partial) graph is adjacent to exactly two vertices, which is captured by the tail ⊆ E × V and head ⊆ E × V relations. It turns out that switching to incidence graphs facilitates and smoothens a lot of the reasoning behind the machinery we set up for the proof of theorem 3.12. Note here that the conceptually similar idea of switching to the line graph (the graph obtained when taking edges e ∈ E(G) as vertices and adding an edge whenever there exists a 2-path between two edges) and trying to leverage the arguments made for the vertex-disjoint case comes with a lot of obstacles, losing most of the inherent beauty (as far as we were able to reproduce). The key difference is that, when passing to the line graph, we create unnecessary edges that were of no use in the first place, for no solution may ever have used the 2-path representing that edge. Note further that, even if the instance has now been reduced to a Vertex-Disjoint Paths problem, many of the arguments made in [25] do not transfer to this setting trivially: we cannot delete vertices, split vertices, nor contract edges in the line graph; operations heavily used to prove the main theorems in [25,26] and in turn the irrelevant vertex theorem. Hence, since we are interested in edges, and edge-disjoint paths, it seemed only natural to take this step, unleashing more potential than anticipated.

A Minimal Counterexample. Let G + H be an Eulerian digraph and suppose that G contains an h-swirl S = C_1 ∪ ... ∪ C_h such that S[T] can be Euler-embedded in a disc, which we found in Step 2 of the algorithm. In order to prove the existence of an irrelevant cycle deeply nested inside the swirl S, we assume the contrary and let G + H be a minimal witness towards our hypothesis, together with a k-linkage L = {L_1, ..., L_k} witnessing that G + H is a YES-instance and thus, by our assumption, visiting all the cycles of S. The main theorem in the analysis of the counterexample states that, given the above, G + H adheres to a very restrictive structure: G + H is what we call an h-flower graph (see fig. 6).
In particular, V(G) = V(S) ∪ V(H) and every possible edge in G + H is either an edge of S, an edge of H, or an edge between two vertices lying on the outer-cycle C_h. In particular S[T] = S, and thus S has no attachments to take care of. The most crucial observation is that the linkage L is rigid.

Definition 3.13. We call L rigid if G + H = (L_1 ∪ ... ∪ L_k) + H and there is no other k-linkage L′ in G solving the instance G + H.

This is very restrictive and a key ingredient to most of the proofs towards theorem 3.12. Note that by theorem 3.9 this implies that the graph cannot contain a large router. Using the rigidity we can derive even more restrictive patterns for the paths in L: it turns out that any restriction of some path to the swirl S ⊆ G is what we will call a level path. That is, any component P ∈ L_i ∩ S is a path starting at the outer-cycle C_h, then going straight down to some swirl-cycle C_j and from there straight back up to C_h, where it ends; in particular a level path visits every swirl-cycle (up to the level j) exactly twice. This is a rather powerful observation that can be used to prove the irrelevant cycle theorem in the case that the h-flower graph can be Euler-embedded in some fixed surface of bounded genus.

Although we could focus on h-flower graphs for the remainder of the proof (refuting their existence and thus contradicting the minimal counterexample), we present the results in a broader setting. That is, we prove that for h large enough, there exists no rigid linkage in a graph G + H containing a flat h-swirl S. The statements and proofs need a lot of notation and the establishment of some heavy machinery, the gist of which we describe next.

Coastal Maps. The main tool that helps us to dissect the graph G + H in order to better analyse how a solution L in G behaves is what we call (weak and strong) coastal maps. The exact definition of coastal maps is rather technical, thus we will rely on intuition here: 'admitting a coastal map' means that there exist pseudo-Eulerian graphs Γ, I ⊂ G and a surface Σ (possibly with boundary) such that G = Γ ∪ I, where Γ can be Euler-embedded into Σ with a few vertices (conceptually edges that we call ports) drawn on the boundary bd(Σ) (rather, what we call the zone) of Σ, with S ⊆ Γ. Further, each component I′ of I is what we will call an island; that is, Γ ∩ I′ ⊂ V(G) is drawn on a single cuff c ∈ C(Σ) of the boundary of Σ (the boundary of Σ consists of disjoint cuffs, which are homeomorphic to circles), and there exists no large linkage connecting two halves of I′ (in particular starting and ending in Γ ∩ I′) that is otherwise contained in I′, i.e., the islands are of bounded depth. Further, the coastal map assigns (sort of) a linear decomposition to each island, guaranteeing some internal linkedness properties taming the behaviour of linkages inside the island; this is too technical to describe in detail. Intuitively, one may think of this as regrouping an island into a bounded number of chunks arranged in a cyclic order, each chunk attached to some vertex of the same cuff, such that (almost) any two neighbouring chunks are equally well connected, so we get a good grip on how a rigid solution L may behave inside the islands.

Part of the proof of the existence of a coastal map relies on a structure theorem for Eulerian directed graphs (actually we only need a structure theorem for the very restricted class of flower graphs). In his dissertation Johnson [15] proved a structure theorem for internally 6-edge-connected Eulerian digraphs that suits our needs. The theorem, restricted and adapted to terms introduced in this exposition, reads as follows.
Theorem 3.14 (Theorem 17.1 in [15]). Let t be a positive integer. There exist integers a, b, c, d, and s such that the following holds. Let G be an internally 6-edge-connected Eulerian digraph. Suppose G contains an Eulerian subdigraph which immerses a swirl of size at least s. Then either G immerses a router of size t or there exists a surface Σ and an Eulerian digraph G′ such that: 1. G′ is obtained from G by exchanging at most a edges. 2. G′ Euler-embeds in Σ with at most b islands, each of depth at most c, where every island is surrounded by d edge-disjoint cycles of alternating orientation drawn in Σ. 3. The embedding can be chosen so that there is a closed disc, disjoint from every island and every changed edge, containing an Eulerian subdigraph which immerses a swirl of size t.

We use said theorem since giving a rigorous proof ourselves (for flower graphs) would take up quite some pages without much new insight. However, since the results in the dissertation have not yet been published, we will provide a proof (and hopefully a structure theorem for general Eulerian digraphs) in the future, using differing techniques matching the constructions and results we provided in this exposition.

Shipping with Coastal Maps. Recall that we assume G + H to be a graph without a large router containing a flat h-swirl, and L to be a rigid k-linkage (just think of G + H as an h-flower graph). Using the above one can show that, after cutting through a bounded number of edges in G, the graph G + H admits a (weak) coastal map (note that this does not follow immediately from theorem 3.14 but requires us to chart the islands in order to get the aforementioned linear decompositions). To make the idea of cutting edges more rigorous, think of it as follows: let e ∈ E(G) be an edge; then, since L is rigid, there exists a path L_i ∈ L with e ∈ E(L_i). Since L_i is part of the solution to G + H, there exists (s, t) ∈ E(H) such that L_i starts in s and ends in t. Now cut e = (u, v) into two edges e_1 ≔ (u, x) and e_2 ≔ (y, v) by introducing the respective new vertices. Then replacing (s, t) ∈ E(H) via (s, x), (y, t) does the trick (since this augments k, we require the number of cuts to be bounded).

Finally, we prove that if G + H admits a coastal map of bounded depth, then G + H cannot admit a rigid linkage, a contradiction to the minimal counterexample. The proof reduces the k-linkage L to a rigid f(k)-linkage L′ of Γ (for a function f independent of h). The proof works via induction by cutting through a bounded number of edges in G, whose endpoints result in new demand edges for a new demand graph H′ which, by our assumption, need not be embedded in Σ and need not be part of any island; this is crucial. Then, after doing some more skilled cuttings (we massage a weak coastal map into a strong one), if any cuff c ∈ C(Σ) contains 'too many' vertices and thus too many ports (the edges adjacent to the vertices marking the entries to the islands from the surface), then the linkage can be rerouted inside the island at c to omit part of the ports. This uses the aforementioned linear decomposition in the islands. Since L was assumed to be rigid, this is impossible (we cannot find a different linkage solving the same instance) and thus only a bounded (in k) number of vertices may lie on each cuff of Σ. This guarantees that our linkage L interacts with each island only a bounded number of times. Therefore, after cutting all the (boundedly many) ports, we get a new graph G′ + H′ where G′ is completely Euler-embedded in Σ, together with a rigid f(k)-linkage in G′ containing an Euler-embedded h(k)-swirl; we killed the islands.
Shipping in the Open Sea. We reduced the problem to the case where we have a graph G + H together with an h-swirl S such that G is Euler-embedded in Σ and admits a rigid k-linkage. The last step in the proof of theorem 3.12 is to show that this is impossible; a minimal counterexample is again given by an h-flower graph as above. We leverage a result due to Cygan, Marx, Pilipczuk, and Pilipczuk [6] implying that, given a digraph Euler-embedded in a disc containing a large embedded swirl, the most deeply nested cycle is irrelevant to the instance; this marks the base case when Σ is a disc. Our proof uses induction on Σ and the structure of the minimal counterexample G + H: we are able to cut the surface Σ along a closed curve, reducing the genus of the surface, such that the curve intersects the swirl S only in a bounded (say ℓ(k)) number of edges after deleting some edges. This results in a surface of lower genus and an Euler-embedded graph G′ containing a large swirl S′, together with a rigid k′ = (k + ℓ(k))-linkage; the claim then follows by induction. The existence of the just mentioned curve heavily relies on the fact that for each path L_i ∈ L every component P ∈ L_i ∩ S is a level path, as mentioned above. This allows the following: take a sub-swirl S′ deeply nested in S. One can show (using rigidity) that there must be some path L_i ∈ L containing some sub-path Q ⊂ L_i that 'uses' the genus of the surface (closing Q to a cycle and cutting the surface along it reduces its genus) such that Q has both its endpoints on S′. Then Q ∩ S has two components Q_1, Q_2 connecting the outer-cycle of S′ to the outer-cycle of S, where both Q_i, being sub-paths of some level path, do not visit any of the swirl-cycles twice. Thus, deleting (the edges of) both paths, cutting Σ along the curve traced by Q, and cutting straight through S′ connecting the ends of both paths in order to reduce the genus, results in a graph still containing a large undirected wall and thus a large swirl by our previous work. All in all this augments the number of paths in the linkage roughly by the order of S′, i.e., ℓ(k), concluding the idea of the induction.

It is noteworthy that in his dissertation Johnson [15] proved a theorem similar to what we dubbed Shipping in the Open Sea, which in our setting implies that if G is Euler-embedded in Σ and contains a large swirl, then the linkage cannot be rigid. Note also that it is not at all clear how to extend the base-case result for Euler-embedded graphs to the general setting (which we capture by coastal maps). While his proof relies on a classification of homotopy classes of curves, our proof is a consequence of preliminary work we have done to analyse the minimal counterexample (this is of independent importance to the proof of theorem 3.12), which is of interest in its own right, yielding a deeper understanding of the behaviour of rigid linkages in minimal counterexamples (e.g., level paths).

This concludes the high-level discussion of theorem 3.12, gathering the main ideas for a proof of our main theorem 1.2.
As stated in [2, Problem 4.5.7], the status of the k-Edge-Disjoint-Paths problem on Eulerian digraphs is wide open and could range from fixed-parameter tractable with respect to k to NP-complete already for k = 4. The main result of this paper is to settle this long-open problem by showing that the k-Edge-Disjoint-Paths problem is fixed-parameter tractable parameterized by k on the class of all Eulerian digraphs.

Theorem 1.2 (Main Theorem). The k-Edge-Disjoint-Paths problem in Eulerian digraphs is fixed-parameter tractable parameterized by the number of terminal pairs k.

Since each time we enter Step 3 or Step 4 we reduce |G| ≔ |V(G)| + |E(G)| by at least 1, the above algorithm stops after at most |G| many recursive steps, each of which runs in fpt-time. This concludes the proof. □

The remaining reductions of definition 3.6 read as follows. 2. Let X ⊆ V(G) induce a 2-cut containing exactly one terminal t ∈ X. Contract X to the terminal t, deleting any resulting loops. 3. Let X induce a 4-cut such that the subgraph G[X] is connected, |X| ≥ 2, and X contains no terminal vertex. Then contract G[X] to a single vertex of degree four, and delete possible loops.

Figure 2: A 6-swirl with highlighted cycles induced by a tile.
Figure 3: Part of an embedded wall with wall-coordinates. The green area is a subwall, the blue area forms a band, and the intersection of both, marked in dark blue, is a tile.
Figure 4: Wall with Type 0 jumps, the two other jump types, and up-paths, each marked by a distinct symbol.
Figure 5: Constructing an auxiliary instance by connecting the terminals to a 3-router in G.
Figure 6: An example of a possible h-flower graph in the case of k = 2, where H is highlighted in purple.
16,462.6
2024-02-21T00:00:00.000
[ "Mathematics" ]
OUR EXPERIENCE WITH ATOPY PATCH TESTS WITH AEROALLERGENS

The aim of our study was to evaluate the importance of atopy patch testing with aeroallergens as a diagnostic method in patients suffering from atopic dermatitis. Method: Complete dermatological and allergological examinations were performed in 29 patients (10 men, 19 women) with an average age of 27.8 years (min. 17, max. 57 years) and a median SCORAD of 24.2 points (s.d. 13.3 points). Wormwood, grass, dog dander, cat dander, Dermatophagoides farinae, Dermatophagoides pteronyssinus and birch pollen were examined in the diagnostic procedures. Skin prick tests and specific IgE were examined; the atopy patch tests were performed with the aeroallergen extracts used for skin prick tests at a concentration of 1× the skin prick test concentration. Results: Specific IgE and skin prick tests to one or more tested aeroallergens were positive altogether in 27 patients; atopy patch tests were positive in only one of these patients. Conclusion: For atopy patch testing with aeroallergens, the concentration of 1× skin prick tests is too low to confirm the eczematous reaction in patients suffering from allergy to inhalant allergens.

Introduction

The atopy patch test (APT) involves epicutaneous application of type I allergens known to elicit an IgE-mediated reaction, followed by evaluation of the eczematous skin reaction after 48 and 72 h (1). It represents a model of cellular immunity reaction and is presumed to reflect delayed-phase clinical reactions. According to (2), its value is supported by the fact that atopic dermatitis is the result of complex immune interactions and involves both Coombs and Gell type IV and type I reactions. The APT is considered a useful diagnostic procedure in patients with atopic dermatitis allergic to inhalant allergens (house dust mite, pollen and animal dander) and in children younger than 2 years with food allergy. The sensitivity and specificity of the test greatly depend on the tested allergens and patient age (2). The limitations of atopy patch tests include the lack of test standardization. After standardization, the APT may provide further diagnostic information in addition to the skin prick test and serum immunoglobulin E values and may be able to evaluate the actual clinical relevance of immunoglobulin E-mediated sensitizations for eczematous lesions (3). Various concentrations of allergens for APT are described in the literature, ranging from 1× skin prick test (SPT) (10,000 AU/ml) to 1,000× SPT (4).

The aim of our study was to evaluate whether the use of aeroallergens at 1× the skin prick test concentration (= 100 IR/ml) is a suitable method for atopy patch testing. These tests were performed in patients suffering from atopic dermatitis aged 14 years and older.

Methods. Patients. Twenty-nine patients over 14 years of age with atopic dermatitis (the diagnosis was made according to the Hanifin-Rajka criteria (5)) were examined at the outpatient department of the Department of Dermatology and Venereology, Faculty Hospital and Medical Faculty of Charles University, Hradec Králové, Czech Republic, from September 2010 to May 2012. Complete dermatological and allergological examinations were performed in all included patients (including examinations for asthma bronchiale with spirometry). The occurrence of rhinoconjunctivitis was evaluated.
Scoring of atopic dermatitis. The severity of eczema was scored in agreement with the SCORAD score, with assessment of topography items (affected skin area), intensity criteria (extent of erythema, oedema, crusts, excoriations, lichenification, xerosis), and subjective parameters (extent of itch and loss of sleep). A score of up to 20 points indicates the mild form, 21 to 50 points the moderate form, and over 50 points the severe form of atopic dermatitis.

Tested allergens. Wormwood, grass pollen, dog dander, cat dander, house dust mites (Dermatophagoides farinae, Dermatophagoides pteronyssinus) and birch pollen were used in the testing procedures. After discontinuation of antihistamines and topical steroids for at least 5 days, and of systemic steroids and UV therapy for at least 2 months, the skin prick tests and the atopy patch tests were performed. Specific IgE was examined.

Skin prick test. Commercial extracts Alyostal (Stallergenes, France) were used for skin prick tests (SPT). SPTs were placed on the volar side of the forearm according to the extent of atopic dermatitis. SPTs were carried out by a standardized method using lancets with a 1 mm tip. The results were read after 15 minutes and were assessed by comparison with the wheal induced by histamine (10 mg/ml) and the negative control. A wheal with a diameter greater than 3 mm in comparison with the negative control was scored as positive.

Specific IgE. The serum level of specific IgE to the tested aeroallergens was measured with the CAP method (FEIA system, Pharmacia Diagnostics, Uppsala, Sweden). A level of specific IgE higher than 0.35 U/ml was assessed as positive.

Atopy patch tests. Atopy patch tests were performed on non-lesional, non-abraded, untreated skin of the back during a remission. A technique similar to conventional patch testing was used, with CURAtest F strips (Lohmann & Rauscher International GmbH & Co. KG, D-56579 Rengsdorf, Germany) with a 12 mm cup size. For atopy patch testing, the allergens were used at a concentration of 1× SPT (commercial extracts Alyostal, Stallergenes, France); 1 ml of each allergen was administered to the 12 mm cup. The reactions were evaluated 48 and 72 hours after the first application of the allergens. Grading of positive APT reactions was similar to the criteria used in conventional contact allergy patch testing, with the modifications of the European Task Force on Atopic Dermatitis (ETFAD) Consensus Meetings; i.e., + erythema, infiltration; ++ erythema, infiltration, papules (up to 3); +++ erythema, papules from 4 to many; ++++ erythema, many or spreading papules and vesicles. Test application and reading were performed by an investigator with no knowledge of the patient's history. Only reactions from + (erythema, infiltration) onwards were designated positive (6).

Patients. Altogether 29 persons suffering from atopic dermatitis were included in the study: 10 men and 19 women, with an average age of 27.8 years (min. 17, max. 57 years) and a median SCORAD of 24.17 points (s.d. 13.3 points).

Personal history. Rhinoconjunctivitis was recorded in 21 patients. Spirometry examination. Asthma bronchiale was recorded in 12 patients. Specific IgE. Specific IgE to the tested aeroallergens was recorded in 16 patients (to birch in 2 patients, to grass in 7 patients, to wormwood in 4 patients, to cat or dog dander in 6 patients, and to Dermatophagoides farinae or pteronyssinus in 9 patients) (Table 1).
Skin prick tests. Positive results in skin prick tests to the tested aeroallergens were recorded in 25 patients (to birch in 8 patients, to grass in 17 patients, to wormwood in 2 patients, to cat or dog dander in 5 patients, and to Dermatophagoides farinae or pteronyssinus in 11 patients) (Table 1).

Atopy patch tests. The atopy patch test was recorded as positive in 1 patient, to wormwood, grass and Dermatophagoides farinae (Table 1, patient no. 23), evaluated as ++. In this patient, positive results in skin prick tests were recorded to Dermatophagoides farinae, grass and cat dander, and in sIgE to grass and Dermatophagoides farinae (Table 1).

Discussion

The first experimental study on patch tests with aeroallergens was published in 1937 by Rostenberg and Sulzberger, followed in 1982 by Mitchell (7,8), and various APT techniques have been described in the literature. In order to enhance the penetration of the allergen into the skin, skin abrasion, tape-stripping and sodium lauryl sulfate application have been used (4). Today, APT is performed on non-lesional, untreated skin in remission (4). The European Task Force on Atopic Dermatitis (ETFAD) has developed a standardized APT technique. It consists of a purified allergen preparation in petrolatum, applied in 12 mm diameter Finn chambers mounted on Scanpor tape to non-irritated, non-abraded, or tape-stripped skin on the upper back (8). The test is read after 48 hours and 72 hours, and the reading key is the appearance of erythema and the number and distribution pattern of the papules. Use of aeroallergen concentrations over 5,000 PNU (protein nitrogen units)/g in petrolatum allows for testing on clinically uninvolved skin without potentially irritating tape-stripping (9). Various concentrations of allergens are described in the literature, ranging from 1× SPT (10,000 AU/ml) to 1,000× SPT (4). Van Voorst Vader et al. conclude that the optimal allergen concentration should be 500× SPT with an exposure time of 48 h (10). Langeveld-Wildschut et al. conclude that the concentration should be equal to 1× SPT and, according to their results, increasing the allergen concentration up to 10× SPT did not significantly influence the number of positive results (11). In their study, APTs were performed in 84 patients with atopic dermatitis, 30 control patients with atopic disease, and 85 healthy volunteers, with house dust mite and grass pollen allergens in concentrations of 100, 1,000, 10,000, and 100,000 allergenic units/ml. The authors from Poland also studied the impact of allergen concentration and found that 0.1× SPT was too low, while the 10× SPT concentration gave significantly more positive reactions than 1× SPT (12).

When a biopsy is taken from an allergen-induced eczematous APT site, allergen-specific T cells can be cloned (13). The Th2 cytokine pattern is initially present, and after 48 h the Th1 pattern is predominant (13). An early influx of inflammatory dendritic epidermal cells into lesional skin has been demonstrated (14). When the allergen is captured by IgE molecules, it binds to the IgE receptor on Langerhans cells. Antigen presentation results in a specific T cell reaction which is responsible for the eczematous reaction observed clinically (14). T cells are responsible for the reaction occurring in lesional skin in atopic dermatitis and also in the skin in APT, and macroscopic and microscopic similarities indicate that the APT is a valid model for the inflammation found in atopic dermatitis (15).
According to our results in specific IgE and in skin prick tests, allergy to aeroallergens is common in patients suffering from atopic dermatitis; 12 patients of our study suffer from asthma bronchiale and 21 patients from rhinoconjunctivitis. A positive result in the atopy patch test was recorded in only one patient. This patient suffers from the moderate form of atopic dermatitis, from allergy to aeroallergens according to the results in sIgE and SPT, and from asthma bronchiale and rhinoconjunctivitis; his level of total IgE is 5,000 IU/ml, the highest in comparison with the results of the other patients.

Literature data indicate that positive APT reactions can occur in 15-90% of atopic eczema patients, depending on the methodology used in testing (12,16). Healthy individuals as well as patients with respiratory atopy without a history of eczema have negative APTs or react to house dust with lower frequency and intensity compared with atopic patients (12). Ronchetti et al. found positive APTs with food in 4-11% and with aeroallergens in 4-30% of an unselected children population, depending on the allergen tested (17), and these results are in conflict with other study results (4,18).

For our atopy patch testing, the allergens were used at a concentration of 1× SPT, from commercial extracts Alyostal (Stallergenes, France). This company uses IR units for skin prick tests. By definition, the IR is a measure of allergenic efficacy: an extract with a content of 100 IR provokes, in a skin prick test using a Stallerpoint lancet, a wheal of about 7 mm in 30 patients sensitized to this allergen. Another company producing skin prick tests, ALK-Abelló, expresses allergen units in HEP (histamine equivalent prick), and Sevapharma makes use of PNU/g (protein nitrogen units/g). That is why it is difficult to identify skin prick test solutions with a high concentration of allergens, because each company represented on the market (Sevapharma, Stallergenes, ALK-Abelló) has chosen its own allergen standardization with its own unit of measure. Direct conversion from one unit to another is not possible. What is missing is the 'gold standard' upon which the various 'currencies' orient themselves and which would allow clear comparability. We can conclude that the 1× skin prick test concentration of Stallergenes extracts is not suitable for atopy patch testing.

The European APT model, used with standardization of allergen concentration and vehicle, may provide an important diagnostic tool to select patients for avoidance measures and for procedures of allergen-specific immunotherapy, but the clinical relevance of positive APT reactions awaits standardized provocation and avoidance testing (3).

Conclusion. The aeroallergen concentration of 1× skin prick tests (Alyostal extracts, Stallergenes, France) is too low to confirm the eczematous reaction in atopy patch testing in patients suffering from allergy to inhalant allergens.
2,793
2013-01-01T00:00:00.000
[ "Medicine", "Biology" ]
A Modified IHACRES Rainfall-Runoff Model for Predicting the Hydrologic Response of a River Basin Connected with a Deep Groundwater Aquifer

A flow regime is influenced by the degree of hydrologic connection between surface water and groundwater. As this connection becomes more transient and the basin's runoff response more non-linear, such as for intermittent streams, the need for explicit representation of the groundwater component increases. The present study investigates the connection between the Northern Etna groundwater system and the Alcantara river basin in Sicily (Italy). In particular, the upstream part of the basin, whose flow regime is essentially intermittent, is modeled through a modified version of the IHACRES rainfall-runoff model. The structure of the model includes a routing module formulated as a two-store model, with the upper store simulating the quick component of the runoff and recharging the lower store which, in turn, describes the slow component of the runoff and the groundwater extraction and losses. Both stores are conceptualized as simple linear reservoirs, with the lower one maintaining a continuous water balance account of groundwater storage volumes for the upstream basin area with respect to a control cross-section, assumed to be the stream gauging station. The model is calibrated at the Moio Alcantara cross-section, where daily streamflow data are available. Model calibration and validation are carried out for the periods 1980–1984 and 1986–1988, respectively. A first-order analysis is also performed to assess the sensitivity of model parameters. The adopted configuration is shown to improve model performance with respect to the original IHACRES model, with the proposed formulation able to better capture the interactions between the aquifer and the river.

Introduction

Groundwater extraction from aquifers that are connected with river systems can alter river hydrologic response by reducing the base flow, or low flow, component of streamflow. This may have adverse consequences for riverine ecosystem health and threaten water resources security [1]. To this end, it is particularly important to understand the connectivity between surface and groundwater resources, so that connected systems can be managed as a single system. Deep knowledge of such interaction is indeed required to effectively implement appropriate water policies, as well as to investigate the role played by climatic variability on the observed impacts on water resources availability. The degree of coupling between groundwater and surface water systems defines a flow regime and has implications for the model structure needed to predict the hydrologic response of a river basin. In perennial systems, there is a permanent connection between surface water and groundwater, and good results can be obtained from rainfall-runoff models that do not explicitly represent the groundwater storage. While ephemeral streams are defined as having short-lived flow after rainfall, intermittent streams may maintain flow over some sections even during dry periods, due to the local emergence of the water table over the ground level. Rainfall-runoff models often fail to simulate the hydrologic connection between surface water and groundwater where it tends to be variable in time and space, as for intermittent streams.
This is, for instance, the case of the Alcantara river basin in the Sicily region (Italy), whose upstream part is intermittent, while its middle valley is characterized by perennial surface flows, enriched by spring water arising from the big aquifer of the Northern sector of the Etna volcano. Since significant groundwater extraction is mainly located in the upstream part, an in-depth knowledge of the aquifer-river interaction in this part of the river basin is fundamental for proper water resources management. In a previous study, Aronica and Bonaccorso [2] investigated the impact of future climate change on the hydrologic regime of the Alcantara River basin, by combining stochastic generators of daily rainfall and temperature with the IHACRES rainfall-runoff model under different climatic scenarios, to qualitatively detect modifications in the hydropower potential. In their study, some simplifications to the system configuration were considered to disregard the contribution of the groundwater component, as the emphasis was on simulating surface runoff only. With reference to aquifer-river interaction studies, fully coupled models are usually considered the most appropriate models for water resources management. However, coupled surface-groundwater models present several complexities. One of the main challenges is related to the mathematical representation of the flow and head variability between surface and subsurface systems [3]. The choice of the temporal discretization is critical as well. In fact, surface water models often use small time increments (minutes to days) to capture rapid hydrologic changes, while groundwater models require longer time periods (weeks to months) to simulate slower groundwater movement and solute transport. Other important limitations are the need for a large amount of input data and the time required for model development, calibration, and simulation. Overall, complex models, considering a large number of spatially distributed processes, may be characterized by a high degree of uncertainty associated with model parameters, which may affect the model outputs, thus resulting in lower predictive capability. On the other hand, an alternative modeling approach for aquifer-river interaction processes can consist of a simple, spatially lumped model, using as few parameters as possible to represent the key identifiable river basin features, such as that achieved by combining a rainfall-runoff model with a simple groundwater model. In the present study, a modified IHACRES model is proposed to better describe the complex connection between the Northern Etna groundwater system and the Alcantara river basin. The adopted modeling approach involves an integrated analysis of the basin response to rainfall through the implementation of a two-store routing module, which allows the groundwater component of the flow, as well as groundwater extraction and losses, to be simulated. The modified version of the IHACRES model is calibrated and validated at one of the main cross-sections of the Alcantara river basin, namely Moio Alcantara, where daily streamflow data are available for both model calibration and simulation. The structure of the model also provides the opportunity for dealing with parameter uncertainty when very short and poor-quality data series are available for model calibration and validation [4]. In Section 2, first, the case study used for the model application is illustrated.
Then, after a brief description of previous IHACRES-based models addressing groundwater-surface water interactions, the proposed modeling approach is described in detail. Section 3 presents the main results of the calibration and validation procedures of the new model, as well as of the sensitivity analysis on model parameters through a first-order method. Finally, conclusions are drawn in Section 4.

The Alcantara River Basin

The Alcantara River Basin (see Figure 1) is located in North-Eastern Sicily (Italy), encompassing the north side of Etna Mountain, the tallest active volcano in Europe. The river basin has an extension of about 603 km². The headwater of the river is at 1400 m a.s.l. in the Nebrodi Mountains, while the outlet in the Ionian Sea is reached after 50 km. Table 1 lists the main morphometric and hydrologic characteristics of the entire river basin, as well as of its main sub-basin, at Moio Alcantara.

On the right-hand side of the river, the mountain area is characterized by volcanic rocks with a very high infiltration capacity and by the lack of a hydrographic network. Here, precipitation and snow melting supply a big aquifer whose groundwater springs out at the mid/downstream of the river, mixing with surface water and contributing to feeding the river flow also during the dry season. The left side of the basin is characterized by sedimentary soils (a heterogeneous marly clay complex with poor-yield water-bearing horizons in the rocky levels) where a dense hydrographic network has formed, and gives a seasonal contribution to the river flow, as it follows the annual rainfall variability typical of the Mediterranean climate.

The entire valley hosts many animal species, especially migrating birds. The rich vegetation changes in the different stretches of the river, offering a great variety of plants along the way. All this area takes the name of Alcantara Fluvial Park, a wonderful nature reserve which includes a fluvial, a botanical and a geological park.
Groundwater resources, withdrawn in the upper part of the basin (above the Moio Alcantara cross-section), are mainly used to supply agricultural areas (see Figure 2) and all the municipalities inside the river basin through local aqueducts, as well as the villages along the Ionian coast, from the Alcantara basin itself up to the city of Messina, by the Alcantara aqueduct [5]. In addition, an existing interconnection between the Alcantara aqueduct and the Messina water distribution system could be used to partially supply the city of Messina, in order to lighten the load on other water sources in the near future. Surface water withdrawals, due to agricultural activities, are mainly concentrated in the downstream part, from May to October. Industrial surface water demand is mainly related to paper factories, hydroelectric run-of-river power plants, and mineral extraction. Based on the water balance at the river mouth, about 15% of the annual rainfall recharges the aquifer, primarily during the winter season. The groundwater store has a peak during April-May and then gradually depletes over the summer-fall season until recharge occurs again in winter.

Beyond the prestigious environmental aspect aforementioned, the Alcantara river basin exhibits several environmental problems due to anthropogenic factors, i.e., urban pressure, industrial settlements (sometimes disused), problems related to flood and landslide defense, and water quality deterioration. Furthermore, according to various research studies, climate change effects could exacerbate these criticalities. To this end, 13 municipalities within the Alcantara river basin agreed upon a River Contract on 22 July 2016, with the aim of defining and implementing planning tools for the safeguard and effective management of water resources, the appreciation of fluvial territories and flood control, and of fostering the socio-economic development of the area.
To this end, it is highly important to develop appropriate modeling tools able to simulate the basin's hydrologic response by taking into account the complex aquifer-stream interactions.

The Modified IHACRES Model
The IHACRES rainfall-runoff model proposed by Jakeman and Hornberger [6] describes the basin's behavior well when surface water is the primary component of the flow regime. It is a simple model designed to perform the identification of hydrographs and component flows purely from rainfall, evaporation, and streamflow data. In this model the rainfall-runoff processes are represented by two modules (see Figure 3): a non-linear loss module that transforms precipitation to effective rainfall, considering the influence of temperature, followed by a linear module based on two parallel transfer functions, represented by exponential equations, which transform the effective rainfall into a quick flow and a slow flow component. The sum of the two components gives the modeled streamflow.

Extensions of the IHACRES model have been proposed by Croke et al. [7], Ivkovic [8], and Herron and Croke [9] to take into account the role of the groundwater component in the hydrologic response of connected groundwater-surface water basins, by appropriately representing the effect of changes in groundwater storage and discharge, also due to water extraction. Croke et al. [7] integrated a parsimonious, lumped, and physically-based hillslope model, developed by Sloan [10] for homogeneous aquifers, within the IHACRES rainfall-runoff model.
The discharge formulation within the groundwater model is expressed as a series of exponential terms and is therefore similar to the commonly used form of the unit hydrograph approach implemented in streamflow models such as IHACRES.

Ivkovic [8] proposed a simple coupled aquifer-river model, entitled IHACRES-GW, where the slow transfer function component of the IHACRES model has been modified by incorporating a groundwater storage module. The latter is conceptualized as a single reservoir, whose areal extent is the basin area upstream of a stream gauging station, considered as the basin outlet. The volume of water released from groundwater storage to the river system is represented by the baseflow component of streamflow. Groundwater extraction and other losses behave as additional outflows from the volume of water held in groundwater storage. The volume of water that recharges the groundwater storage is determined by the proportion of effective rainfall partitioned as slow flow. The remaining fraction of effective rainfall is apportioned to surface runoff. The model was developed for use in unregulated, gauged basins in narrow, semi-confined and narrow, shallow unconfined alluvial valleys with strong aquifer-river connectivity, where groundwater extractions predominantly occur upstream of the gauging station.

Herron and Croke [9] formulated a three-store model (IHACRES-3S), where the slow flow pathway comprises two layered stores able to capture non-linear hydrologic response better than the linear routing module of the IHACRES-GW model. The upper store receives the volume of effective rainfall partitioned as slow flow, discharges to the stream and recharges the lower store. Conceptually, it can be viewed as a perched water table which develops in response to rain and tends to be relatively short-lived. The lower store corresponds to the groundwater storage in IHACRES-GW.

Keeping in mind the non-linear response of the Moio Alcantara river basin, as well as the specific features of its aquifer system, we modify the original IHACRES model capitalizing on the works by Ivkovic [8] and Herron and Croke [9], in order to properly describe the groundwater discharge and extraction. The structure of the modified IHACRES model is illustrated in Figure 4. In particular, the routing module is formulated as a two-store model including an upper store to simulate the quick component of the runoff and a lower store that simulates the slow component of the runoff and the groundwater extraction and losses, and which is recharged by the first store as well as by the proportion of effective rainfall partitioned as slow flow.
Recalling the IHACRES model, the non-linear loss module involves the calculation of an index of basin storage s(t) based upon an exponentially decreasing weighting of total rainfall r(t) and temperature T(t) conditions, where s(t) is the basin storage index, or basin wetness/soil moisture index at time t, varying between 0 and 1, τw(T(t)) is a time constant which is inversely related to the temperature declining rate, τ0 is the value of τw(T(t)) for a reference temperature fixed to a nominal value depending on the climate and usually equal to 20 °C for warmer climates [11], c (mm) is a conceptual total storage volume chosen to constrain the volume of effective rainfall to equal runoff, and f (1/°C) is a temperature modulation factor.

The effective rainfall volume U(t) is finally determined as a function of s(t), r(t) and A, where A is the area of the river basin. The model assumes that the partitioning of effective rainfall between the two stores is through the constant percentages x1 and x2 = 1 − x1, respectively. Application of the mass balance equation to the upper store (Equation (4)) gives the relationship between G1(t), the volume of the upper store, and Q0(t), the outflow volume at time t, which is further partitioned into the quick flow component Q1(t) and the recharge component R(t) through the constant percentages y1 and y2 = 1 − y1. Based on the property of the linear reservoir, a relationship between G1(t) and Q0(t) holds (Equation (5)), where a is a dimensionless constant equivalent to the storage coefficient for the upper store. Replacing Equation (5) into Equation (4) results in Equation (6). Multiplying Equation (6) by a allows determining for Q0(t) a functional form similar to the classical exponential transfer function of the linear routing module in the IHACRES model. Thus, after some algebra, a relation can be obtained in which the first parameter is related to the rate of the flow recession and the second one to the height of the unit hydrograph peaks of the quick flow. It is worth recalling that α0 = −exp(−Δt/τ0), τ0 being the time constant describing the decay of the outflow from the upper store.

Similarly, the application of the mass balance equation to the lower store (Equation (9)) involves G2(t), the volume of the lower store, Q2(t), the groundwater discharge to the stream, and L(t), which accounts for both the groundwater extraction and the natural losses at time t. In our formulation, the lower store is recharged by the upper store (i.e., R(t) > 0) only when G1(t) > 0 and G2(t) < 0, and it discharges to the stream (i.e., Q2(t) > 0) only when G2(t) > 0. Following the same line of reasoning as for the upper store, a relationship between G2(t) and Q2(t) can be considered (Equation (10)), where b is a dimensionless constant equivalent to the storage coefficient for the lower store. Replacing Equation (10) into Equation (9) results in Equation (11). Once again, multiplying Equation (11) by b allows determining for Q2(t) a functional form similar to the classical exponential transfer function for the slow component, although without the recharge and groundwater extraction and loss terms, with the first parameter related to the rate of the flow recession and the second one to the height of the unit hydrograph peaks of the slow flow, with α2 = −exp(−Δt/τ2), τ2 being the time constant describing the slow flow decay from the lower store.
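The paper's numbered equations are not reproduced in this excerpt, so the following Python sketch only illustrates the overall structure just described: an IHACRES-style wetness-index loss module feeding two interconnected linear stores, with the total streamflow obtained as the sum of the quick and slow components (as stated in the next paragraph). The function name, the assumed form of the loss module, the unit conversions and all parameter values are illustrative assumptions, not the calibrated model.

```python
import numpy as np

def modified_ihacres_sketch(rain_mm, temp_c, area_km2,
                            tau_w0=30.0, f=0.5, c=300.0,
                            x1=0.6, y1=0.7, a=0.2, b=0.01,
                            extraction=None):
    """Structural sketch of the modified IHACRES model (assumed forms, not the
    paper's exact equations). Upper store = quick flow, lower store = groundwater
    with an extraction/loss series L(t)."""
    rain = np.asarray(rain_mm, float)
    temp = np.asarray(temp_c, float)
    n = len(rain)
    L = np.zeros(n) if extraction is None else np.asarray(extraction, float)

    s, G1, G2 = 0.0, 0.0, 0.0
    Q = np.zeros(n)
    for t in range(n):
        # non-linear loss module (assumed IHACRES wetness-index form)
        tau_w = tau_w0 * np.exp(f * (20.0 - temp[t]))
        s = min(1.0, rain[t] / c + (1.0 - 1.0 / max(tau_w, 1.0)) * s)
        U = s * rain[t] * area_km2 * 1.0e3          # effective rainfall volume (m3)

        # upper store: receives x1*U, drains as a linear reservoir Q0 = a*G1
        G1 += x1 * U
        Q0 = a * G1
        G1 -= Q0
        if G2 < 0.0:                                 # recharge only when the water table
            R, Q1 = (1.0 - y1) * Q0, y1 * Q0         # is below the stream bed (G2 < 0)
        else:
            R, Q1 = 0.0, Q0                          # simplifying assumption: all outflow to stream

        # lower store: recharged by x2*U and R, depleted by Q2 and extraction L
        G2 += (1.0 - x1) * U + R - L[t]
        Q2 = b * G2 if G2 > 0.0 else 0.0
        G2 -= Q2

        Q[t] = Q1 + Q2                               # streamflow = quick + slow component
    return Q
```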
Finally, the streamflow discharge is given by the sum of the quick flow and the slow flow components.

Calibration and Validation of the Modified IHACRES Model
The modified IHACRES model has been applied to the case study described in Section 2.1. The model has in principle a total number of eight independent parameters: three parameters in the loss module (τ0, f, c) and five in the routing module (x1, τ1, y1, τ2, L(t)). However, since the overall groundwater extraction in the river basin roughly amounts to 47.5 Mm3 per year, most of which is concentrated above Moio, and about 32 Mm3 per year are for municipal water use only, while the remainder is mainly for irrigation purposes [5], we have derived a time series for L(t). In particular, 2/3 of the total extracted volume was distributed evenly over the year, and the remainder was added to the irrigation season, lasting from May to October. These values are assumed to also include the natural losses from the aquifer.

Initially, the input data used for running the model were daily point rainfall and temperature data spatially averaged over the considered area. In particular, the influence of elevation has been taken into account to assess the average daily temperature by means of monthly linear regressions. Clearly, this influence also reflects on the amount of precipitation. On the other hand, no direct relationship between rainfall and elevation has been considered since, at several hundred meters above sea level, as the temperature gets cooler with altitude, the maximum precipitable moisture decreases drastically, so that the rainfall-altitude curve bends back upon itself. Therefore, we have preferred to use the standard weighted Thiessen polygons approach to calculate areal daily rainfall, despite the poor spatial coverage of the meteorological stations (see Figure 1).

The model has been calibrated on a four-year daily streamflow discharge time series (1980/81-1983/84) at the Moio Alcantara hydrometric station. The calibration period starts in October, at the beginning of the hydrological year, to set the initial condition of the soil storage index to zero. Model calibration has been manually carried out based on visual inspection of modeled versus observed streamflow data, Q_obs, as well as by minimizing the relative bias (RB), namely the cumulative difference between modeled and observed streamflow normalized by the cumulative observed streamflow:

RB = Σ_{t=1}^{n} [Q_mod(t) − Q_obs(t)] / Σ_{t=1}^{n} Q_obs(t) (14)

and by maximizing the Nash-Sutcliffe Efficiency (NSE), that is:

NSE = 1 − Σ_{t=1}^{n} [Q_obs(t) − Q_mod(t)]² / Σ_{t=1}^{n} [Q_obs(t) − Q̄_obs]² (15)

where Q_mod is the modeled streamflow and Q̄_obs is the mean of the observed streamflow.

In Table 2 the parameter values of the modified IHACRES and the corresponding performance indicators RB and NSE are reported. Figure 5a shows the comparison between the observed and modeled streamflow for the calibration period, by having considered climate data for effective rainfall generation through the loss module.
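The two indicators in Equations (14) and (15) follow the conventional definitions of relative bias and Nash-Sutcliffe efficiency, and the short sketch below simply mirrors them; the function names are illustrative, and, as noted later in the text, the same functions can be applied to the filtered and modeled baseflow series to obtain the slow-flow indicators RB_s and NSE_s.

```python
import numpy as np

def relative_bias(q_obs, q_mod):
    """Relative bias (RB): total volume error normalised by the observed volume."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    return np.sum(q_mod - q_obs) / np.sum(q_obs)

def nash_sutcliffe(q_obs, q_mod):
    """Nash-Sutcliffe Efficiency (NSE): 1 is a perfect fit, 0 equals the mean predictor."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    return 1.0 - np.sum((q_obs - q_mod) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
```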
Figure 5. Observed versus modeled streamflow for the calibration period, by using as model input (a) effective rainfall generated from climate data, or (b) effective rainfall generated from observed streamflow.

Despite the relatively good values of the performance indicators, it can be observed that flow peaks are not properly represented. In particular, the resulting modeled streamflow time series appears to generally under-predict the frequency of quick flow events; besides, in some cases modeled peaks occur when there is no observed peak flow. This might be due to the poor spatial coverage of the meteorological stations, which is not able to represent the non-uniform rainfall pattern over the basin. To work around this issue, following Ivkovic [8], the effective rainfall time series has been generated from the observed streamflow record, by using a baseflow filter to separate its quick and slow components. First, a running filter of width equal to five time steps was applied whereby, at each time step t, the minimum of the observed flows was determined. The resulting series was then smoothed using a running average filter of the same width. The filtered series, representing the baseflow contribution to streamflow, was then subtracted from the total streamflow data, yielding the quick flow contribution to streamflow. The effective rainfall was then calculated through Equation (16) as a function of Q′0(t), the filtered quick flow at time step t. Therefore, a new calibration has been carried out, yielding a new set of model parameters for the routing module (see Table 3). Figure 5b illustrates the comparison between the observed and modeled streamflow by having considered effective rainfall derived through Equation (16).

As expected, Figure 5b shows a better agreement between the observed and modeled streamflow with respect to Figure 5a, with special reference to the timing and, to a large extent, the values of the peaks. This good match is confirmed by the performance indicators for the total streamflow reported in Table 3. In addition, in order to highlight the performance of the model in simulating the aquifer-river interactions, RB and NSE have also been calculated for the slow flow component, by replacing the observed and modeled total streamflow with the observed (filtered) and modeled baseflow in Equations (14) and (15). The values of the relative bias and the Nash-Sutcliffe efficiency for the slow flow, called respectively RB_s and NSE_s in Table 3, suggest that the model is also able to capture the recession volumes of baseflow satisfactorily at the daily time step, although the volume of baseflow predicted over the calibration period is about 20% greater than the volume corresponding to the filtered observed flow.

For the validation of the model, the daily streamflow discharge time series observed at the Moio Alcantara hydrometric station during the period between October 1986 and September 1988 is used.
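The baseflow separation described above (a five-step running minimum followed by a five-step running average, with the quick flow obtained as the residual) can be sketched as follows. The handling of the series edges (padding) and the capping of baseflow at the total flow are implementation choices that the text does not specify, and Equation (16), which converts the filtered quick flow into effective rainfall, is not reproduced here.

```python
import numpy as np

def baseflow_filter(q_obs, width=5):
    """Separate baseflow and quick flow with the simple filter described in the text."""
    q_obs = np.asarray(q_obs, float)
    n, half = len(q_obs), width // 2
    padded = np.pad(q_obs, half, mode="edge")
    run_min = np.array([padded[i:i + width].min() for i in range(n)])      # running minimum
    padded_min = np.pad(run_min, half, mode="edge")
    baseflow = np.array([padded_min[i:i + width].mean() for i in range(n)])  # running average
    baseflow = np.minimum(baseflow, q_obs)        # assumption: baseflow cannot exceed total flow
    quick_flow = q_obs - baseflow
    return baseflow, quick_flow
```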
Once again, effective rainfall generated from the observed streamflow has been considered for the validation, to avoid that the uncertainty associated with the model inputs is transferred to the model outputs, resulting in lower predictive capability. Results are shown in Figure 6a. Again, the modeled time series seems to capture the timing of the observed peak discharges satisfactorily. However, a closer inspection reveals that the model does not reproduce all parts of the flow regime equally well. In fact, the overall predictive capability of the model deteriorates in simulating the recession curves, as confirmed by the values of the performance indicators in Table 4. Nonetheless, the efficiency of the model can still be considered satisfactory also with respect to the simulation of the slow flow component, since NSE_s is greater than 0.5 [12,13]. In addition, the value of RB_s reveals that the error in the assessment of the baseflow volume decreases in absolute terms in comparison to the calibration.

Finally, in order to objectively judge the added value of the proposed model, the original IHACRES model, with two simple linear reservoirs working in parallel and with no recharge and no groundwater extraction, has also been run. The results for the validation period are shown in Figure 6b, while the performance indicators are listed in Table 5. It is evident that the proposed modified IHACRES outperforms the traditional IHACRES in reproducing the interaction between surface water and groundwater.

Sensitivity Analysis of Modified IHACRES Model Parameters
In order to understand and assess the sensitivity of the model to its parameters, a first-order analysis was carried out. First-order methods estimate uncertainty in model output assuming that the effects of the individually varying parameters contribute positively to the overall model uncertainty. Using an extension of sensitivity analysis, first-order methods predict model variability as a sum of the parameters' variances [14]. In this study, the first-order equation for quantifying the system variance is used in its traditional form and it is applied to the calibrated parameters of the routing module (τ1, τ2, x1, y1):

σ²_Q = Σ_i (∂Q/∂i)² σ²_i

where Q represents the dependent variable, in our case the total streamflow discharge at the Moio cross-section, σ²_Q is the variance of Q, the sum runs over the n independent model parameters i (that is, τ1, τ2, x1, y1), σ²_i is the variance of parameter i, and ∂Q/∂i is the partial derivative of the dependent variable Q with respect to each parameter i. This method gives an overall assessment of the model's sensitivity to its parameters. In Figure 7, a short subsample of Q(t) (200 days), together with its range of variation, is shown.
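A minimal sketch of this first-order propagation is given below, with the partial derivatives approximated by central finite differences. The parameter variances and the perturbation size are not stated in the text and must be assumed, and `model` is a placeholder for a routine that runs the calibrated routing module and returns the simulated streamflow series.

```python
import numpy as np

def first_order_variance(model, params, param_sd, eps=1e-3):
    """First-order (Taylor) propagation of parameter uncertainty to the model output:
    var(Q) ~ sum_i (dQ/dp_i)^2 * var(p_i), with finite-difference derivatives."""
    var_contrib = {}
    for name, value in params.items():
        step = eps * (abs(value) if value != 0 else 1.0)
        up, dn = dict(params), dict(params)
        up[name] = value + step
        dn[name] = value - step
        dq_dp = (np.asarray(model(up), float) - np.asarray(model(dn), float)) / (2.0 * step)
        var_contrib[name] = (dq_dp ** 2) * param_sd[name] ** 2   # per-parameter contribution
    total_var = sum(var_contrib.values())                        # time series of var(Q)
    return var_contrib, total_var
```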
To better understand how each parameter influences the model's output, another analysis is carried out. As proposed by Sobol [15], for each parameter i a first-order sensitivity index, or Sobol index S_i, is calculated. The value of this first-order sensitivity index for each parameter is reported in Table 6 and a graphic representation of the Sobol index values is shown in Figure 8. Sobol indices provide information about the sensitivity of the model to each parameter. From Table 6 and Figure 8 it can be observed that the model shows a very low sensitivity to the parameters τ2 and y1. Conversely, the model appears more sensitive to the variation of the parameters x1 and τ1, representing the share-out parameter of the effective rainfall u(t) and the storage constant of the quick flow conceptual reservoir, respectively. This result highlights how important it is to deeply understand the main physical characteristics of the basin under investigation, in order to keep the model simulation dynamics and outputs under control and to reduce the uncertainties in simulations due to the estimation of model parameters. To this end, parameter estimation could benefit from the availability of other measured data, such as spring discharges and soil moisture, as well as from a separate calibration to reduce model uncertainties.
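Building on the first-order propagation sketched above, a first-order (Sobol-type) index can be obtained by normalising each parameter's variance contribution by the total output variance. How the per-time-step variances are aggregated over the simulation period is not stated in the text, so the summation used here is an assumption, and the usage lines and numbers in the comments are purely illustrative.

```python
import numpy as np

def first_order_sobol_indices(var_contrib, total_var):
    """Normalise per-parameter first-order variance contributions into
    Sobol-type indices, S_i = V_i / V_total, aggregated over the period."""
    v_total = float(np.sum(total_var))
    return {name: float(np.sum(v_i)) / v_total for name, v_i in var_contrib.items()}

# Illustrative usage with the propagation sketch above (names are assumptions):
# contrib, total = first_order_variance(run_routing_module, calibrated_params, assumed_sd)
# print(first_order_sobol_indices(contrib, total))   # e.g. {'x1': 0.55, 'tau1': 0.30, ...}
```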
Conclusions
A modified version of the IHACRES rainfall-runoff model has been proposed to simulate the hydrologic connection between surface water and groundwater in intermittent streams. The proposed model has been developed in the Moio Alcantara river basin, whose groundwater component is associated with a large deep aquifer which may have a critical influence on soil moisture, especially during long dry periods. The use of a spatially lumped conceptual model, which includes an explicit representation of the interaction of the deep groundwater with the river and the soil water storage, allows limiting the number of parameters necessary to represent the key identifiable river basin features. The results presented in this paper show improvements in model performance with respect to the original version of the IHACRES model. In particular, the model appears capable of better simulating not only the flood peaks but also the recession curves describing the groundwater aquifer contribution at the end of the wet seasons. Although a closer inspection reveals some over- or under-estimates of flow, overall the results are encouraging. To this end, it is worth underlining that the simulation is carried out at the daily time scale. However, in many applications the main concern is in modeling monthly streamflow rather than daily discharges, which usually leads to higher values of the performance indicators. A first-order sensitivity analysis, carried out to assess the sensitivity of the model to its parameters, reveals a stronger influence of the parameters x1 and τ1, representing respectively the share-out parameter of the effective rainfall u(t) and the storage constant of the quick flow conceptual reservoir.
The conversion from two exponential stores in parallel, as in the original IHACRES configuration, to two interconnected stores, with the lower groundwater store recharged by a constant proportion of the effective rainfall and partly by the upper store when the water table is below the elevation of the stream bed (i.e., G2(t) < 0), and depleted by discharges to the stream, extractions and other natural losses, involves some assumptions about the system. More specifically, we have assumed that the recharge between the upper and lower stores, R(t), increases linearly with increasing G1(t), through the parameter y2 = 1 − y1. An alternative approach consists in deriving for R(t) a soil moisture threshold, g1, below which recharge to the deeper aquifer decreases [9]. However, this solution, as well as other potential refinements of the model parameters to improve the model performance, clearly requires a greater knowledge of the river basin properties than the one we currently have. Overall, the proposed model can be considered a first relevant step towards the implementation of relatively simple conceptual models, easier to use than complex, parameter-intensive models, for effective water resources management in deep groundwater-fed basins. It should be stressed that sufficient calibration data are required for a valid representation of the connection among deep groundwater storage, soil water storage, and surface runoff. In particular, measurements of spring discharges and soil moisture are deemed necessary for the proper calibration and implementation of these types of models to predict the basin hydrologic response into the future. Once these measurements are available, it will be possible to extend the application of the proposed model to the whole Alcantara basin, in order to properly simulate the effect of the interaction between surface water and groundwater in the downstream part of the basin. Future studies will also investigate the possibility of applying this model to other case studies with similar characteristics and with more available data, in order to test the model performance thoroughly.
10,056.2
2019-09-28T00:00:00.000
[ "Environmental Science", "Engineering" ]
Analysis on the Trade Structural Competitiveness in Manufacturing Industry between Guangzhou and “the Belt and Road” Participating Countries Based on Lafay Index

The economic development of Guangzhou presents an export-oriented characteristic. Therefore, participating in the construction of the Belt and Road initiative is a key path for Guangzhou's manufacturing to upgrade. This paper adopts the Lafay index to measure the structural competitiveness of trade between China and countries along the B&R and finds that Guangzhou has a long-term and stable comparative advantage in the clothing and textile industry, the metal products industry and the leather products industry, but a long-term disadvantage in the metal smelting industry, the chemical manufacturing industry and non-metallic mineral products. It also shows a high degree of intra-industry trade in the food processing industry and the sports and entertainment industry.

INTRODUCTION
Over the past 40 years of China's reform and opening up, the industrial restructuring, transformation and upgrading in Guangdong Province, and especially the rapid development of Guangzhou's manufacturing industry, which has continuously integrated into the global economy, has brought about a "Guangdong Miracle" (Han et al, 2015) [1]. Guangzhou's GDP climbed to 1.96 trillion Yuan in 2016 from 4.3 billion in 1978, increasing by 455 times. However, Guangzhou, as a key city during the opening-up, presents an export-oriented characteristic in its economic development, accompanied by a rather strong external dependence, which made it more strongly affected by the international financial crisis (Jiang et al, 2012) [2]. Guangzhou's economy is also entering a critical transformation period, during which there exist an excess supply in traditional industries, a lack of impetus for emerging and high-technology industries, a temporary shortage in industrial development and an improper industrial structure. In 2015, the gross output value of Guangzhou's manufacturing industry was around 1.63 trillion Yuan, with only a 2% year-on-year increase, accounting for 13.8% of the gross output value of Guangdong Province's manufacturing industry. The manufacturing value added was 400 billion Yuan, growing by 2.2% from 2014 (Figure 1, data from the Guangzhou Statistics Bureau). Though Guangzhou's manufacturing industry is scaling up, its growth has entered a period of steady, medium-low speed growth. Under the "new normal", the structure of Guangzhou's manufacturing industry needs to be optimized, and its development model needs to shift to an innovation-driven model. Seeking wider cooperation potential is an important channel for urging Guangzhou's manufacturing development model to shift from factor-driven and investment-driven to innovation-driven. The Central Committee of the CPC has put forward major initiatives such as "the Belt and Road" initiative, the "Going Global" strategy and the "Made in China 2025" program, which bring new opportunities for Guangzhou's manufacturing industry to enhance its innovative capability. Facing this turning point, Guangzhou was among the first cities approved as "Made in China 2025" pilot demonstration cities. The upgrading of Guangzhou's manufacturing industry is imperative.
In the meantime, the timely proposal of "Made in Guangzhou 2025" will point the direction for the transformation and upgrading of manufacturing in Guangzhou, which will help promote the stable and rapid development of manufacturing capacity and realize the strategic goal of adjusting the industrial structure and stabilizing economic growth. Based on the background above, analysing intra-industry trade and comparative advantage in the manufacturing industry between Guangzhou and participating countries has great referential value for promoting production capacity cooperation in Guangzhou.

Analysis on Trade Structural Competitiveness in Manufacturing Industry
It is widely accepted in academia to apply the RSCA and TC indices to analyse the circumstances and tendency of the comparative advantage of inter-regional industries. However, the measurement of the RSCA index puts emphasis on inter-industry trade, while it fails to excavate information about intra-industry trade. In the sphere of manufacturing industry trade, the contribution from intra-industry trade cannot be ignored. Further research is required on how the comparative advantages of the manufacturing industry affect intra-industry trade and on completely revealing the status of the comparative advantage of the manufacturing industry between Guangzhou and the "B&R" participating countries. Lafay (1992) [3] brought forward the Lafay index, which simultaneously takes the trade flows of exports and imports into consideration, superior to the traditional RCA index. Furthermore, different from the TC index, in the Lafay index the trade share is weighted based on the Normalized Trade Balance (NTB), which consists of classified trade volume and total trade volume. It overcomes the distortion of calculation results owing to economic fluctuation factors (Wu et al, 2012) [4]. Thus this paper uses the method of Zaghini et al. (2005) [5] for reference and applies the Lafay index to measure the comparative advantage in the manufacturing industry between Guangzhou and B&R participating countries.

Lafay Index Construction
Compared with the RCA index, the Lafay index can measure the comparative advantage and the intra-industry trade change in the manufacturing sector between Guangzhou and participating countries. The formula is:

LFI_j = 100 × [ (Xj − Mj)/(Xj + Mj) − Σ_k(Xk − Mk)/Σ_k(Xk + Mk) ] × (Xj + Mj)/Σ_k(Xk + Mk)

where Xj represents the export volume of the j-th manufacturing sector towards participating countries, Mj represents the import volume of the j-th manufacturing sector towards participating countries, N is the total number of manufacturing sectors, and the sums over k run from 1 to N. Interpreting Xj and Mj as the imports and exports of "the Belt and Road" related industries in Guangzhou, the index reflects the comparative advantage of Guangzhou relative to participating countries. The term (Xj − Mj)/(Xj + Mj) in the Lafay index is the competitive index of Guangzhou's manufacturing sector j relative to that of participating countries, while the second term is the competitive index accumulated over all the manufacturing sectors; subtracting the two gives the extent of their deviation. The weight (Xj + Mj)/Σ_k(Xk + Mk) is the proportion of the sector's import and export trade volume in the total trade volume between Guangzhou's manufacturing industry and participating countries. The larger this proportion, the higher the absolute value of the Lafay index of the sector. Whether a manufacturing sector has a comparative advantage can be judged by the sign of its Lafay index value. If the value is positive, the manufacturing sector has a high degree of specialization and a comparative advantage relative to that of participating countries; conversely, if the value is negative, the sector has a large import share and is at a comparative disadvantage. The Lafay index can also reflect the degree of intra-industry trade: the further the value strays from zero, the lower the degree of intra-industry trade.
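The small sketch below computes the index for a set of sectors in a given year, following the Zaghini et al. (2005)-style formulation written out above; the 100 scaling factor and the exact normalization are assumptions of that formulation, and the trade volumes in the example call are invented purely for illustration.

```python
import numpy as np

def lafay_index(exports, imports):
    """Lafay index per sector: sector trade balance minus the overall balance,
    weighted by the sector's share of total trade (and scaled by 100)."""
    x = np.asarray(exports, float)
    m = np.asarray(imports, float)
    sector_balance = (x - m) / (x + m)                 # (Xj - Mj) / (Xj + Mj)
    overall_balance = (x - m).sum() / (x + m).sum()    # aggregate competitive index
    weight = (x + m) / (x + m).sum()                   # sector share of total trade
    return 100.0 * (sector_balance - overall_balance) * weight

# Illustrative call with made-up trade volumes for three sectors:
# lafay_index([120, 30, 10], [40, 35, 60])
```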
Analysis on Comparative Advantage Change in Manufacturing Industry between Guangzhou and B&R Participating Countries based on Lafay Index
Using Shang's (2010) [6] method for reference, Guangzhou's manufacturing industry can be divided into 11 sectors: (01) food processing, (02) timber processing, (03) metal smelting, (04) chemical products, (05) nonmetallic mineral products, (06) metal products, (07) equipment manufacturing, (08) leather products, (09) clothing and textile, (10) paper printing and (11) sports and entertainment. We calculated the Lafay index of each of Guangzhou's 11 manufacturing sectors relative to those of participating countries. The data sample spans from 1995 to 2014. The data come from the WTO Database, World Bank Open Data, the Guangzhou Statistical Yearbook and relevant Statistical Bulletins. Table 1 reports the resulting Lafay index values; the order numbers of the 11 sectors above correspond to the sector numbers in Table 1.

The results show that in 2014 the Lafay index value of the (09) clothing and textile industry is the largest. The second largest value appears in the (07) equipment manufacturing industry, followed by metal products, while the (03) metal smelting industry comes in last. The value of the (09) clothing and textile industry in Guangzhou escalated from 1.528 in 1995 to 4.265, reflecting strong advantages in the division of labor and rapid development. The value of the (07) equipment manufacturing industry was 0.057 in 1995 and has since taken on a rising trend. Guangzhou's (07) equipment manufacturing industry and (09) clothing and textile industry not only show a revealed comparative advantage, but also play an important role in Guangzhou's manufacturing trade along "the Belt and Road". Besides, the index value of Guangzhou's (03) metal smelting industry is quite low. Though it has risen, with fluctuations, from -6.125 in 1995, it is still -3.151 in 2014. This indicates that the import trade volume of Guangzhou's metal smelting industry from participating countries exceeded the export trade volume, running a large trade deficit, although the deficit is gradually shrinking. The absolute values of the Lafay index of the (08) leather products industry, (11) sports and entertainment industry, (01) food processing industry, (02) timber processing industry and (10) paper printing industry are small, close to zero. This shows a high degree of intra-industry trade between these sectors and participating countries, because for these sectors, relative to participating countries, the ratio of the net trade balance to the import and export volume is rather low. Also, the ratio of these sectors' total imports and exports to Guangzhou's overall import and export volume is relatively low.

Figure 2 is drawn from the data in Table 1, intuitively reflecting the time trend and distribution of the Lafay index of each manufacturing sector between Guangzhou and "B&R" participating countries. As shown in Figure 2, the clothing and textile industry, the equipment manufacturing industry and the metal products industry are distant from the zero axis, which shows that the degree of intra-industry trade of these three sectors is low, with trade mainly taking the form of inter-industry trade.
The equipment manufacturing industry and the clothing and textile industry score far above the zero line, which means they have high comparative advantages, while the metal smelting industry and the nonmetallic mineral products industry are far below the zero line, meaning their comparative advantages are low. The rising trend of the Lafay curve of the equipment manufacturing industry shows that the comparative advantage of Guangzhou's equipment manufacturing sector is rising. The Lafay curve of the chemical products industry has seen a decline, which means its comparative disadvantage is enlarging. The Lafay index curves of Guangzhou's food processing sector and sports and entertainment sector are close to the zero line, meaning their imports from and exports to participating countries are roughly balanced, with a high degree of intra-industry trade.

Figure 2. The trend of comparative advantage change in manufacturing industry between Guangzhou and B&R participating countries from 1995 to 2014.

Conclusion and Suggestions
From the above analysis we can conclude that the comparative advantage of Guangzhou's manufacturing industry is concentrated in the equipment manufacturing industry and the clothing and textile industry. However, the overall comparative advantage is not sufficient. In fact, the R&D investment in Guangzhou's manufacturing industry is relatively low, leading to fewer intellectual property rights and fewer new products, as well as a high degree of product homogenization. The enterprises show deficiencies in key technical reserves and innovation capacity, which leads to a lack of competitive advantage and severely constrains the sustainable development of Guangzhou's manufacturing industry (HE, 2013) [7]. Faced with new opportunities, the Guangzhou government should take corresponding countermeasures in policy arrangements and talent team building. Firstly, Guangzhou should draw up a scientific overall development plan for advanced manufacturing, drive technological innovation in industrial enterprises and strengthen support for new industrial formats. Concrete steps could include integrating existing special fiscal funds, introducing social capital, setting up a development fund, and implementing special projects such as supporting core-component technical breakthroughs, carrier construction and the cultivation of backbone enterprises in the manufacturing industry. Secondly, the Guangzhou government should encourage and guide famous-brand products to register in international markets in a timely manner, participating in international competition and exploring international markets, supported by appropriate rewards and subsidies. The government should also encourage existing manufacturing firms to establish standardized manufacturing, improve product quality and build brand reputation. Lastly, in view of the current shortage of top talent in the manufacturing industry, Guangzhou needs to attract a group of high-level personnel with excellent quality and outstanding achievements to settle in Guangzhou. On the one hand, this can be achieved through a primary focus on establishing a "Belt and Road" talent communication platform, ensuring that the talent needs of Guangzhou's "going global" strategy and of global production capacity cooperation can be met.
On the other hand, the government should innovate the talent incentive mechanism, organically integrating personal and business interests, and focus on training innovative science and technology talents.
2,780.4
2018-01-01T00:00:00.000
[ "Economics" ]
Interactive comment on “Light-induced protein nitration and degradation with HONO emission”

This MS reports on HONO formation resulting (mostly) from the interaction of NO2 with a particular protein under visible illumination in a flow tube reactor. The HONO released to the gas phase is formed both by photolysis of nitrated tyrosine and a Langmuir-Hinshelwood surface reaction involving NO2 uptake; this latter process forms HONO even in the dark. For both dark and illuminated channels, there is a positive dependence on RH which suggests that water is involved somehow, although this may be by changing the protein surface morphology rather than as a chemical promoter. The experiments are well constructed and the results are of some interest. I do have a few comments for the authors' consideration, however:

Abstract. Proteins can be nitrated by air pollutants (NO2), enhancing their allergenic potential. This work provides insight into protein nitration and subsequent decomposition in the presence of solar radiation. We also investigated light-induced formation of nitrous acid (HONO) from protein surfaces that were nitrated either online with instantaneous gas-phase exposure to NO2 or offline by an efficient nitration agent (tetranitromethane, TNM). Bovine serum albumin (BSA) and ovalbumin (OVA) were used as model substances for proteins. Nitration degrees of about 1 % were derived applying NO2 concentrations of 100 ppb under VIS/UV illuminated conditions, while simultaneous decomposition of (nitrated) proteins was also found during long-term (20 h) irradiation exposure. Measurements of gas exchange on TNM-nitrated proteins revealed that HONO can be formed and released even without contribution of instantaneous heterogeneous NO2 conversion. NO2 exposure was found to increase HONO emissions substantially. In particular, a strong dependence of HONO emissions on light intensity, relative humidity, NO2 concentrations and the applied coating thickness was found. The 20 h long-term studies revealed sustained HONO formation, even when concentrations of the intact (nitrated) proteins were too low to be detected after the gas exchange measurements. A reaction mechanism for the NO2 conversion based on Langmuir-Hinshelwood kinetics is proposed.

Introduction
Primary biological aerosols, or bioaerosols, including proteins, from different sources and with distinct properties are known to influence atmospheric cloud microphysics and public health (Lang-Yona et al., 2016; D'Amato et al., 2007; Pummer et al., 2015). Bioaerosols represent a diverse subset of atmospheric particulate matter that is directly emitted in the form of active or dead organisms, or fragments, like bacteria, fungal spores, pollens, viruses and plant debris. Proteins are found ubiquitously in the atmosphere as part of these airborne, typically coarse-sized biological particles (diameter > 2.5 µm), as well as in fine particulate matter (diameter < 2.5 µm) associated with a host of different constituents such as polymers derived from biomaterials and proteins dissolved in hydrometeors, mixed with fine dust and other particles (Miguel et al., 1999; Riediker et al., 2000; Zhang and Anastasio, 2003).
Proteins contribute up to 5 % of particle mass in airborne particles (Franze et al., 2003a; Staton et al., 2015; Menetrez et al., 2007) and are also found at the surfaces of soils and plants. Proteins can be nitrated and are then likely to enhance allergic responses (Gruijthuijsen et al., 2006). Nitrogen dioxide (NO2) has emerged as an important biological reactant and has been shown to be capable of electron (or H atom) abstraction from the amino acid tyrosine (Tyr) to form TyrO• in aqueous solutions (the tyrosine phenoxyl radical, also called the tyrosyl radical; Prütz et al., 1984, 1985; Alfassi, 1987; Houée-Lévin et al., 2015), which subsequently can be nitrated by a second NO2 molecule. Shiraiwa et al. (2012) observed nitration of protein aerosol, but not solely with NO2 in the gas phase, and demonstrated that simultaneous O3 exposure of airborne proteins in dark conditions can significantly enhance NO2 uptake and consequent protein nitration (3-nitrotyrosine formation) by way of direct O3-mediated formation of the TyrO• intermediate. A connection between increased allergic diseases and elevated environmental pollution, especially traffic-related air pollution, has been proposed (Ring et al., 2001). Tyrosine is one of the photosensitive amino acids and is subject to direct and indirect photo-degradation under solar-simulated conditions (Boreen et al., 2008), especially mediated by both UV-B (λ 280-320 nm) and UV-A (λ 320-400 nm) radiation (Houée-Lévin et al., 2015; Bensasson et al., 1993). Direct light absorption, or absorption by adjacent endogenous or exogenous chromophores and subsequent energy transfer, results in an electronically excited state of tyrosine (for details see Houée-Lévin et al., 2015, and references therein). If the triplet state of tyrosine is generated, it can undergo electron transfer reactions and deprotonation to yield TyrO• (Fig. 1; Bensasson et al., 1993; Davies, 1991; Berto et al., 2016). Regardless of how the tyrosyl radical is generated, it can be nitrated by reaction with NO2, as well as hydroxylated or dimerized (Shiraiwa et al., 2012; Reinmuth-Selzle et al., 2014; Kampf et al., 2015). With respect to atmospheric chemistry, Bejan et al. (2006) have shown that photolysis of ortho-nitrophenols (as is the case for 3-nitrotyrosine) can generate nitrous acid (HONO). HONO is of great interest for atmospheric composition, as its photolysis forms OH radicals, which are the key oxidant for degradation of most air pollutants in the troposphere (Levy, 1971). In the lower atmosphere, up to 30 % of the primary OH radical production can be attributed to photolysis of HONO, especially during the early morning when other photochemical OH sources are still small (Reaction R1; Kleffmann et al., 2005; Alicke et al., 2002; Ren et al., 2006; Su et al., 2008; Meusel et al., 2016). HONO can be directly emitted by combustion of fossil fuels or formed by gas-phase reactions of NO and OH (the backwards reaction of Reaction R1) and heterogeneous reactions of NO2 on wet surfaces according to Reaction (R2). On carbonaceous surfaces (soot, phenolic compounds) HONO is formed via electron or H transfer reactions (Reactions R3 and R4-R6; Kalberer et al., 1999; Kleffmann et al., 1999; Gutzwiller et al., 2002; Aubin and Abbatt, 2007; Han et al., 2013; Arens et al., 2001, 2002; Ammann et al., 1998, 2005).
Previous atmospheric measurements and modeling studies have shown unexpectedly high HONO concentrations during daytime, which can also contribute to aerosol formation through enhanced oxidation of precursor gases (Elshorbany et al., 2014). Measured mixing ratios are typically about 1 order of magnitude higher than simulated ones, and an additional source of 200-800 ppt h−1 would be required to explain observed mixing ratios (Acker et al., 2006; Li et al., 2012; Su et al., 2008; Elshorbany et al., 2012; Meusel et al., 2016), indicating that estimates of daytime HONO sources are still under debate. It was suggested that HONO arises from the photolysis of nitric acid and nitrate or by heterogeneous photochemistry of NO2 on organic substrates and soot (Zhou et al., 2001, 2002, 2003; Villena et al., 2011; Ramazan et al., 2004; George et al., 2005; Sosedova et al., 2011; Monge et al., 2010; Han et al., 2016). Stemmler et al. (2006, 2007) found HONO formation on light-activated humic acid, and field studies showed that HONO formation correlates with aerosol surface area, NO2 and solar radiation (Su et al., 2008; Reisinger, 2000; Costabile et al., 2010; Wong et al., 2012; Sörgel et al., 2015) and is increased during foggy periods (Notholt et al., 1992). Another proposed source of HONO is the soil, where it has been found to be co-emitted with NO by soil biological activities (Oswald et al., 2013; Su et al., 2011; Weber et al., 2015). In view of light-induced nitration of proteins and HONO formation by photolysis of nitrophenols, light-enhanced production of HONO on protein surfaces can be anticipated, which, to the best of our knowledge, has not been studied before. This work aims to provide insight into protein nitration, the atmospheric stability of the nitrated protein and the respective formation of HONO from protein surfaces that were nitrated either offline in the liquid phase prior to the gas exchange measurements or online with instantaneous gas-phase exposure to NO2, with particular emphasis on environmental parameters like light intensity, relative humidity (RH) and NO2 concentrations. Bovine serum albumin (BSA), a globular protein with a molecular mass of 66.5 kDa and 21 tyrosine residues per molecule, was chosen as a well-defined model substance for proteins. Nitrated ovalbumin (OVA) was used to study the light-induced degradation of proteins that were nitrated prior to the gas exchange measurements. This well-studied protein has a molecular mass of 45 kDa and 10 tyrosine residues per molecule.

Protein preparation and analysis
BSA (Cohn V fraction, lyophilized powder, ≥ 96 %; Sigma Aldrich, St. Louis, Missouri, USA) or nitrated OVA was dissolved in pure water (18.2 MΩ cm) and coated onto the glass tube. The nitration of OVA was described previously (Yang et al., 2010; Zhang et al., 2011). Briefly, OVA (grade V, A5503-5G, Sigma Aldrich, Germany) was dissolved in phosphate-buffered saline PBS (P4417-50TAB, Sigma Aldrich, Germany) to a concentration of 10 mg ml−1. 50 µL tetranitromethane (TNM; T25003-5G, Sigma Aldrich, Germany) dissolved in methanol 4 % (v/v) were added to a 2.5 mL aliquot of the OVA solution and stirred for 180 min at room temperature. Please note that TNM is toxic if swallowed, can cause skin, eye and respiratory irritation, is suspected of causing cancer and can cause fires or explosions. Size exclusion chromatography columns (PD-10 Sephadex G-25 M, 17-0851-01, GE Healthcare, Germany) were used for cleanup.
The eluate was dried in a freeze dryer and stored in a refrigerator at 4 °C. After the flow-tube experiments (see below) the proteins were extracted from the tube with water and analyzed with liquid chromatography (HPLC-DAD; Agilent Technologies 1200 series) according to Selzle et al. (2013). This method provides a straightforward and efficient way to determine the nitration of proteins. Briefly, a monomerically bound C18 column (Vydac 238TP, 250 mm × 2.1 mm inner diameter, 5 µm particle size; Grace Vydac, Alltech) was used for chromatographic separation. Eluents were 0.1 % (v/v) trifluoroacetic acid in water (LiChrosolv) (eluent A) and acetonitrile (ROTISOLV HPLC gradient grade, Carl Roth GmbH + Co. KG, Germany) (eluent B). Gradient elution was performed at a flow rate of 200 µL min−1. ChemStation software (Rev. B.03.01, Agilent) was used for system control and data analysis. For each chromatographic run, the solvent gradient started at 3 % B, followed by a linear gradient to 90 % B within 15 min, flushing back to 3 % B within 0.2 min and maintaining 3 % B for an additional 2.8 min. The column re-equilibration time was 5 min before the next run. Absorbance was monitored at wavelengths of 280 nm (tyrosine) and 357 nm (nitrotyrosine). The sample injection volume was 10-30 µL. Each chromatographic run was repeated three times. The protein nitration degree (ND), which is defined as the ratio of nitrated tyrosine to all tyrosine residues, was determined by the method of Selzle et al. (2013). Native and untreated BSA did not show any degree of nitration.

2.2 Coated-wall flow tube system
Figure 2 shows a flowchart of the setup of the experiment (Figure 2 caption: nitrogen passes a heated water bath to humidify the gas and a HONO scrubber to eliminate any HONO impurities of the NO2 supply; the overflow maintains a constant pressure through the reaction tube and the detection unit; the dotted boxes (blue, green, orange) indicate the three different parts: the gas supply, the reaction unit and the detection unit). NO2 was provided in a gas bottle (1 ppm in N2, Carbagas AG, Grümligen, Switzerland). NO2 was further diluted (mass flow controller, MFC3) with humidified pure nitrogen to achieve NO2 mixing ratios between 20 and 100 ppb. Impurities of HONO in the NO2 gas cylinder were removed by means of a HONO scrubber. The Na2CO3 trap was prepared by soaking 4 mm firebrick in a saturated Na2CO3 in 50 % ethanol-water solution and drying for 24 h. The impregnated firebrick granules were put into a 0.8 cm inner diameter and 15 cm long glass tube, which was closed by quartz wool plugs on both sides. A constant total flow (1400 mL min−1) was provided by means of another N2 mass flow controller (MFC2) that compensated for changes in NO2 addition. Different fractions of the total surface area (50, 70 and 100 %) of the reaction tube (50 cm × 0.81 cm i.d.) were coated with 2 mg BSA or nitrated OVA, respectively. For this purpose, 2 mg of protein was dissolved in 600 µL pure water, injected into the tube and then gently dried in a low-humidity N2 flow (RH ∼ 30-40 %) with continuous rotation of the tube. The coated reaction tube was exposed to the generated gas mixture and irradiated with either (i) zero, one, three or seven visible (VIS) lamps (400-700 nm; L 15 W/954, Lumilux de Luxe daylight, Osram, Augsburg, Germany), corresponding to 0, 23, 69 or 161 W m−2, respectively; or (ii) four VIS and three UV lamps (340-400 nm; UV-A, TL-D 15 W/10, Philips, Hamburg, Germany). An overview of the experiments performed during this study is shown in Table 1.
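The dilution described above (a 1 ppm NO2/N2 cylinder flow mixed with make-up N2 at a constant 1400 mL min−1 total flow) amounts to simple flow bookkeeping; the sketch below assumes ideal mixing, negligible NO2 in the dilution nitrogen, and that the MFC labels map as described in the text.

```python
def mfc_flows(target_ppb, total_flow_ml_min=1400.0, cylinder_ppm=1.0):
    """Flows needed to dilute the NO2/N2 cylinder down to a target mixing ratio
    at a constant total flow (ideal mixing assumed)."""
    no2_flow = total_flow_ml_min * target_ppb / (cylinder_ppm * 1000.0)  # cylinder flow (MFC3)
    n2_flow = total_flow_ml_min - no2_flow                               # make-up N2 (MFC2)
    return no2_flow, n2_flow

# Example: 100 ppb at 1400 mL/min requires 140 mL/min from the cylinder
# and 1260 mL/min of humidified N2.
print(mfc_flows(100))   # (140.0, 1260.0)
```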
Light-induced decomposition of nitrated proteins was studied on OVA. Instantaneous NO 2 transformation and its light and RH dependence on heterogeneous HONO formation were studied on BSA in short-term experiments. Extended studies on BSA were performed to explore the persistence of the surface reactivity and respective catalytic effects. A commercial long-path absorption photometry instrument (LOPAP, QUMA) was used for HONO analysis. The measurement technique was introduced by Heland et al. (2001). This wet chemical analytical method has an unmatched low detection limit of 3-5 ppt with high HONO collection efficiency (≥ 99 %). HONO is continuously trapped in a stripping coil flushed with an acidic solution of sulfanilamide. In a second reaction with n-(1-naphthyl)ethylenediamine-dihydrochloride an azo dye is formed, whose concentration is determined by absorption photometry in a long Teflon tubing. LOPAP has two stripping coils in series to reduce known interferences. In the first stripping coil HONO is quantitatively collected. Due to the acidic stripping solution, interfering species are collected less efficiently but in both channels. The true concentration of HONO is obtained by subtracting the interferences quantified in the second channel from the total signal obtained in the first channel. The accuracy of the HONO measurements was 10 %, based on the uncertainties of liquid and gas flow, concentration of calibration standard and regression of calibration. BSA nitration and degradation Nitrated proteins can trigger allergic response. The nitration of proteins can be enhanced by O 3 activation (in the dark). In the atmospheric environment, about half the time sunlight is present. What happens with irradiated proteins when exposed to NO 2 ? Can they be nitrated efficiently? To investigate the degree of protein nitration under illuminated conditions, BSA coated on the reaction tube (17.5 µg cm −2 ) was exposed to seven VIS lamps (40 % of a clear-sky irradiance for a solar zenith of 48 • ; Stemmler et al., 2006) and 100 ppb NO 2 at 70 % RH. After 20 h the BSA ND (concentration of nitrated tyrosine residues divided by the total concentration of tyrosine residues) investigated by means of the HPLC-DAD method was (1.0 ± 0.1) %, significantly higher than the ND of untreated BSA (0 %). Introducing UV radiation (four VIS plus three UV lamps) resulted in a slightly higher ND of (1.1 ± 0.1) %. Note that no intact protein (nitrated and nonnitrated) could be detected by HPLC-DAD after another 20 h of irradiation without NO 2 , indicating light-induced decomposition of proteins. However, the applied HPLC-DAD technique only detects (nitro-)tyrosine residues in proteins and does not provide information about protein fragments or single nitrated or non-nitrated tyrosine residues. Hence, proteins might have been decomposed while tyrosine remains in its nitrated form, not detectable by our analysis method. Similarly, proteins (here OVA) that were nitrated with TNM in aqueous phase prior to coating (21.5 µg cm −2 ) to an extent of 12.5 % also decomposed when illuminated about 6 h (one to seven VIS lights; with and without 20 ppb NO 2 ). Thus the nitration of proteins by light and NO 2 was confirmed, but with simultaneous gradual decomposition of the proteins. Effects of UV irradiation (240-340 nm) on proteins containing aromatic amino acids were reviewed previously (Neves-Peterson et al., 2012). 
It was shown that triplet state tryptophan and tyrosine can transfer electron to a nearby disulfide bridge to form the tryptophan and tyrosine radical. The disulfide bridge could break leading to conformational changes in the protein but not necessarily resulting in inactivation of the protein. In strong UV light (≈ 200 nm) the peptide bond could also break (Nikogosyan and Görner, 1999). Franze et al. (2005) analyzed a variety of natural samples (road dust, window dust and particulate matter PM 2.5 ) collected in the metropolitan area of Munich, containing 0.08-21 g kg −1 proteins, and revealed equivalent degrees of nitration (EDN, concentration of nitrated protein divided by concentration of all proteins) between 0.01 and 0.1 % only. Such low nitration degree is in line with light-induced decomposition of (nitrated) proteins. In contrast, an EDN up to 10 % (average 5 %) was found for BSA and birch pollen extract exposed to Munich ambient air for 2 weeks under dark conditions, with daily mean NO 2 (O 3 ) concentration of 17-50 ppb (7-43 ppb) in the same study, possibly suggesting the deficiency of decomposition without being irradiated. BSA and OVA loaded on syringe filters and exposed to 200 ppb NO 2 / O 3 for 6 days under dark conditions were nitrated to 6 and 8 %, respectively (Yang et al., 2010). Reinmuth-Selzle et al. (2014) found similar ND for major birch pollen allergen Bet v 1 loaded on syringe filters exposed to 80-470 ppb NO 2 and O 3 . When exposed for 3-72 h to NO 2 / O 3 at RH < 92 % the ND was 2-4 %, while at condensing conditions (RH > 98 %) the ND increased to 6 % after less than 1 day (19 h). The ND of Bet v 1 was considerably increased to 22 % for proteins solved in the aqueous phase (0.16 mg mL −1 ) when bubbling with a 120 ppb NO 2 / O 3 gas mixture for a similar period of time (17 h). Shiraiwa et al. (2012) performed kinetic modeling and found that maximum 30 % (conservative upper limit) of N uptake on BSA could be explained by NO 3 or N 2 O 5 , which are generated by the reaction of NO 2 and O 3 , while overall nitration was governed by an indirect mechanism in which a radical intermediate was formed by the reaction of BSA with ozone, which then reacted with NO 2 . On NaCl surface N uptake was dominated by NO 3 and N 2 O 5 . Furthermore, NO 3 radicals, which in this study could be formed by photolysis of NO 2 (> 410 nm, disproportionation of excited NO 2 ), are not stable under the light conditions applied (400-700 nm) (Johnston et al., 1996). Therefore, in the present study reactions with NO 3 were neglected. Photolysis of NO 2 forming NO (< 400 nm) can also be neglected (Gardner et al., 1987;Roehl et al., 1994). A photolysis frequency for NO 2 of up to 5×10 −4 s −1 under similar experimental light conditions was determined by Stemmler et al., 2007. Other nitration methods investigated by Reinmuth- Selzle et al. (2014), e.g., nitration of Bet v 1 with peroxynitrite (ONOO − , formed by reaction of NO with O − 2 ) or TNM, lead to ND between 10 and 72 % depending on reaction time, reagent concentration and temperature. Similarly, high NDs of 45-50 % were obtained by aqueous-phase TNM nitration of BSA and OVA by Yang et al. (2010). HONO formation from nitrated proteins To study HONO emission from nitrated proteins, OVA was nitrated with TNM (see Sect. 2.1) in liquid phase. The nitrated OVA (2 mg; ND = 12.5 %) was coated onto the reaction tube and exposed to VIS lights under either pure nitrogen flow or 20 ppb NO 2 gas. Strong HONO emissions were found. 
A high correlation between HONO emission and light intensity was observed (50 % RH; Fig. 3). Initially, we did not apply NO 2 . Thus the observed HONO formation (up to 950 ppt) originated from decomposing nitrated proteins rather than from heterogeneous conversion of NO 2 . However, when exposed to 20 ppb of NO 2 , HONO formation increased about 4-fold under dark conditions (50-200 ppt) and about 2-fold with seven VIS lamps turned on (950-1800 ppt). After 7 h of flow tube experiments (4.5 h irradiation with varying light intensities (0, 1, 3, 7 lights) + 2.5 h irradiation/20 ppb NO 2 (7, 3, 0 lights)), no intact protein was found according to the HPLC-DAD analysis. As proteins can efficiently be nitrated by O 3 and NO 2 in polluted air (Franze et al., 2005; Shiraiwa et al., 2012; Reinmuth-Selzle et al., 2014), the emission of HONO from light-induced decomposition of nitrated proteins could play an important role in the HONO budget. As proteins are nitrated at their tyrosine residues (at the ortho position to the OH group on the aromatic ring), the underlying mechanism of this HONO formation should be very similar to the HONO formation by photolysis of ortho-nitrophenols described by Bejan et al. (2006). This starts with a photo-induced hydrogen transfer from the OH group to the vicinal NO 2 group (Fig. 1), which leads to an excited intermediate from which HONO is eliminated subsequently. Light dependency To investigate HONO formation on an unmodified BSA coating (31.4 µg cm −2 ) dependent on light conditions, the radiation intensity (number of VIS lamps) was changed under otherwise constant conditions of exposure at 20 ppb NO 2 and 50 % RH. Decreasing light intensity revealed a linearly decreasing trend in HONO formation from about 1000 to 140 ppt (red symbols in Fig. 4). After re-illumination to the initial high light intensity the HONO formation was reduced by 32 % (blue symbol in Fig. 4). Stemmler et al. (2006) and Sosedova et al. (2011) also observed a similar saturation of HONO formation on humic, tannic and gentisic acid at higher light intensities. Stemmler et al. (2006) argued that surface sites activated for NO 2 heterogeneous conversion by light (Reaction R3) would become de-activated by competition with photo-induced oxidants (X * , Reactions R7-R8), e.g., primary chromophores or electron donors are oxidized by X * , which is in line with the observed decomposition of the native protein presented above. In other studies the NO 2 uptake coefficient on soot, mineral dust, humic acid and other solid organic compounds similarly increased at increasing light intensities (Bejan et al., 2006). NO 2 dependency At about 50 % relative humidity and high illumination intensities (seven VIS lamps, ∼ 161 W m −2 ), heterogeneous formation of HONO strongly correlated with the applied NO 2 concentration (Fig. 5). On a BSA surface of about 16.1 µg cm −2 (Table 1) the produced HONO concentration increased from 56 ppt at 20 ppb NO 2 to 160 ppt at 100 ppb NO 2 . Only at NO 2 levels well above those typically observed in natural environments (∼ 150 ppb) did this increasing trend slow down to some extent, indicative of saturation of active surface sites. A similar pattern of NO 2 dependence was also observed for light-induced HONO formation from humic acid (Stemmler et al., 2006). 
Figure 5. Blue triangles pointing down: humic acid aerosol with 100 nm diameter and a surface of 0.151 m 2 m −3 at 26 % RH and 1 × 10 17 photons cm −2 s −1 (Stemmler et al., 2007). Black circles: gentisic acid coating (160-200 µg cm −2 ) at 40-45 % RH and a light intensity similar to that in the humic acid aerosol study (Sosedova et al., 2011). Green diamonds: ortho-nitrophenol in the gas phase (ppm level) illuminated with UV/VIS light. Dotted lines are exponential fittings of the measured data points and are meant to guide the eyes. A similar saturation of HONO formation (up to 40 ppb NO 2 ) was observed when NO 2 was applied additionally during the gas-phase photolysis of nitrophenols (Fig. 5; Bejan et al., 2006). Even though the matrix (nitrophenols) and conditions (illuminated) of the latter are comparable to the experiment presented here, for BSA no clear indication of saturation was found up to 160 ppb of NO 2 , pointing to a highly reactive surface of BSA for NO 2 under illuminated conditions. As shown with Reactions (R7) and (R8), the concentration dependence depends on the competing channel (Reaction R8); it is therefore strongly matrix dependent, both in terms of chemical and physical properties. Impact of coating thickness Strong differences in HONO concentrations were found for experiments with different coating thicknesses under otherwise similar conditions (20 ppb of NO 2 , seven VIS lamps and 50 % RH). While only 55 ppt of HONO was observed for a shallow homogeneous coating of 16.1 µg cm −2 (217.6 nm thickness, see below) applied on the whole length of the tube, up to 2 ppb was found for a thick (more uneven) coating of 31.44 µg cm −2 (435.2 nm thickness) covering only 50 % of the tube (Fig. 6). Potential explanations are that a thicker coating leads to (1) more bulk reactions producing HONO or (2) different morphologies, e.g., higher effective reaction surfaces. Exposing different coated surface areas (20 %) in the flow tube potentially introduced a bias when comparing different data sets. Emitted HONO might be re-adsorbed differently by proteins and the glass surface. However, as the protein is slightly acidic, a low uptake efficiency of HONO by BSA can be anticipated, which should not differ too much from the uncovered glass tube surface. Accordingly, NO 2 uptake on glass is assumed to be significantly lower than on proteins. A strong increase in NO 2 uptake coefficients with increasing coating thickness was also observed for humic acid coatings (Han et al., 2016). However, they found an upper threshold value of 2 µg cm −2 of cover load (20 nm absolute thickness, assuming a humic acid density of 1 g cm −3 ), above which uptake coefficients were found to be constant. The authors also proposed that NO 2 can diffuse deeper into the coating and that below 2 µg cm −2 the full cover depth would react with NO 2 . For proteins, the number of molecules per monolayer depends on their orientation, and the respective layer thickness can vary accordingly. One (dry, crystalline) BSA molecule has a volume of about 154 nm 3 (Bujacz, 2012). In a flat orientation (4.4 nm layer height and a projected area of 35 nm 2 molecule −1 ), 3.64 × 10 14 molecules (40.5 µg; 0.32 µg cm −2 ) of BSA are needed to form one complete monolayer in the flow tube (i.d. of 0.81 cm, 50 cm length, 100 % surface coating). Hence, the thinnest BSA coating applied in the experiment (16.1 µg cm −2 ) would consist of 50 monolayers, corresponding to a total coating thickness of 217.6 nm, and the thickest BSA coating (31 µg cm −2 ) would have 99 monolayers and an absolute thickness of 435.1 nm. At the other extreme (non-flat) orientation, more BSA molecules are needed to sustain one monolayer: with 21.7 nm 2 of projected area per molecule and 7.1 nm monolayer height, 5.86 × 10 14 molecules of BSA are needed to form one complete monolayer in the flow tube. 
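The monolayer bookkeeping above can be reproduced numerically. The sketch below uses only quantities quoted in the text (66.5 kDa molecular mass, the two molecular footprints and layer heights, and the tube geometry); Avogadro's number is the only added constant, and the helper function is ours.

```python
import math

N_A = 6.022e23            # mol^-1
M_BSA = 66500.0           # g/mol (66.5 kDa, from the text)

# Flow tube geometry (from the text): 50 cm length, 0.81 cm inner diameter
inner_area_cm2 = math.pi * 0.81 * 50.0          # lateral inner surface, ~127 cm^2

def monolayer(footprint_nm2, height_nm, loading_ug_cm2):
    """Molecules and mass per monolayer, plus number of monolayers and absolute
    thickness, for a given BSA loading (ug per cm^2 of tube wall)."""
    molecules = inner_area_cm2 * 1e14 / footprint_nm2      # 1 cm^2 = 1e14 nm^2
    mass_ug = molecules / N_A * M_BSA * 1e6
    per_cm2 = mass_ug / inner_area_cm2
    layers = loading_ug_cm2 / per_cm2
    return molecules, per_cm2, layers, layers * height_nm

# Flat orientation: 35 nm^2 footprint, 4.4 nm height; thinnest coating 16.1 ug/cm^2
print(monolayer(35.0, 4.4, 16.1))    # ~3.6e14 molecules, ~0.32 ug/cm^2, ~50 layers, ~220 nm
# Upright orientation: 21.7 nm^2 footprint, 7.1 nm height
print(monolayer(21.7, 7.1, 16.1))    # ~5.9e14 molecules per monolayer
```

The small differences to the quoted values (217.6 nm vs. roughly 220 nm here) come only from rounding of the intermediate monolayer mass.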
The coatings would thus consist of between 31 (thinnest) and 61 (thickest) monolayers of BSA. With a flat orientation, 1-2 % (by number or weight) of the BSA molecules would build the uppermost surface monolayer, whereas in an upright molecule orientation 1.6-3.3 % would be in direct contact with the ambient air at the surface. In the crystalline form several molecules of water stick tightly to BSA. As BSA is highly hygroscopic, more water molecules are adsorbed at higher relative humidity. At 35 % RH BSA is deliquesced (Mikhailov et al., 2004). Therefore the above described number of monolayers and the absolute layer thickness are lower bound estimates. Figure 6. HONO formation on three different BSA coating thicknesses, exposed to 20 ppb of NO 2 under illuminated conditions (seven VIS lamps). The HONO concentrations were scaled to reaction tube coverage (black: 100 % of the reaction tube was covered with BSA; light blue: 70 % of the tube was covered; red: 50 % of the tube was covered with BSA). The middle thick coating (22.46 µg cm −2 ) was replicated and studied with different reaction times (cyan and blue triangles). Solid lines (with circles or triangles) present continuous measurements; where those are interrupted, other conditions (e.g., light intensity, NO 2 concentration) prevailed. Dotted lines show interpolations and are meant to guide the eyes. Arrows indicate the intervals in which the shown decay rates were determined. Error bars indicate SDs from 10 to 20 measuring points (5-10 min). In conclusion, the thickness dependence of HONO formation is extremely complex. Activation and photolysis of nitrated Tyr occur throughout the BSA layer. The heterogeneous reaction of NO 2 may or may not be limited to the surface, depending on the solubility and diffusivity of NO 2 . Also the release of HONO may be limited by diffusion. The observed dependence on the coating thickness suggests the involvement of bulk reactions, but the reactions can happen in both the surface and the bulk phase. RH dependency The dependence of HONO emission on relative humidity is shown in Fig. 7. Here about 25 ppb of NO 2 was applied to a (not nitrated) BSA-coated flow tube (17.5 µg cm −2 ) both in dark and illuminated conditions (seven VIS lights). HONO formation scaled with relative humidity. Kleffmann et al. (1999) proposed that higher humidity inhibits the self-reaction of HONO (2 HONO (s,g) → NO 2 + NO + H 2 O), which leads to a higher HONO yield from heterogeneous NO 2 conversion. The RH dependence of HONO formation on proteins is different from that on other surfaces. For example, no influence of RH has been observed for dark heterogeneous HONO formation on soot particles sampled on filters (Arens et al., 2001). No impact of humidity on NO 2 uptake coefficients on pyrene was detected (Brigante et al., 2008). For HONO formation on tannic acid coatings (both at dark and irradiated conditions) a linear but relatively weak dependence has been reported between 10 and 60 % RH, while below 10 % and above 60 % RH the correlation between HONO formation and RH was much stronger (Sosedova et al., 2011). Similar results were obtained for anthrarobin coatings by Arens et al. (2002). This type of dependence of HONO formation on phenolic surfaces on RH resembles the HONO formation on glass, following the BET isotherm of water uptake on polar surfaces (Finlayson-Pitts et al., 2003; Sumner et al., 2004). 
For humic acid surfaces the NO 2 uptake coefficients also weakly increased below 20 % RH and were found to be constant between 20 and 60 % (Stemmler et al., 2007). While on solid matter chemical reactions are essentially confined to the surface rather than in the bulk, proteins can adopt an amorphous solid or semisolid state, influencing the rate of heterogeneous reactions and multiphase processes. Molecular diffusion in the non-solid phase affects the gas uptake and respective chemical transformation. Shiraiwa et al. (2011) could show that the ozonolysis of amorphous protein is kinetically limited by bulk diffusion. The reactive gas uptake exhibits a pronounced increase with relative humidity, which can be explained by a decrease of viscosity and increase of diffusivity, as the uptake of water transforms the amorphous organic matrix from a glassy to a semisolid state (moisture-induced phase transition). The viscosity and diffusivity of proteins depend strongly on the ambient relative humidity because water can act as a plasticizer and increase the mobility of the protein matrix (for details see Shiraiwa et al., 2011, and references therein). Shiraiwa et al. (2011) further showed that the BSA phase changes from solid through semisolid to viscous liquid as RH increases, while trace gas diffusion coefficients increased about 10 orders of magnitude. This way, characteristic times for heterogeneous reaction rates can decrease from seconds to days as the rate of diffusion in semisolid phases can decrease by multiple orders of magnitude in response to both low temperature (not investigated in here) and/or low relative humidity. Accordingly, we propose that HONO formation rate depends on the condensed-phase diffusion coefficients of NO 2 diffusing into the protein bulk, HONO released from the bulk and mobility of excited intermediates. Long-term exposure with NO 2 under irradiated conditions To study long-term effects of irradiation on HONO formation from proteins, flow tubes were coated with 2 mg BSA (17.5 ± 0.4 µg cm −2 ; 90 % of total length) and exposed to 100 ppb NO 2 , at 80 % RH at illuminated conditions for a time period of up to 20 h (Fig. 8). Samples illuminated with VIS light only (red and orange colored lines in Fig. 8) showed persistent HONO emissions over the whole measurement period. For unknown reasons, and even though the observed HONO concentrations were within the expected range with regard to the applied NO 2 concentrations, RH and cover characteristics, one sample (orange in Fig. 8) showed a sharp short-term increase in the initial phase followed by respective decrease, not in line with all other samples (compare Fig. 6). However, after 4 h both VIS irradiated samples showed virtually constant HONO emissions (−3.8 and +1.6 ppt h −1 , respectively). The sample illuminated with UV and VIS light (three UV and four VIS lamps) showed a sustained sharp increase in the first 4 h, followed by persistent and very stable (decay rate as low as −0.5 ppt h −1 ) HONO emissions at an about 3-fold higher level compared to samples irradiated with VIS only. HONO formation by photolysis of (adsorbed) HNO 3 is assumed to be insignificant in this study. With N 2 as carrier gas, gas-phase reactions of NO 2 do not produce HNO 3 . Even when small amounts of HNO 3 would be formed by unknown heterogeneous reactions, photolysis of HNO 3 is only significant at wavelengths < 350 nm, which is close to the lowest limit of the UV wavelength applied in this study. 
Likewise, the respective photolysis frequency recently proposed by Laufs and Kleffmann (2016), about 2.4 × 10 −7 s −1 , is very low. If BSA acts like a catalytic surface, as in a Langmuir-Hinshelwood reaction, each BSA molecule can react several times with NO 2 to heterogeneously form HONO. As described in Sect. 3.1, BSA nitration is in competition with NO 2 surface reactions, and only a limited number of NO 2 molecules could react with BSA to form HONO via nitration of proteins and subsequent decomposition of the nitrated proteins. A BSA molecule contains 21 tyrosine residues, which could react with NO 2 . However, even a strong nitrating agent such as TNM is not capable of nitrating all tyrosine residues, and a mean ND of 19 % was found (Peterson et al., 2001; Yang et al., 2010); i.e., about four tyrosine residues of one BSA molecule can be nitrated to form HONO. As 2 mg of BSA was applied for each flow tube coating, a total of 1.8 × 10 16 protein molecules can be inferred. In 20 h of irradiation with VIS light, 13-22 % of the accessible Tyr residues (four Tyr per BSA molecule) would have reacted. When irradiating with additional UV lights, at least 56 % of the tyrosine residues would have been nitrated and decomposed. However, as NO 2 is a much weaker nitrating agent and nitration of only one tyrosine residue is probable (ND of BSA with O 3 / NO 2 : 6 %; Yang et al., 2010), up to 85 % of the BSA molecules would have reacted when irradiated with VIS lights, and even more HONO molecules than coated BSA molecules would have been generated under UV/VIS light conditions. Other amino acids of the protein like tryptophan or phenylalanine might also be nitrated, but without formation of HONO (Goeschen et al., 2011). Hence, a contribution of heterogeneous conversion of NO 2 can be anticipated. Kinetic studies The experimental results (especially the stability over a long time) indicate that the formation of HONO from NO 2 on protein surfaces likely follows a Langmuir-Hinshelwood mechanism in which the protein acts as a catalytic surface (Fig. 9). The first step is the fast, reversible physical adsorption of NO 2 (k 1 ) and water, followed by the slow conversion into HONO. There are two possible processes for the HONO formation: HONO is formed by heterogeneous NO 2 conversion (k 2 ) but also via nitration and decomposition of nitrated proteins (k 4 , k 5 ). The final step of the mechanism is the release of the generated HONO into the air. Since proteins are in general slightly acidic, the desorption of HONO (k 3 ) should be fairly fast. Pseudo-first-order kinetics are assumed for the reaction of NO 2 to HONO (Stemmler et al., 2007), and the reaction can be described as d[HONO] g /dt = k eff [NO 2 ] (Eq. 1), with k eff the effective pseudo-first-order rate constant (for more detailed information see the Supplement). Figure 9. Schematic illustration of the underlying Langmuir-Hinshelwood mechanism of light-induced HONO formation on a protein surface. Reaction constants for NO 2 uptake, direct NO 2 conversion, protein nitration, HONO formation from decomposing nitrated proteins and HONO release are indicated by k 1 , k 2 , k 4 , k 5 , and k 3 . In this study, neither HONO nor NO 2 photolysis is considered, as the overlap of the applied UV/VIS or VIS range (340-700 or 400-700 nm) with the HONO and NO 2 photolysis spectrum (< 400 nm) is low. 
Furthermore, the applied light intensity is lower compared to clear-sky irradiance, and the respective UV light is partly absorbed by the reaction tube even though quartz glass was used (transmission ∼ 90 %), so that the photolysis frequency would decrease down to 10 −4 s −1 . Hence, the photolysis is assumed to be not significant. In the first 5-10 min of the long-term experiments, HONO increased (Fig. 8, zoomed-in range). This slope was taken as d[HONO] g /dt in Eq. (6). Effective rate constants between 1.48 × 10 −6 s −1 (VISa) and 7.40 × 10 −6 s −1 (VISb) were calculated. When irradiating with VIS light only, the concentration of HONO was either constant or decreased for 2 h after this first 10 min. When irradiating with additional UV light, the HONO signal showed an enhancement in two steps: in the first 10 min it was strongly increasing (1327 ppt h −1 ), and in the next hour it increased less, with 170 ppt h −1 , prior to stabilization. Therefore two rate constants of 4.10 × 10 −6 and 5.2 × 10 −7 s −1 were obtained, respectively. Reactive uptake coefficients for NO 2 were calculated according to Li et al. (2016). For both irradiation types the uptake coefficient γ was in the range of 7 × 10 −6 at the very beginning of each experiment. After a few minutes it decreased to a mean of 1 × 10 −7 . The calculated k eff values and uptake coefficients are in the same range and match the NO 2 uptake coefficients on irradiated humic acid surfaces (coatings) and aerosols obtained by Stemmler et al. (2006, 2007), which were between 2 × 10 −6 and 2 × 10 −5 (coatings) and between 1 × 10 −6 and 6 × 10 −6 (aerosols), depending on NO 2 concentrations and light intensities. Similar NO 2 uptake coefficients on humic acid were observed by Han et al. (2016). George et al. (2005) reported about 2-fold increased NO 2 uptake coefficients for irradiated organic substrates (benzophenone, catechol, anthracene) compared to dark conditions, on the order of (0.6-5) × 10 −6 . NO 2 uptake coefficients on gentisic acid and tannic acid were (3.3-4.8) × 10 −7 (Sosedova et al., 2011), still higher than on fresh soot or dust (about 1 × 10 −7 ; Monge et al., 2010; Ndour et al., 2008). The NO 2 uptake coefficients on BSA in the presence of O 3 (1 × 10 −5 , for 26 ppb NO 2 and 20 ppb O 3 ) published by Shiraiwa et al. (2012) were somewhat higher than the values calculated here without O 3 but with light. It was not possible to extract a set of parameters for a Langmuir-Hinshelwood mechanism (like the Langmuir equilibrium constant, surface accommodation coefficient or second-order rate constant) from the presented data. The saturating behavior of photochemical HONO production may be due either to the adsorbed precursor on the surface or to a photochemical competition process, which also leads to a Lindemann-Hinshelwood type kinetic expression (Minero, 1999). 
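The order of magnitude of the reported k eff values can be checked with a strongly simplified estimate: the initial HONO growth rate divided by the applied NO 2 mixing ratio. The sketch below deliberately ignores the flow and geometry terms of the full expression (Eq. 6 of the Supplement), so it is only a consistency check, not the calculation used in this work; the function name is ours.

```python
def k_eff_simple(dHONO_dt_ppt_per_h, no2_ppb):
    """Simplified pseudo-first-order rate constant: initial HONO growth rate divided
    by the applied NO2 mixing ratio (both as mixing ratios). Flow and geometry
    corrections of the full expression in the Supplement are neglected."""
    dHONO_dt_per_s = dHONO_dt_ppt_per_h / 3600.0   # ppt s^-1
    return dHONO_dt_per_s / (no2_ppb * 1000.0)     # ppb -> ppt

# Initial UV/VIS phase of the long-term run: ~1327 ppt h^-1 at 100 ppb NO2
print(f"{k_eff_simple(1327.0, 100.0):.2e} s^-1")   # ~3.7e-6, same order as the reported 4.10e-6
```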
Summary and conclusion Photochemical nitration of proteins accompanied by formation of HONO by (i) heterogeneous conversion of NO 2 and (ii) decomposition of nitrated proteins was studied under relevant atmospheric conditions. NO 2 concentrations ranged from 20 ppb (typical for urban regions in Europe and the USA) up to 100 ppb (representative for highly polluted industrial regions). The applied relative humidity of up to 80 % and light intensities of up to 161 W m −2 are common on cloudy days. Under illuminated conditions very low nitration of proteins or even no native protein was observed, indicating a light-induced decomposition of nitrated proteins to shorter peptides. These might still include nitrated residues, whose potential health effects are not yet known. An average effective rate constant of the total NO 2 -HONO conversion of 3.3 × 10 −6 s −1 (for about 120 cm 2 of protein surface, layer thickness 240 nm and a layer volume of 0.003 cm 3 ; surface/volume ratio ∼ 40 000 cm −1 ), or 8.25 × 10 −8 s −1 cm −2 of BSA layer, was obtained. At 20 ppb NO 2 , a HONO formation of 19.8 ppb h −1 m −2 on a pure BSA surface could be estimated. While heterogeneous HONO formation of BSA exposed to NO 2 revealed light saturation at intensities higher than 161 W m −2 , the HONO formation from previously nitrated OVA was linearly increasing over the whole light intensity range investigated. The latter suggests even higher HONO formation under sunny (clear-sky) ambient atmospheric conditions. No data about representative protein surface areas on atmospheric aerosol particles are available. However, the number and mass concentrations of primary biological aerosol particles such as pollen, fungal spores and bacteria, containing proteins, are in the range of 10-10 4 m −3 and 10 −3 -1 µg m −3 , respectively (Despres et al., 2012; Shiraiwa et al., 2012). Typical aerosol surface concentrations in rural regions are about 100 µm 2 cm −3 . Stemmler et al. (2007) estimated a HONO formation of 1.2 ppt h −1 on pure humic acid aerosols under environmental conditions. As NO 2 uptake coefficients and HONO formation rates on proteins are similar to those on humic acid, but only about 5 % of the aerosol mass can be assumed to consist of proteins, it can be anticipated that HONO formation on aerosol is not a significant HONO source in ambient environmental settings. However, proteins on ground surfaces (soil, plants, etc.) might play a more important role. Accordingly, Stemmler et al. (2006, 2007) suggested that NO 2 conversion on soil covered with humic acid would be sufficient to explain missing HONO sources of up to 700 ppt h −1 . Therefore it is difficult to estimate the importance of HONO formation on protein surfaces and its contribution to the HONO budget. In many studies the calculated unknown source strength of daytime HONO formation is within a range of about 200-800 ppt h −1 (Acker et al., 2006; Li et al., 2012). Data availability. Please contact the corresponding authors Hang Su <EMAIL_ADDRESS> or Yafang Cheng <EMAIL_ADDRESS> for more information on data.
10,344.6
2017-10-06T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Microscopic Mechanism of Cement Improving the Strength of Lime-Fly Ash-Stabilized Yellow River Alluvial Silt Silt is a kind of soil with poor engineering performance. Lime-fly ash(LF-) stabilized silt has the problem of low early strength. In this study, it is aimed to investigate the effect of cement on improving the strength of LF-stabilized silt and reveal the microscopic mechanism. A fixed percentage of LF (18%) plus different percentages of cement (0%, 2%, 4%, and 6%) were mixed with Yellow River alluvial silt (YRAS). Soil samples for tests were artificially made by compaction in the laboratory. Unconfined compressive strength (UCS) tests were performed on soil samples cured for 7 d, 28 d, 60 d, and 90 d. Scanning electron microscope (SEM) tests, energy dispersive X-ray spectroscopy (EDS) tests, and mercury intrusion porosimetry (MIP) tests were performed on soil samples cured for 7 d and 28 d. UCS results showed that the early strength of LF-stabilized YRAS developed significantly after adding cement. UCS also increased with the increase in cement content and curing time. SEM results revealed the differences in microstructure of LF-stabilized YRAS before and after adding cement. Before adding cement, the main microstructure characteristics included small soil particles, large number of pores, and loose particle arrangement. After adding cement, the main microstructure characteristics included large bonded particles, small number of pores, and dense particle arrangement. +e EDS results showed that, after curing for 28 d, the elements of gels in stabilized YRAS had changed, mainly including appearance of C and a significant increase of Ca. MIP results showed that the pores with a size of 1 μm∼10 μm accounted for the largest proportion in stabilized YRAS. +e product (mainly C-S-H) of cement hydration mainly filled the pores with a size larger than 10 μm at the early stage. Combining strength results and microresults, the micromechanism of cement improving the strength of LF-stabilized YRAS was discussed. Introduction Silt is a fine-grained soil or the fine-grained portion of soil, with a plasticity index less than 4 or if the plot of plasticity index versus liquid limit falls below the "A" line [1]. Silt can be seen in many areas of China, such as Jiangsu, Anhui, Hubei, Henan, Shandong, Shanxi, and many other provinces. Silt from different areas may have different engineering characteristics. Much silt can be seen in the Ancient Yellow River district of Jiangsu Province, China. Silt from this area is the product of alluvial action of the Yellow River. So, it is called the Yellow River alluvial silt (abbreviated as YRAS). is type of soil has poor engineering properties. It is difficult to compact in dry conditions, and it is easily liquefied under dynamic load. In addition, it has the disadvantages of low strength and low stiffness. Without effective treatment, this soil as a foundation may cause many problems, such as uneven settlement, excessive lateral deformation, and building instability [2]. ere are many ways to improve the engineering properties of soil, in which stabilization is economical, fast, and efficient. Soil stabilization means improving the engineering properties of soil by adding curing agent. After adding the curing agent into the soil, complex physical and chemical actions will occur in the mixture of soil and curing agent. Because of its good effect, soil stabilization has been widely used in civil engineering [3][4][5][6][7][8][9][10][11]. 
After stabilization, many engineering properties of soil can be improved. Pu et al. [7] carried out mechanical tests on silt stabilized by lime, lime-cement, and SEU-2 binder and found that adding curing agent could effetely improve the unconfined compressive strength and water stability of silt. Pu et al. [12] also conducted one-dimensional consolidation tests and consolidated undrained triaxial compression tests on silt stabilized by SEU-2 binder and revealed that the deformation properties and shear strength had been greatly improved after stabilization. e study of Wang et al. [13] also showed that cement and lime stabilization improved the undrained shear strength of sediment from Dunkirk Port in France. Dispersive and expansive soils could cause serious problems for many engineering structures. To solve this problem, Türköz et al. [14,15] innovatively used cement-natural zeolite mixtures and silica fume-lime mixtures to stabilize clay soil with dispersive and swell properties, respectively. A series of tests including swell percentage, swell pressure, crumb, pinhole, unconfined compressive strength, and unconsolidated-undrained triaxial compression tests were performed on stabilized clay soil. e results showed that cement-natural zeolite mixtures and silica fume-lime mixtures can not only significantly improve swell and dispersive characteristics of soil but also effectively increase the strength. Soil structure has a great influence on soil engineering performance, especially for undisturbed soil and stabilized soil. Wang et al. [16] studied the influence of cement/lime on the nonlinear stress-strain behaviour with relation to the constrained modulus. e study showed that cement and lime were important for the strength development of soil, and they could increase the compression index of soil. Similar conclusions were obtained in the study of the one-dimensional compression behaviour of cement-lime-stabilized soft clay [17]. In a word, stabilization can effectively improve the engineering performance of soil. Traditional curing agents used for soil stabilization mainly include cement [13,18,19], lime [20,21], and other calcium-based curing materials. Some new curing materials may also have a good curing effect. For example, carbonated reactive magnesia has a good ability to stabilize silt [22,23]. For some special soil, a curing agent alone does not work well. e mixtures of industrial by-products and cement/ lime may have a better curing effect and be more economical. e mixtures used include cement-fly ash [5,24], cement-lime [7,25], cement-zeolite [15], and silica fumelime [14]. As a kind of stabilizer, the mixture of lime and fly ash has the advantages of low cost, easy construction, and good stabilizing effect, and so it has been widely used in road engineering [26][27][28][29]. At present, some scholars have conducted relevant research on the engineering characteristics of lime-fly ash-stabilized silt. e study shows that lime-fly ash-stabilized silt has the advantages of high late strength, good integrity, high resistance to bending, and high modulus [30,31]. In view of the great engineering properties, lime-fly ash-stabilized YRAS (abbreviated as LF-YRAS) was originally intended to be used as subbase of a highway in Jiangsu Province of China. Preliminary tests showed that LF-YRAS has the advantages of good integrity and high late strength, but it also has the disadvantage of low early strength, which may affect construction progress. 
In order to improve the early strength of LF-YRAS, cement was chosen as an additional additive because of its characteristics of rapid hydration, short setting time, and high early strength [15,32,33]. The macroscopic mechanical properties of soil essentially depend on its microstructure, so studying soil microstructure characteristics helps to better understand the mechanism of strength improvement [34,35]. Some scholars have used different microscopic methods to study the microstructure characteristics of soil or stabilized soil, including scanning electron microscopy (SEM) [36][37][38][39][40], mercury intrusion porosimetry (MIP) [41][42][43], energy dispersive X-ray spectroscopy (EDS) [44][45][46], and X-ray diffraction [25,47]. Using microscopic technology to study soil microscopic properties has become quite popular and mature. In this study, three kinds of microscopic methods, i.e., SEM, EDS, and MIP, were used to explore the microscopic properties of stabilized YRAS. In short, in this study, unconfined compression strength (UCS) tests, SEM tests, EDS tests, and MIP tests were carried out on stabilized silt samples. Based on the results of UCS tests, the effects of cement on improving the early strength of LF-YRAS will be explored, and the influence of additive amount and curing age on strength will be studied. Based on the results of SEM, EDS, and MIP tests, the microstructure, element composition, and pore characteristics of LF-YRAS before and after adding cement will be compared. Finally, the microscopic mechanism of cement improving the strength of LF-YRAS will be discussed. Test Materials. Because the tested soil samples were artificially made in the laboratory, disturbed soil was collected using a simple shovel from the shallow layer (about 0.5 m deep) of the waste Yellow River district in the northern part of Jiangsu Province, China. The specific location is 34°04′N and 119°48′E, as shown in Figure 1. The soil grain size distribution measured using a laser particle size analyzer is shown in Figure 2. It can be seen from Figure 2 that the contents of clay-size (<5 μm) particles, silt-size (5∼75 μm) particles, and sand-size (>75 μm) particles are 8.78%, 58.79%, and 32.43%, respectively. The basic physical properties of the soil are shown in Table 1, and the soil chemical composition measured using an X-ray fluorescence spectrometer is shown in Table 2. According to [49], the fly ash was tested, and the results showed that the total content of SiO 2 , Fe 2 O 3 , and Al 2 O 3 was 79.16%. The content of CaO in the fly ash was 0.64%. The loss on ignition of the fly ash was 7.79%. The specific surface area of the fly ash was 2520 cm 2 /g. The cement used in the tests was ordinary Portland cement P.O32.5 purchased from Nanjing City of China. The cement was off-white and powdery. According to the technical specification of Portland Cement for Road (GB 13693-2005) [50] of China, the cement was tested, and the results showed that the main components of the cement were CaO (55.37%), SiO 2 (25.41%), and Al 2 O 3 (10.09%). The initial setting time and final setting time of the cement were 205 min and 260 min, respectively. Test Scheme. The test scheme is shown in Table 3. As shown in Table 3, the content of lime + fly ash is fixed at 18% (6% lime and 12% fly ash), and the content of cement is 2%, 4%, and 6%. For brevity, in Table 3, C2, C4, and C6 represent the soil added with 2%, 4%, and 6% cement, respectively. According to the test plan in Table 3, standard compaction tests were first performed on YRAS, LF-YRAS, C2, C4, and C6 in accordance with the procedure in Test Methods of Soils for Highway Engineering (JTG E40-2007) [48]. The test results are shown in Table 4. During the compaction tests, the diameter of the hammer was 5 cm, the mass of the hammer was 2.5 kg, and the drop height was 30 cm. A test tube with a diameter of 10 cm and a height of 12.7 cm was used. The soil was filled in 3 layers, and the number of hammer blows per layer was 27. The energy density of hammering was 598.2 kJ/m 3 . 
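The quoted energy density of hammering follows directly from the hammer and mould dimensions given above. The sketch below reproduces it; the standard gravity value of 9.81 m s −2 is the only quantity not stated in the text.

```python
import math

hammer_mass_kg = 2.5
drop_height_m = 0.30
blows_per_layer = 27
layers = 3
g = 9.81  # m s^-2, standard gravity

mould_diameter_m = 0.10
mould_height_m = 0.127
mould_volume_m3 = math.pi * (mould_diameter_m / 2) ** 2 * mould_height_m

energy_J = layers * blows_per_layer * hammer_mass_kg * g * drop_height_m
energy_density_kJ_m3 = energy_J / mould_volume_m3 / 1000.0
print(f"{energy_density_kJ_m3:.1f} kJ/m^3")   # ~598, close to the quoted 598.2 kJ/m^3
```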
Test Methods. Soil samples for UCS tests were artificially made in the laboratory. The method of making samples was strictly in accordance with the Chinese specification Test Methods of Soils for Highway Engineering (JTG E40-2007) [48]. First of all, according to the test plan in Table 3, silt and additive (lime-fly ash or cement-lime-fly ash) were mixed and stirred uniformly in a small mixer. Secondly, referring to the compaction test results in Table 4, the optimal amount (by dry weight of the silt-additive mixture) of distilled water was added to the soil. The water contents were 18.9%, 18.0%, 17.7%, and 17.5% for LF-YRAS, C2, C4, and C6, respectively. After adding water, the soil was stirred again until completely homogeneous. Then, the wetted soil was immediately put into the sample maker and compacted. In order to be consistent with practical engineering, the degree of compaction was 96%, achieved by controlling the sample weight. After compaction, a cylindrical sample with a diameter of 5 cm and a height of 10 cm was made. A study showed that delayed compaction had a significant effect on the strength of lime-stabilized clay soil [51]. In order to avoid the effect of delayed compaction, in this study every sample was made within one hour after adding water. Next, the sample was sealed in a plastic bag and placed in a standard curing room (temperature 20 ± 3°C and relative humidity ≥95%) for curing. Temperature is a key factor affecting the speed of soil stabilization [52], so the curing temperature was the same and consistent for all samples. For standard curing, the curing time was 7 d, 28 d, 60 d, and 90 d [53]. First of all, the sample for the SEM test was carefully broken into small pieces, of which a small piece (about 1 cm 3 ) with a clear section was selected. The clear section should not be disturbed before testing. Secondly, the small piece was rapidly frozen in liquid nitrogen (−190°C) and then placed in a vacuum freeze-drying apparatus named XIANOU-18N for 24 h. Next, the small dry piece was plated with a thin layer of carbon to improve conductivity. Finally, SEM tests were performed on the small piece using a Scanning Electron Microscope S-3000N, which was produced by the Hitachi Company of Japan. The EDS tests and SEM tests were performed simultaneously. In the SEM views, some points (also called microareas) that can reflect the typical morphology of the sample microstructure were selected, and then the EDS tests were performed on these points one by one. By detecting and analyzing the characteristic X-rays from these points, the types and contents of the elements could be obtained by EDS. The EDS tests were carried out in accordance with the procedures in Microbeam Analysis-Quantitative Analysis using Energy Dispersive Spectrometry (GB/T 17359-2012) [54]. Except for plating a thin layer of conductive material on the surface, sample preparation for MIP tests was the same as that for SEM tests. 
After freeze-drying, MIP tests were directly performed on the dried samples using an automatic mercury porosimeter called AutoPore IV9500, which was produced by Micromeritics Instrument Corporation. The tests were carried out in strict accordance with the procedures in Pore Size Distribution and Porosity of Solid Materials by Mercury Porosimetry and Gas Adsorption-Part 1: Mercury Porosimetry (GB/T 21650.1) [55]. UCS Test. The UCS test results are shown in Figure 3. It can be seen from Figure 3 that LF-YRAS has a low early UCS. The UCS only reaches 73.5 kPa after 7 d standard curing and 224.2 kPa after 28 d standard curing, which does not meet the requirements for subsequent construction. It can also be seen from Figure 3 that UCS increases with curing time and that adding cement can effectively improve the UCS of LF-YRAS, especially at the early stage. By adding 4% cement, the UCS of LF-YRAS increases to 285.2 kPa (7 d standard curing) and 575.3 kPa (28 d standard curing), which meets the requirements for subsequent construction. In addition, it can be found that adding cement can improve the water stability of LF-YRAS. For LF-YRAS, the UCS of the water curing sample is lower than that of the standard curing sample, while after adding cement the situation is reversed. In order to further reflect the contribution of cement to improving the UCS of LF-YRAS at different curing stages, the term "UCS growth rate" is introduced. The UCS growth rate refers to the ratio of the net strength increase after adding cement to the strength of LF-YRAS before adding cement. In brief, the physical implication of this term is the net strength increase (%) of LF-YRAS after adding cement, and this term helps to clearly see how much cement has contributed to the strength increase and at which curing stage the cement is more effective. UCS growth rates at different curing times are shown in Figure 4. Figure 4 shows that the UCS growth rates have higher values at the early stage (curing for 7 d), then gradually decrease as curing time increases, and finally tend to be stable after 28 d curing. This means that cement is more effective for the early strength development of LF-YRAS. Many scholars have studied the relationship between strength and curing time of stabilized soil. Horpibulsuk et al. [18] proposed the following relationship (equation (1)) between UCS and curing time of cement-stabilized soil: UCS t /UCS 28d = A + B ln(t), where t is curing time (days), A is a constant, B is a coefficient, UCS t is the UCS of stabilized soil after curing for t, and UCS 28d is the UCS of stabilized soil after curing for 28 d. Referring to the method of Horpibulsuk, the UCS of stabilized YRAS is normalized similarly in this study. The normalized result is shown in Figure 5. From Figure 5, it can be seen that the correlation between normalized UCS and curing time is quite good (R 2 = 0.97). The fitting relationship between normalized UCS and curing time (equation (2)) is UCS t /UCS 28d = a + bt, where a and b are fitted constants, t is curing time (days), UCS t is the UCS of stabilized YRAS after curing for t, and UCS 28d is the UCS of stabilized YRAS after curing for 28 d. It can be seen from equation (1) that, when curing time t is equal to 0, the equation has no meaning. This would mean that soil samples that have just been made have an infinitely low negative strength, which is inconsistent with the actual situation. Unlike equation (1), equation (2) has no such defect. In equation (2), the relationship between normalized UCS and curing time is linear, and the equation has meaning at the point of t = 0. According to equation (2), once the UCS after 28 d curing is known, the UCS at other curing times can be predicted, which has practical value for optimizing YRAS stabilization design and shortening the construction period. It can be inferred from the difference between equations (1) and (2) that these two equations are not universal for all stabilized soils. Equation (1) is for high water content clay stabilized by cement, while equation (2) is for silt stabilized by a mixture of cement, lime, and fly ash. Different stabilized soils may have different equations. Whether equation (2) can be applied to other types of soil still needs further study. 
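The practical use of the normalized-UCS relation, fitting the linear form to measured strengths and then predicting the strength at other curing times from the 28 d value, can be sketched as below. The curing-time/strength pairs in the script are placeholders rather than the measured data behind Figure 5, so the fitted coefficients are illustrative only.

```python
import numpy as np

# Placeholder data: (curing time in days, UCS normalized by the 28 d value).
# These are NOT the measured values of this study; substitute the data behind Figure 5.
t = np.array([7.0, 28.0, 60.0, 90.0])
ucs_norm = np.array([0.5, 1.0, 1.6, 2.1])

b, a = np.polyfit(t, ucs_norm, 1)        # linear form UCS_t / UCS_28d = a + b*t
print(f"a = {a:.3f}, b = {b:.4f} per day")

def predict_ucs(t_days, ucs_28d_kPa):
    """Predict UCS at an arbitrary curing time from the measured 28 d strength."""
    return (a + b * t_days) * ucs_28d_kPa

print(predict_ucs(60.0, 575.3))          # e.g., C4 with UCS_28d = 575.3 kPa
```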
SEM Test. The SEM test results are shown in Figure 6, in which Figure 6(a) shows the SEM views of LF-YRAS and Figure 6(b) shows the SEM views of C2. As marked in Figures 6(a) and 6(b), the bright parts represent soil particles and the dark parts represent pores between particles. From Figure 6(a), some obvious microstructure features of LF-YRAS can be seen. First, it can be seen that the sizes of the soil particles are quite small (mainly 10 μm∼100 μm). This size distribution is nearly consistent with the silt in Figure 2, in which silt-size (5∼75 μm) particles account for 58.79%. Second, by comparing with Figure 6(b), it can be seen that there are more and larger pores in LF-YRAS. Considering the same sample preparation, the reason for the more and larger pores is that there are few gels to coat soil particles and fill pores. Third, it can be seen that the arrangement of soil particles is quite loose. There are few connections between soil particles, leading to an unstable microstructure. Once loaded, the soil particles can turn and slide more easily, which corresponds macroscopically to low strength. In summary, the main microstructure features of LF-YRAS include small soil particles, a large number of pores, and a loose particle arrangement. The reason why LF-YRAS has such microstructure features is that there are few gels. At this early stage, the hydration reaction of CaO, the ion exchange reaction of Ca 2+ , and the crystallization reaction of Ca(OH) 2 proceed slowly, which produces small amounts of gels such as Ca(OH) 2 and Ca(OH) 2 •nH 2 O. In addition, the carbonation reaction of Ca(OH) 2 and the pozzolanic reaction proceed much more slowly, so not enough gels are produced at this stage. For stabilized soil, the quantity and strength of gels are the most important factors for improving soil strength, so few gels mean low strength. From Figure 6(b), some microstructure features of C2 can be seen. First, it can be seen that the particles in C2 are coated and bonded by gels, which is quite different from LF-YRAS. The coated and bonded particles have larger sizes. Second, it can be seen that, compared with LF-YRAS, C2 has fewer and smaller pores. The reason is that many gels have filled the pores between particles. Third, it can be seen that the arrangement of soil particles is very dense. Because of the coating, bonding, and filling of gels, C2 has a stable microstructure, and the particles are not easy to turn and slide when subjected to load, which is macroscopically high in strength. In summary, after adding 2% cement to LF-YRAS, the changes of microstructure mainly include large bonded particles, a small number of pores, and a dense particle arrangement. The reason why C2 has such microstructure changes is that the hydration of cement is fast. 
Many gels (mainly CaO•SiO 2 •YH 2 O) are produced from cement hydration at this early curing stage [15]. e gels coat soil particles, bond soil particles, and fill pores between soil particles, which change the microstructure and improve the strength of soil. Based on the differences of microstructure between LF-YRAS and C2, how the gels change microstructure and improve strength can be summarized. e main functions of the gels include coating, bonding, and filling. First of all, a small amount of produced gels attach themselves to soil particles as shown in Figure 6(a). Secondly, as the amount increases, enough gels can completely coat the soil particles. Of course, gels continuously fill the pores between particles while coating. Finally, as more pores are filled with gels, the coated particles are gradually bonded together firmly. e gels have great bonding strength and cohesion once they harden, so the bonding between coated particles is strong. e strong bonding makes the soil particles not easy to turn and slide when subjected to load, so the soil has higher strength. To see the gels in C2 more clearly, SEM tests with larger magnification were performed, and the results are shown in Figures 6(c) and 6(d). It is known that the main product of cement hydration reaction is CaO•SiO 2 •YH 2 O (abbreviated as C-S-H), which accounts for about 70%. As evidence of hydration, lots of fibrous C-S-H (Figure 6(c)) and reticulate C-S-H (Figure 6(d)) can be found in C2. By analyzing the composition of cement and lime-fly ash, it can be known that both cement hydration and fly ash-lime reaction can produce C-S-H gels. However, by comparing Figures 6(a) and 6(b), it can be found that there are few gels in LF-YRAS, while there are many gels in C2. erefore, it can be inferred that the main source of C-S-H gels in C2 is cement hydration at this early stage (curing for 7 d). EDS Test. e EDS test results are shown in Figure 7. In each SEM view, three test points are selected. e points are selected on gels attached to soil particles or gels between soil particles. On each point, an energy spectrum can be obtained. e function of Spectrum 2 and Spectrum 3 is to check the correctness and representativeness of Spectrum 1. In Figure 7, Spectrum 2 and Spectrum 3 are both nearly the same as Spectrum 1, which proves the representativeness of Spectrum 1. For brevity and clarity, Spectrum 2 and Spectrum 3 are not shown. Based on the EDS test results, the contents of different elements are shown in Figure 8. Figure 8. It can be seen from Figure 8 that, with the increase of curing time, the changes of element in C2 include appearance of C (new element), significant increase of Ca, increase of O, decrease of Mg, Al, Si, K, and Fe, and disappearance of Na and Ti. First, an important change is the appearance of C, by which it can be predicted that carbonization occurs in C2. By analyzing the composition, it can be known that the source of carbonization is Ca(OH) 2 . As a product of carbonization, CaCO 3 has the properties of being insoluble in water, high strength, and good water stability, which contributes to increasing soil strength. Second, another important change is the significant increase in Ca. After curing for 7 d, the atom content of Ca is 9.56%, while after curing for 28 d the atom content increases to 34.18%. e reason for the significant increase in Ca includes not only carbonization but also ion exchange. 
With the increase of Ca 2+ in pore water, Ca 2+ continuously exchanges with Na + , K + , and other ions attached to soil particles, which Advances in Civil Engineering leads to the gradual increase of Ca 2+ and a decrease of other ions on soil particles. After the replacement of univalent ion (Na + and K + ) by divalent ion (Ca 2+ ), the thickness of bound water of soil particles becomes thinner, and thus, the spacing between soil particles becomes smaller, which reduces soil plasticity and increases soil strength. Finally, although the absolute amounts of Mg, Al, Fe, and Si almost no change, their relative amounts (%) must decrease as the absolute amounts of O and Ca increase significantly. MIP Test. According to the relationship between injection pressure and injection amount in the MIP test, the cumulative pore volume percentage curve of stabilized YRAS is obtained, as shown in Figure 9. It can be seen from Figure 9 that the curves show a significant change at the point where the cumulative pore volume percentage equals 90%, while the corresponding pore diameters at this point (d 90 ) are different. e d 90 of LF-YRAS is 10-20 μm, while the d 90 of C2 is about 5 μm. In addition, it can be seen from Figure 9 that the curve of LF-YRAS is steeper and more biased to the right side than the curve of C2. Steeper curve means that LF-YRAS has more concentrated pore size distribution, and a curve more biased to the right side means LF-YRAS has more pores with larger size, which is consistent with the results of the SEM test. Horpibulsuk classified the pores of stabilized soil into 5 types according to pore size, which includes pore with a size smaller than 0.01 μm (pore in soil aggregate), pore with 0.01-0.1 μm size (pore between soil aggregate), pore with 0.1-1 μm size, pore with 1-10 μm size, and pore with a size larger than 10 μm [20,24]. Referring to the method of Horpibulsuk, the pores of stabilized YRAS are classified, and the result is shown in Figure 10. It can be seen from Figure 10 that, in stabilized YRAS, the pores with 1-10 μm size have the largest percentage: 54.45% in LF-YRAS (7 d), 61.1% in C2 (7 d), and 47% in C2 (28 d). In addition, it can be clearly found that LF-YRAS has more large pores than C2. In LF-YRAS, the pores with a size larger than 10 μm account for 37.17%, whereas in C2 the pores with a size larger than 10 μm only account for 9.64% (7 d) and 11.27% (28 d), which is consistent with the qualitative analysis of the previous SEM test. Horpibulsuk believes that cement hydration products mainly fill pores with 0.1-10 μm size [20], and Du believes that cement hydration products mainly fill pores with 1-10 μm size [56]. However, it can be seen from Figure 10 that, for stabilized YRAS, the cement hydration products mainly fill the pores with a size larger than 10 μm at the early stage (before 7 d) and fill the pores with 1-10 μm size at the later stage (7 d-28 d). At the early stage (before 7 d), the pores with a size larger than 10 μm account for 37.17% in LF-YRAS, while the percentage is reduced to 9.64% in C2. At the later stage (7 d-28 d), the percentage of pores with 1-10 μm size is reduced from 61.1% to 47% in C2. In summary, the microscopic mechanism of cement improving the strength of LF-YRAS can be ascertained based on the test results of SEM, EDS, and MIP. 
At the early stage (before 7 d), many gels (mainly C-S-H) are produced by the rapid hydration reaction of cement, and the gels coat soil particles, bond soil particles, and fill the pores between soil particles, which changes the microstructure of LF-YRAS. The microstructural changes include larger bonded particles, a smaller number of pores, and a denser particle arrangement. The percentage of large pores (larger than 10 μm) decreases from 37.17% to 9.64%. These changes work together to improve the early strength of LF-YRAS. At the later stage (7 d-28 d), with the increase of curing time, more hydration reactions and pozzolanic reactions take place in stabilized YRAS, which further changes the soil microstructure. The percentage of pores of 1-10 μm size is reduced from 61.1% to 47% in C2. In addition, carbonization and ion exchange occur in C2, which produces CaCO3 and thins the bound water of the soil particles. In short, the hydration reaction, pozzolanic reaction, carbonation, and ion exchange work together to improve the later strength of stabilized YRAS. Conclusions In this paper, the effect of cement on improving the strength of LF-YRAS and its microscopic mechanism have been experimentally studied. The main conclusions are listed as follows: (1) LF-YRAS has a low early UCS, which does not meet the requirements for subsequent construction. Adding cement can effectively improve the UCS of LF-YRAS, especially at the early stage. Meanwhile, cement can improve the water stability of LF-YRAS. (2) For stabilized YRAS, there is a good linear correlation between normalized UCS and curing time, which can be used to predict the strength at different curing times based on the 28 d strength. (3) After curing for 7 d, LF-YRAS has the microscopic characteristics of small soil particles, a large number of pores, and a loose particle arrangement. Gels produced from cement hydration coat soil particles, bond soil particles, and fill the pores between soil particles, which changes the microstructure of LF-YRAS. The main changes include larger bonded particles, a smaller number of pores, and a denser particle arrangement. These changes contribute to improving the early strength of LF-YRAS. (4) With the increase of curing time, carbonation and ion exchange reactions occur in stabilized YRAS. The production of CaCO3 and the thinning of the soil-bound water help to improve the later strength of the soil. (5) In stabilized YRAS, the pores of 1-10 μm size have the largest proportion. After adding cement, the proportion of pores larger than 10 μm decreases significantly. Cement hydration products mainly fill the pores larger than 10 μm at the early stage (before 7 d) and fill the pores of 1-10 μm size at the later stage (7 d-28 d). Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
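The pore-size analysis described above (reading d90 from the cumulative intrusion curve and grouping pores into Horpibulsuk's five size classes) can be sketched in code. This is a minimal illustration rather than the authors' processing script: the diameters and volume fractions below are hypothetical placeholders, and only the bin edges follow the classification quoted in the text.

```python
import numpy as np

# Horpibulsuk-style pore-size classes (diameters in micrometres), as quoted in the text.
BINS = [(0.0, 0.01), (0.01, 0.1), (0.1, 1.0), (1.0, 10.0), (10.0, np.inf)]
LABELS = ["<0.01 um", "0.01-0.1 um", "0.1-1 um", "1-10 um", ">10 um"]

def classify(diameters_um, volume_fractions):
    """Sum the pore-volume fraction falling in each size class."""
    totals = np.zeros(len(BINS))
    for d, v in zip(diameters_um, volume_fractions):
        for i, (lo, hi) in enumerate(BINS):
            if lo <= d < hi:
                totals[i] += v
                break
    return dict(zip(LABELS, totals.round(3)))

def d90(diameters_um, cumulative_volume_pct):
    """Pore diameter at which the cumulative intruded volume reaches 90%.
    Assumes both arrays are ordered from small to large diameters."""
    return float(np.interp(90.0, cumulative_volume_pct, diameters_um))

# Hypothetical MIP output: pore diameters and incremental volume fractions.
d = np.array([0.005, 0.05, 0.5, 5.0, 15.0, 30.0])
v = np.array([0.02, 0.05, 0.10, 0.50, 0.23, 0.10])
print(classify(d, v))
print("d90 =", d90(d, np.cumsum(v) * 100), "um")
```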
7,165.2
2020-12-10T00:00:00.000
[ "Materials Science" ]
Bianchi Type - III Charged Fluid Universe in Brans-Dicke Theory of Gravitation – We investigate the spatially homogeneous Bianchi Type-III space-time with an electromagnetic field tensor and a relativistic charged perfect fluid in the Brans-Dicke (B-D) theory of gravity. Solutions have been obtained by using a general approach to solving the partial differential equations. It is observed that a convergent and isotropic solution of the metric functions can be derived together with the components of the vector potential. I. INTRODUCTION In recent years there has been a lot of interest in several alternative theories of gravitation, of which the most important is the scalar-tensor theory of gravitation formulated by Brans-Dicke [1]. This theory of gravity is one of the most competent theories due to its vast cosmological implications [2]. In this theory, the scalar field has the dimensions of the inverse of the gravitational constant, and its role is confined to its effect on the gravitational field equations. This theory of gravity is mediated by a scalar field φ in addition to the usual metric tensor field g_ij present in Einstein's theory. Among the various modifications of general relativity, the B-D theory of gravity is a well-known example of a scalar-tensor theory in which the gravitational interaction involves a scalar field and the metric tensor. In recent years, the study of Bianchi type models in the context of B-D theory has attracted many authors: Pawar et al. [3], Sharif et al. [4], Kandalkar et al. [5], Raut et al. [6], Katore et al. [7]. A detailed discussion of B-D cosmology is given by Singh et al. [8]. Lorenz-Petzold [9] studied exact Bianchi type-III solutions in the presence of an electromagnetic field. Bianchi type-I space-time in scalar-tensor theory has been investigated by Kumar et al. [10]. Adhav et al. [11] studied the LRS Bianchi type-II cosmological model with anisotropic dark energy, and Katore et al. [12,13] explored Bianchi type-V and plane symmetric space-times filled with dark energy models in B-D theory. A Bianchi type-III dark energy model in scalar-tensor theory of gravitation was explained by Naidu et al. [14]. Adhav et al. [15] explored a Bianchi type-III cosmological model with negative constant deceleration parameter in B-D theory of gravity in the presence of a perfect fluid. Shamir et al. [16] have studied anisotropic dark energy Bianchi type-III cosmological models in B-D theory of gravity. The Brans-Dicke field equations are given in the standard form, where ω is a dimensionless coupling constant and the function φ is known as the B-D scalar field. Karade and Solanke [17] investigated a Bianchi type-III universe filled with a perfect fluid and a scalar field coupled with electromagnetic fields in f(R,T) theory of gravity. Recently Bhoyar et al. [18] discussed the Bianchi type-III and Kantowski-Sachs cosmological models containing a magnetic field with variable cosmological constant. This motivates us to investigate the Bianchi type-III charged fluid universe in B-D theory of gravitation. The paper is organized as follows: Section II deals with the derivation and solutions of the field equations. A brief summary is given in Section III. II. THE METRIC AND FIELD EQUATIONS Here, we consider a spatially homogeneous Bianchi Type-III space-time in the form (1). Electromagnetic field: The energy-momentum tensor for the electromagnetic field is written in terms of the electromagnetic field tensor, where V_i is a four-potential vector.
To achieve compatibility with the space-time (1), we assume the electromagnetic vector potential in a form adapted to the metric. Noting (4) and (5), we can easily deduce the non-vanishing components of the field tensor, and from equations (4), (5), and (6) the corresponding quantities follow. Using (3) we can deduce the components of the energy-momentum tensor. The stress-energy tensor of a perfect fluid with density ρ, pressure p, and four-velocity u_i is given in the usual form. This equation, with different combinations of i and j, gives the following equations. From the vanishing components of the Einstein tensor, using equations (2) and (4), we deduce a relation in which D is an unknown function of t; integrating this with respect to t gives the corresponding expression. With the aid of equation (12), we can rewrite equation (10). From equations (15a), (15b), and (15c) we have, after integrating with respect to t, the metric functions. Now, considering the non-vanishing components of the Einstein tensor, from equation (2) we derive further relations. Integrating (19e) with respect to t, we obtain an expression in which k_7 is a constant. From equations (19a) and (19b), from equations (19b) and (19c), and from equations (19c) and (19a), we obtain additional relations, and integration of (20a) and (20e) yields the remaining unknowns. We can then express the values of the metric functions using equations (15). III. CONCLUSION In this paper, we have presented the Bianchi Type-III space-time with an electromagnetic field tensor and a relativistic charged perfect fluid in the context of the Brans-Dicke theory of gravity. We have derived and solved the gravitational field equations corresponding to B-D theory. It is observed that convergent, non-singular, isotropic solutions can be obtained along with the components of the vector potential. It is also interesting to note that the investigated models are free from singularity. Here T_ij is the energy-momentum tensor for the perfect fluid, which satisfies the conservation equation, and the notations have their conventional meanings.
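For reference, the standard form of the Brans-Dicke field equations, which involve the dimensionless coupling constant ω and the scalar field φ introduced above, can be written as follows. The sign conventions and index placement here follow common textbook usage and may differ from those used by the authors.

```latex
% Standard Brans-Dicke field equations (conventional textbook form; the
% authors' sign conventions and index placement may differ).
\begin{align}
  G_{ij} &= \frac{8\pi}{\phi}\, T_{ij}
    + \frac{\omega}{\phi^{2}}\left(\phi_{,i}\,\phi_{,j}
      - \tfrac{1}{2}\, g_{ij}\,\phi_{,k}\,\phi^{,k}\right)
    + \frac{1}{\phi}\left(\phi_{;i;j} - g_{ij}\,\Box\phi\right), \\
  \Box\phi &= \frac{8\pi}{3 + 2\omega}\, T ,
\end{align}
% where G_{ij} is the Einstein tensor, T_{ij} the matter energy-momentum
% tensor, T its trace, and a semicolon denotes covariant differentiation.
```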
1,192.8
2018-10-31T00:00:00.000
[ "Physics" ]
Pharmacological Inhibition of Amyloidogenic APP Processing and Knock-Down of APP in Primary Human Macrophages Impairs the Secretion of Cytokines It has been previously shown that the amyloid precursor protein (APP) supports the innate immune defense as an immune receptor. Amyloid β (Aβ) peptides seem to have properties of an antimicrobial peptide and can act as opsonins. In APP-deficient mouse models, a reduced secretion of cytokines has been observed. Still, it is unclear whether this can be attributed to the lack of APP or to the missing secretion of Aβ peptides. We inhibited the secretion of Aβ peptides in primary human monocyte-derived macrophages with the γ-secretase inhibitor N-[N-(3,5-Difluorophenacetyl)-L-alanyl]-S-phenylglycine-t-butyl-ester (DAPT) or the β-secretase inhibitor GL-189. Alternatively, we knocked down APP by transfection with siRNA. We measured tumor necrosis factor α (TNFα), interleukin 6 (IL-6), and interleukin 10 (IL-10) by enzyme-linked immunosorbent assay (ELISA) and evaluated the phagocytotic activity by flow cytometry. We observed reduced concentrations of TNFα and IL-6 in the media of APP k/d macrophages and after inhibition of the β- or γ-secretase, especially after additional immunological activation with lipopolysaccharide (LPS). Secretion of IL-10 was increased after pharmacological inhibition of APP processing when the macrophages were not immunologically activated but was decreased during LPS-induced inflammation in APP k/d macrophages. No changes of the phagocytotic activity were observed. We conclude that macrophage APP and Aβ peptides support the initiation of an immune response and are involved in the regulation of TNFα, IL-6, and IL-10 secretion by human monocyte-derived macrophages.
Keywords: amyloid precursor protein, amyloid, Abeta, Alzheimer, cytokine, immune system, secretase, BACE BACKGROUND The amyloid precursor protein (APP) is expressed on nearly every cell type and the amyloid β (Aβ) peptides, which are generated by sequential cleavage of APP by the β-and γ-secretase, are known to aggregate to plaques in the brains of patients with Alzheimer's disease (AD) (1). However, there are individuals with a considerable amount of amyloid plaques who do not show signs of dementia. Furthermore, preventing the agglutination of Aβ peptides in plaques by Aβ-specific antibodies does not stop the progress of dementia (2). Therefore, the causal association of Aβ peptides and Alzheimer's disease may not be as immediate as assumed for the last decades. Although APP and its cleavage products have been intensely investigated in the context of AD, little is known about their physiological functions and their role within the immune system. Inflammatory processes such as the activation of microglia and peripheral macrophages are increasingly considered in the research of AD pathophysiology (3)(4)(5). However, it is still not clear, whether neuroinflammation is the cause or the consequence of AD and whether it is harmful or beneficial (3,6,7). The anti-amyloid antibody Aducanumab was associated with an increased incidence of urinary tract and lung infections in the group with the highest dosage of 10 mg/kg (8). Also, a knock-out of APP or the β-site amyloid cleaving enzyme (BACE-1) in mice was associated with a reduced activity of microglia and a reduced secretion of pro-inflammatory cytokines (9)(10)(11)(12)(13). Likewise, reduced concentrations of Aβ peptides in cerebrospinal fluid (CSF) were also found during brain infections (14,15). One reason for this finding might be that Aβ peptides bind and agglutinate microorganisms and are therefore no longer measurable in the CSF. Astrocytes express higher amounts of the APP processing enzymes BACE-1 and presenilin 1 upon infection with C. pneumoniae (16). Therefore, an immunological function of APP and Aβ peptides can be assumed. The question arises, whether Aβ peptides only support the immune system as opsonin and antimicrobial agent or if they have additional functions as co-stimulatory factors that induce a pro-inflammatory immune response. During inflammation, macrophages secrete a plethora of cytokines (20). Key cytokines indicating a pro-inflammatory reaction are besides others interleukin (IL)-1β, IL-12A, IL-12B, and IL-23, IL-6 and tumor necrosis factor α (TNFα). One of the most important antiinflammatory cytokines of macrophages is IL-10 (20). We tested, whether the autologous Aβ peptides, secreted by macrophages during inflammatory processes support the immune defense by increasing the secretion of IL-6 and TNFα and by improving the phagocytosis of polystyrene particles. Preparation and Cultivation of Monocytes Monocytes were isolated from buffy coats of anonymous healthy erythrocyte donors (Transfusionsmedizin, Suhl, Germany) by density gradient centrifugation and adhesion to polystyrene cell culture dishes in Dulbecco's modified minimal essential medium (DMEM, Pierce biotechnology, Rockford, USA) without serum. As the buffy coats were bought at the blood bank, no ethics approval was necessary. Nine Mio PBMC per well were seeded in a 12-well plate and allowed to adhere for 90 min. Lymphocytes were removed by thorough washing with 4 • C Dulbecco's modified phosphate buffered saline (PBS). 
Cultures only included monocytes of a single donor. All experiments were replicated with the indicated number of donors (biological replicates). Monocytes were then cultivated at 37 • C and 5% CO 2 in Roswell Park Memorial Institute (RPMI) medium (Promocell, Heidelberg, Germany) containing 10% fetal calf serum (FCS, Biochrome, Berlin, Germany) and differentiated into macrophages by adding 40 ng/ml granulocyte-monocyte colony stimulating factor GM-CSF (Immunotools, Friesoythe, Germany). 50% of the medium was exchanged after four days. To avoid interference of endogenous Aβ peptides with those contained in FCS, the medium was changed to serum-free AIM-V medium (Thermo scientific, Dreieich, Germany) at the 7th day in vitro (div). An inflammatory reaction was induced either by adding 1 µm polystyrene particles (7/cell) (Polysciences, Hirschberg, Germany) or 10 ng/ml lipopolysaccharide (LPS, Sigma-Aldrich, Munich, Germany) to the cell culture medium at the 9th div (secretase inhibitors) or 8th div (siRNA transfection), 24 h before measuring cytokine secretion or phagocytotic activity. For a timeline of the experimental procedures see Transfection of Macrophages Macrophages were transfected with validated silencer R select siRNA directed toward APP (ID s1500, Thermo Scientific, Dreieich, Germany) using the viromere blue transfection system (Lipocalyx, Halle, Germany) according to the manufacurer's instructions. On the 7th div. the medium was exchanged with serum free AIM-V medium. APP siRNA was diluted to 2.8 µM with buffer BLUE. Viromer R BLUE was mixed with buffer BLUE at a ratio of 1:90 and added to the siRNA dilution. After 15 min of incubation, 100 µl of the siRNA mix was added to 1 ml of cell culture medium resulting in a final siRNA concentration of 0.14 µM. Non-silencing silencer R select negative control No. 1 siRNA (Thermo Scientific, Dreieich, Germany) served as control (mock). All experiments were carried out in duplicates. Enzyme Linked Immunosorbent Assay (ELISA) of TNFα, IL-6, IL-10 Phagocytosis-Assay-Flow Cytometry The concentrations of TNFα, IL-6, and IL-10 in the conditioned macrophage media were quantified 48 h after transfecting the macrophages with APP siRNA or 72 h after adding the secretase inhibitors by commercially available antibody sets (Catalog numbers: IL-6: 31670069, IL-10: 31670109, TNFα: 31673019; all Immunotools, Friesoythe, Germany). Optimized working concentrations of the respective antibodies were established before the experiments. All measurements were run in duplicates. The samples were diluted to be measured within the detection range of the assays and the coefficient of variation of all measurements was below 20%. Immunoprecipitation, Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis (SDS-PAGE), and Immunoblot The concentrations of APP and Aβ peptides in macrophage cultures were assessed with SDS-PAGE followed by immunoblotting. For the measurement of APP, cells were lysed with the radioimmunoprecipitation assay (RIPA)-buffer (50 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), 150 mM NaCl, 1%(v/v) Igepal, 0.5%(w/v) sodiumdeoxycholate, 0.1% SDS and 1 tablet Complete Mini protease inhibitor cocktail (Roche, Germany) per 10 ml. The protein content of cell lysates was assessed with the bicinchoninic acid (BCA)-assay (Pierce Biotechnology, Rockford, USA) and a standardized amount of protein was boiled with sample buffer and loaded on 7,5 % SDS-pages according to Laemmli et al. (33). 
The consecutive immunoblot on polyvinylidene difluoride (PVDF) membranes was performed according to the method described by Towbin et al. (34). The immunolabeling was carried out with the anti-APP antibody 22C11 (Merck-Millipore, Darmstadt, Germany) followed by incubation with the horseradish peroxidase labeled goat-anti-mouse antibody (Merck-Millipore, Darmstadt, Germany). Membranes were developed with ECL R advance (GE-Healthcare, Freiburg, Germany) and recorded with the Amersham Imager 600 (GE-Healthcare, Freiburg, Germany). A quantification of the blots was performed on the bases of band intensity normalized to the density of the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) band with the quantity one software (Bio-Rad, Munich, Germany). The concentrations of Aβ peptides in cell culture medium were evaluated according to Wiltfang et al. Aβ peptides were immunoprecipitated with the N-terminal anti-Aβ peptide antibody 1E8 and separated on Tris/Bicine SDS-Pages containing 8 M urea (35). Peptides were transferred to PVDF membranes using a semi-dry westernblot with a discontinuous buffer-system (35). Immunolabeling was performed with the anti-Aβ antibody clone 1E8 and the signal was enhanced by a two-step labeling with a biotinylated goat-anti-mouse antibody and streptavidine conjugated horseradish peroxidase. Finally, membranes were developed with ECL R advance (GE-Healthcare, Freiburg, Germany) and recorded with the Amersham Imager 600 (GE-Healthcare, Freiburg, Germany). Quantification of the blots was carried out with the quantity one software (Bio-Rad, Munich, Germany). Statistical Analysis Statistical analysis was carried out using Prism 6.0 (GraphPad Software Inc., La Jolla, CA, USA). As each experiment was carried out with cells from the same donor, pairwise comparisons were calculated with the ratio paired t-test. Results are presented as mean with standard deviations and were considered to be significant at a p < 0.05. A p-value between 0.05 and 0.1 was referred to as a trend. Reduced Secretion of TNFα and IL-6 After Inhibition of APP Processing Primary human monocyte derived macrophages were cultivated in serum-free media. The secretion of Aβ peptides was inhibited either by addition of the tripartite β-secretase inhibitor T GL−189 in a concentration of 500 nM or 10 µM of the γ-secretase inhibitor DAPT. As expected, both treatments reduced the secretion of Aβ 1−40 and Aβ 1−42 considerably (Figures 1A,B). The western blot also suggests, that 2.5 µM DAPT does not sufficiently reduce the secretion of Aβ peptides. The amount of Aβ −3−40 /Aβ 2−40 , which co-migrate in the same lane, remained unchanged as recently described by Oberstein et al. (36). The viability of the cells was not compromised as assessed by measurement of the lactate dehydrogenase (LDH) release into the conditioned media and the reduction of MTT by vital cells. TNFα, IL-6, and IL-10 were determined by ELISA 24 h after the macrophages were immunologically activated either by 10 ng/ml LPS or 1 µm polystyrene particles in a concentration of 7 particles/cell. Reduced concentrations of IL-6 were found 72 h after inhibition of Aβ peptide secretion by T GL−189 and DAPT in macrophage cultures without immunological activation as well as in those activated with polystyrene particles or LPS (Figure 2). In cultures activated by LPS T GL−189 and DAPT also reduced the concentration of TNFα (Figure 2). 
Without stimulation and after addition of polystyrene particles, the reduced secretion of TNFα was not statistically significant (p = 0.18 and p = 0.09, respectively). Interestingly, IL-10 was found elevated after inhibition of APP processing, but only in cultures without an immunological activation (Figure 2). A summary of the results is presented in Table 1. FIGURE 2 | Reduced secretion of IL-6 and TNFα after inhibition of APP processing. IL-6, TNFα, and IL-10 were determined by ELISA in cultures of primary human monocyte-derived macrophages (n = 5). Cultures were unstimulated (upper row), stimulated with 1 µm polystyrene particles (seven particles/cell) (middle row), or stimulated with 10 ng/ml LPS (bottom row). The secretion of Aβ peptides was inhibited with 500 nM of the β-secretase inhibitor T GL−189 or 10 µM of the γ-secretase inhibitor DAPT. Results are presented as mean with standard deviation. ELISAs were carried out in duplicate. Each point represents a biological replicate and is the mean value of the duplicates. The significance of the differences was evaluated with the ratio-paired t-test between cultures treated with secretase inhibitors and those without. (*) p < 0.1 (trend); *p < 0.05; **p < 0.01. Reduced Secretion of IL-6 and IL-10 After Inhibition of APP Expression To discriminate the impact of APP from that of the Aβ peptides, the expression of APP was inhibited by an siRNA knock-down of APP in the same macrophage cultures. Transfection with a non-binding siRNA (mock) served as control and viability was tested as indicated above (Supplementary Figure 2). The reduced concentration of APP in cell lysates 72 h after the transfection is shown in Figures 1C,D. The medium remained on the cells for 24 h, starting 48 h after the transfection, giving 72 h of incubation with siRNA. The knock-down of APP reduced the concentration of IL-6 and TNFα (trend) in the media of LPS-activated macrophages (Figure 3). Unexpectedly, the secretion of IL-6 and IL-10 was also reduced after transfection with non-binding siRNA. However, the effect of the transfection with siRNA directed toward APP was significantly stronger than that of the transfection with non-binding siRNA. The transfection with APP siRNA did not change the cytokine secretion in cells that were unchallenged or activated by phagocytosis of polystyrene particles (Figure 3). While the pharmacological inhibition of APP processing resulted in increased concentrations of IL-10 in LPS-activated cultures, the knock-down of APP reduced the concentration of IL-10 in the medium (Figure 3). Again, no change of IL-10 was found in unchallenged or particle-challenged cultures of APP k/d macrophages (Figure 3). A summary of the results can be found in Table 1. TABLE 1 | Impact of β-/γ-secretase inhibition and APP knockdown on cytokine secretion (IL-6, TNFα, IL-10) - summary. The table summarizes the data presented in Figures 2, 3. ↓ significant reduction of cytokine secretion; ↑ significant increase of cytokine secretion; (↓) trend for a reduced secretion of cytokine; ↔ no change of cytokine secretion. No Change in the Phagocytic Activity of Monocytes After Inhibition of APP Processing or Knock-Down of APP To evaluate the impact of APP expression and Aβ peptide secretion on phagocytosis, APP processing was either pharmacologically inhibited or APP was knocked down by siRNA as detailed above. After establishing the optimal concentration of fluorescent particles and time of measurement, phagocytosis was determined by flow cytometry 240 min after
adding fluorescent 1 µm microparticles (20 particles/cell) to the cultures (Supplementary Figure 4). However, neither the inhibition of the β- or γ-secretase nor the APP knock-down affected the amount of intracellular particles, as indicated by the mean fluorescence intensity (MFI) or by the fraction of macrophages associated with at least one fluorescent particle (Figure 4). FIGURE 4 | No change of the phagocytotic activity after inhibition of APP processing or knock-down of APP. Macrophages were treated with 500 nM of the β-secretase inhibitor T GL−189 or 10 µM of the γ-secretase inhibitor DAPT (n = 9). Alternatively, APP was knocked down by siRNA (n = 5). Phagocytotic activity of the macrophages was determined by flow cytometry 240 min after addition of fluorescent 1 µm polystyrene particles. Results are presented as mean with standard deviation of the measured mean fluorescence intensity (MFI) of the macrophages after phagocytosis. Measurements were carried out in duplicate. Each point represents a biological replicate and is the mean value of the duplicates. Phagocytosis was also evaluated by comparing the fraction of macrophages that contained at least one fluorescent particle, normalized in the same way. DISCUSSION We showed that the pharmacological inhibition of APP processing by the tripartite β-secretase inhibitor T GL−189 and an established γ-secretase inhibitor (DAPT) reduced the secretion of IL-6 and increased the anti-inflammatory IL-10 in primary human monocyte-derived macrophages of healthy donors. During LPS-induced inflammation, reduced concentrations of IL-6 and TNFα were observed. After an APP knock-down, IL-6 and IL-10 were reduced in macrophages which were activated by LPS. Interestingly, the changes of cytokine expression induced by APP knockdown and by inhibition of APP processing differ from each other. APP knockdown led to reduced cytokine secretion only after stimulation with LPS. After pharmacological inhibition of the generation of amyloidogenic Aβ peptides, the secretion of IL-6 and TNFα was reduced under all conditions; however, for TNFα the effect was only significant after stimulation with LPS. Therefore, it seems that IL-6 is more strongly affected by alterations in APP metabolism than TNFα and IL-10. Additionally, the effects produced by β-/γ-secretase inhibition and APP knockdown seem to be increased under inflammatory conditions induced by LPS. This could explain why we only see non-significant reductions of TNFα after β-/γ-secretase inhibition under control conditions and stimulation with polystyrene particles. It could also explain why we see changes of cytokine secretion after APP knockdown only after stimulation with LPS. A very interesting finding is the increased secretion of IL-10 in unstimulated cultures after the inhibition of APP processing. This increase is not visible after stimulation with polystyrene particles and LPS, probably because the anti-inflammatory effect of lowering the Aβ peptide production is superimposed by the phagocytic and inflammatory challenge. In contrast, IL-10 is reduced after APP knockdown and stimulation with LPS. This indicates that the reduction of Aβ peptides has an anti-inflammatory effect, whereas the reduction of APP expression reduces the secretion of both pro- and anti-inflammatory cytokines. An explanation for this difference could be that APP functions as an immune receptor (11). Therefore, the knockdown of APP not only inhibits the generation of Aβ peptides but also reduces the expression of a cellular receptor for immunological signals. The consequence is that the macrophage cannot adequately react to the pro-inflammatory activation
with LPS, resulting in an impaired secretion of all cytokines including IL-10. A major limitation of this work is the incomplete inhibition of APP expression and processing. Neither the pharmacological inhibition of APP cleavage nor the knockdown of APP resulted in a complete absence of Aβ peptides or APP, respectively. This probably leads to a considerable underestimation of the effects. The reasons for this are the limitation of inhibitor concentrations by toxicity and unwanted side effects, as well as the existence of additional β- and γ-secretases not inhibited by the applied substances, such as meprin-β or cathepsin B (36)(37)(38). Primary human macrophages are, besides neurons, probably the most difficult cells for transfection experiments. Therefore, several different techniques including lipofection and electroporation were tested and rejected before the transfection with viromers led to acceptable results. A rate of transfected cells of about 80% was measured by transfection with fluorescent siRNA and stealth siRNA. The transfection with siRNA directed at APP reduced the expression of APP to ∼25% in our experiments. Interestingly, the transfection with non-binding siRNA, meant as a control, did reduce the secretion of IL-6 and IL-10 in macrophage cultures activated with LPS. This effect was reproducible with a second non-binding siRNA and was not caused by reduced viability of the cells. However, we are currently not able to explain this finding. To increase the probability that our reported findings are not due to pharmacological side effects, we used two different substances (T GL−189 and DAPT) with two different mechanisms (inhibition of the β- and γ-secretase). It was described previously that the application of GL-189 as a tripartite substance (T GL−189) reduces unspecific side effects by directing the pharmacophore to the catalytic center of the β-secretase (31,32,36). The reported reduction of IL-6 and TNFα as well as the increased secretion of IL-10 are therefore very probably induced by the reduced production of Aβ peptides. Blockage of the β-secretase pathway normally increases processing via the α-secretase pathway, resulting in increased concentrations of sAPPα (31). While we have not measured sAPPα, our results still suggest that macrophage sAPPα is not able to replace the missing Aβ peptides. This contrasts with earlier publications, which found that sAPPα activates microglia (39)(40)(41). However, this discrepancy might be an issue of concentration, and the impact of amyloid peptides was not assessed in the former experiments. It has long been known that Aβ fibrils and oligomers activate macrophages and microglia (41,42). However, our data suggest that not only external Aβ but also the Aβ peptides produced by macrophages themselves have an activating effect on the secretion of pro-inflammatory cytokines. As a consequence, the missing ability to produce Aβ peptides impaired the pro-inflammatory reaction induced by LPS. We and others previously reported that the expression of APP and the secretion of Aβ peptides by monocytes/macrophages depend on their immunological activation (22,24,43). Expression of APP and secretion of Aβ peptides were increased during phagocytosis and LPS-induced inflammation. In this context it seems possible that the Aβ peptides are part of a self-energizing circuit initiating an immune response.
Further functions of Aβ peptides within the immune defense as antimicrobial agent and opsonine have been shown (26,27,30). The reason, why an inhibition of Aβ peptide generation had no impact on phagocytosis although it changed the concentrations of IL-6, TNFα, and IL-10 in this study might be that phagocytosis is strongly affected by opsonines and the expression of receptors involved in phagocytosis but poorly by the investigated cytokines (44). Furthermore, the observed changes in cytokine levels after stimulation with polystyrene particles are in a 10-20% range. Probably the error of measurement in the phagocytosis assay is too high to detect such subtle changes in macrophage activation. Effects caused by Aβ peptides as an opsonine could probably not be seen in this study because the changes in Aβ peptide concentrations were too small to induce a measurable effect. When describing an opsonizing activity of Aβ peptides, Condic et al. used Aβ peptide concentrations of 1 mg/ml for the opsonization (25). The change in Aβ peptide concentration in our experiments was below 1 µg/ml. Kumar and his colleagues demonstrated that APP knockout mice had a reduced survival, while mice transgenic for APP had an improved survival in a model of infectious meningitis (30). Fitting into this hypothesis, an increased expression of APP, an accumulation of Aβ peptides in the brain and reduced concentrations of Aβ peptides in the CSF were not only observed in patients with Alzheimer's disease but also with meningitis and other inflammatory diseases (14,15,(45)(46)(47). Regarding AD this would indicate, that the Aβ peptide deposition could be the consequence and not the cause of neuroinflammation. This idea is supported by epidemiological data showing a reduced risk of AD in patients using non-steroidal anti-inflammatory drugs (48). TNFα antagonists also seem to improve cognitive performance in AD patients (49,50). Some even hypothesize an infectious agent as the cause of AD (51)(52)(53)(54)(55). Pharmacological inhibition of Aβ peptide generation reduced Aβ 1−x but not N-terminal modified Aβ peptides. This indicates, that the Aβ 1−x species are responsible for the observed differences. As we did not analyze the aggregation state of the Aβ peptides in our cultures we are not able to differentiate whether Aβ monomers, oligomers or fibrils are responsible for the observed effects. However, Aβ aggregation takes place within few hours and aggregation of Aβ peptides in cultures of macrophages has been shown (56). Therefore, it seems very likely, that at least part of the secreted Aβ peptides aggregate to oligomers and fibrils. Several receptors expressed by macrophages have been shown to bind Aβ peptide fibrils or oligomeres, [e.g., CD14, CD36, macrophage scavenger receptor 1, N-formylpeptide receptor like-1 and APP (11,57)]. Binding of these receptors triggers downstream thyrosin kinases, release of Ca ++ and ultimately activation of NFkB and CREB (41,(57)(58)(59)(60)(61)(62)(63)(64). In microglial cultures of APP knock-out mice as well as in brains and intestines of these mice a reduced motility of macrophages as well as reduced concentrations of several cytokines, including IL-6, TNFα, and IL-10 were observed which is in accordance to our findings (9)(10)(11). However, due to their methodology, the authors could not discriminate between the effects caused by APP and those caused by Aβ peptides. Consequently, they discuss the role of APP as a receptor for LPS or a transcription factor. 
The different effects of the APP knock-down and pharmacological inhibition of APP processing concerning the IL-10 concentrations after stimulation with LPS support this assumption. Pro-and anti-inflammatory activities are reduced in APP k/o /APP k/d macrophages. When APP as a cell bound protein remains intact, the pharmacological inhibition of Aβ peptide generation removes a pro-inflammatory peptide and might result in a more anti-inflammatory state of the macrophages with reduced secretion of IL-6 and increased secretion of IL-10. CONCLUSION Taken together, the presented data supports the hypothesis that APP and Aβ peptides expressed and secreted by macrophages are involved in initiating and regulating immune responses in healthy donors. Further studies are necessary to see if this is also the case for individuals suffering from Alzheimer's disease. In clinical trials testing Aβ lowering therapies, dysfunctions of the immune system should be closely monitored. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ACKNOWLEDGMENTS We gratefully thank Janine Utz for language editing. The present work was performed in partial fulfillment of the requirements for obtaining the degree Dr. med of MW and CG. This manuscript has been released as a pre-print at Research Square (5).
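The statistical comparison described in the Methods above (GraphPad Prism's ratio paired t-test between paired donor cultures) amounts, in effect, to a paired t-test on log-transformed values. A minimal sketch of that analysis is given below; the cytokine values are hypothetical placeholders, and scipy is assumed here as the statistics backend rather than Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical paired cytokine concentrations (pg/ml) from the same donors:
# untreated vs. secretase-inhibitor-treated macrophage cultures.
control = np.array([820.0, 1100.0, 640.0, 950.0, 730.0])
treated = np.array([610.0, 820.0, 500.0, 700.0, 560.0])

# A "ratio paired t-test" tests whether the mean of log(treated/control)
# differs from zero, i.e. a paired t-test on log-transformed data.
log_ratio = np.log(treated / control)
t_stat, p_value = stats.ttest_1samp(log_ratio, popmean=0.0)

geometric_mean_ratio = float(np.exp(log_ratio.mean()))
print(f"geometric mean ratio = {geometric_mean_ratio:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant; 0.05-0.1 -> trend
```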
6,083
2020-09-03T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
On The Process Of Thematic Consolidation Of Legal Terms This article is dedicated to the process of thematic consolidation of legal terms. It is evident that terms are commonly used in many realms, including technology, science, art, literature, and law. They have been studied for many years by numerous scholars, since they have primary and secondary meanings. The advancement of cutting-edge technologies has brought about many changes in terminology as well, and various new terms have appeared in every field. Indeed, law is a very broad area with its own distinctly used words, phrases, and word combinations. Nowadays, new terms are also being created in this sphere, and they have to be learnt very thoroughly. Presently, many presidential decrees and rules have been enacted in a bid to develop the legal system; therefore, studying legal terms is a must for every linguist working in this field. As W. Shuy stated, the main duty of linguists is to help untangle the language confusion (p. 12). Hence, we attempted to present some peculiarities of legal words and differences in their usage. INTRODUCTION One-word units and combinations indicating the exact names of concepts related to science, technology, agriculture, art, and literature are called terms. A word can have many meanings, with a primary and a secondary meaning. The term does not have this feature. The term is a word, but it differs from ordinary words in the accuracy of its meaning and in its uniqueness. The term is used in a specific special field of science and in the speech of its representatives. For example, terms such as crime, punishment, trial, accused, aggression, recidivism, genocide, terrorism, custody, the Senate, and speaker are found only in jurisprudence. The set of terms, and the area that studies the terms, is called terminology. The term originates from the Latin word terminus (limit, border). The vocabulary of the language of law consists mainly of terms, but in order to express and form legal concepts, such language must meet certain requirements. This is due to the specifics of the term. The term performs a nominative function: its meaning is equal to a concept within the framework of the norms of a literary language. Since the term is monosemantic in the terminological system, its meaning is equivalent to the concept. That means terms are special, formally unified words that express a single meaning. A single word with multiple meanings represents several concepts. The term expresses clear, concrete concepts and is devoid of emotional meanings. The meaning of a word can be complex, in which the concept acquires additional meanings and methodological features of application. Therefore, synonyms denoting the same concept in a language differ from each other in various semantic aspects or in their application in oral and written speech. The same word can be a term for several branches of science. However, such words become the names of individual (different) concepts in a particular branch of science. For example, the term "task" in pedagogical terminology means the task assigned to pupils or students, while in legal terminology the term "task" means an important assignment given to an employee working in the law enforcement system. The term "operation" refers to a concept related to surgery in medicine, while in legal terminology it refers to the name of an action performed for a purpose or for a specific covert task. Thus, terms are different from commonly used words.
Words are used in a specific field of science and technology and become terms when used in a limited, narrow sense. To date, the system of terms has already included a number of generally meaningful words expressing the scientific and formal meanings of special fields of science. In linguistics, especially in lexicology, it is emphasized that lexemes, which are the main lexical unit of a language, exist not in isolation from each other, but in interconnection, in various semantic connections. This principle has led to the creation of different lexical strata. According to a certain pattern, lexical layers are grouped. It is important that, depending on the purpose of the study, one or another principle of grouping terms is used. MATERIAL AND METHODS Thematic unification of lexical layers and terminological systems. It should be noted that the thematic classification is based, firstly, on the classification of objects and events themselves in real events (extralinguistic factor), and secondly, on the hyponymic relationship of lexemes (interpersonal factor). Based on the analysis of the collected materials and generalized classifications of terminological groups proposed by M. Kasimova and Sh. Kochimov, it was found that the legal terms of the Uzbek language should be classified as follows: I. Terms denoting individuals. This thematic group, in turn, can be divided into smaller subgroups: a) terms denoting persons working in the field of jurisprudence: lawyer, prosecutor, judge, representative, lawyer, shareholder, prosecutor general, guarantor, criminologist, financier, notary, convoy, criminologist, judge, secretary of state; b) terms for punishable persons: accused, bully, aggressor, accused, criminal, immoral, bandit, aggressor, poacher, butler, gangster, drug addict, killer, hijacker, accused, suspect. II. The terms designating "types of crimes": murder, theft, bribery, rape, robbery, intimidation, slander, extortion and killer. III. Terms for criminal proceedings: interrogation, investigation, search, sentencing, trial, court, guilty plea, testimony, etc. ΙV. The terms denoting the concept of "judicial authorities" are as follows: court (supreme court, local court), prosecutor's office, college of advocates, tribunal, police, etc. V. Terms designating the concept of "place of punishment": cell, prison, guardhouse, colony, zone. VI. Terms designating the concept of "legal documents": statement, constitution, certificate, power of attorney, indictment, petition, order, will, statute, contract, receipt. VII. The terms designating the concept of "court decision": a) arrest, deportation, imprisonment, transfer to disciplinary unit, confiscation of property, execution, fine; b) release, parole, sentence replacement, pardon, amnesty. Jurisprudence consists of several industries, each of which has its own peculiarity and specificity. The meaning of the terms in each area of jurisprudence is invaluable. Legal field terms can also be grouped in relation to the object they represent. 
These include: a) terms expressing concepts related to civil matters - individual, legal entity, entrepreneur, creditor, guardianship, trusteeship, property rights, pledge, gift, limited liability company; b) terms related to the state system and management activities - state law, constitutional law, official power, state court, public associations, public organizations, Oliy Majlis, confederation, ministry, monopolistic activity; c) terms related to crime - recidivist, fine, imprisonment, drug addiction, murder, terrorism, extortion, looting, espionage, poaching; d) terms related to economic issues - economic court, procedural inheritance, state duty, economic contract, economic company, commercial law; e) terms related to housing issues - ownership, lease, municipal fund, share fee, storage certificate (reservation), compensation; f) terms related to issues of administrative responsibility - administrative responsibility, administrative penalties, environmental impact assessment, confiscation, administrative coercion; g) terms related to tax issues - income tax, tax object, royalties, subsidies, discounts, declaration, value added tax, invoice, excise tax, land tax, environmental tax; h) terms related to labor legislation - labor agreement, collective agreement, right to employment, work book, employer, disciplinary action, labor standards. DISCUSSION In connection with the development of society and language, some legal terms come to serve as a direct designation of the state, a specific event, thing, subject, reality, etc.; therefore, they enter into grammatical relations in the language of legislation. In this context, they can be grouped as follows: 1) legal terms for persons associated with the legal field: prosecutor, recidivist, drug addict, killer, spy, thief, extortionist, heir, investigator, judge, lawyer; 2) terms describing events and the course of events in the legal field: court proceedings, amnesty, coercion, confiscation, search, appeal, imprisonment, release, proceedings, conviction, investigation, interrogation, etc.; 3) legal terms denoting the names of legal documents: order, act, subsidy, certificate, license, sanction, declaration; 4) terms denoting socio-political processes related to law: agreement, referendum, elections, monitoring; 5) terms indicating matters directly related to financial means: fine, tax, duty, pension, alimony, salary, fee, scholarship, allowance, bonus; 6) terms defining the status and system of punishment: imprisonment, death penalty, colony. The terms with a negative emotional connotation in legislation are mainly words that determine the degree of a crime; they describe crimes related to the state of an individual, event, incident, or process: crimes against the person; crimes against peace and security; economic crimes; crimes in the field of ecology; crimes against military service procedures, etc. RESULTS In systemic lexicology, the issue of determining the relationship of lexical units within lexical-semantic groups occupies an important place. It should be noted that the types of semantic connection are different. These include synonymy, antonymy, hyponymy, partonymy, degrees, functionalism, and hierarchy. The terms in the legal terminological system therefore stand in hyponymic relationships to each other. The definition of thematic groups of legal terms gives a clear idea of the special terminology of any field of science.
Terms in thematic groups, being interconnected and in different semantic relationships, prove linguistic reality. The definition of hyponymic (gender) relations of legal terms confirms that the terminology in this area is systemic. In legal terminology, synonymy has also spread, consisting of two or more terms to express a single concept. This situation complicates the process of exchanging information. CONCLUSION In legal terminology, polysemy is characterized by abundance. It became clear that to limit polysemy, a thesaurus is needed, which is considered a negative phenomenon. One of the linguistic realities confirming the interconnection of terms is antonymy. Naturally, there are antonyms in legal terminology. The specificity of jurisprudence requires this.
2,212
2021-02-08T00:00:00.000
[ "Linguistics" ]
Highly Response and Sensitivity Chitosan-Polyvinyl alcohol Based Hexanal Sensors This work studies the sensing properties of chitosan-polyvinyl alcohol film sensors upon hexanal gas exposure using a vapour sensing technique. The sensor fabrication was done by an electrochemical deposition method, depositing the chitosan/PVA sensing material on a gold patterned electrode. The response of the chitosan-PVA film sensors towards hexanal was taken as an output voltage using an electrical testing method. In this study, 1.75% chitosan mixed with 5% PVA at a ratio of 95:5 was tested upon exposure to hexanal gas. The concentration of hexanal was varied as 10 ppm, 20 ppm, and 30 ppm. It was found that the chitosan-PVA film sensors showed fast response, good stability, good recovery, repeatability, and good sensitivity towards hexanal exposure. The morphology of the pure chitosan and chitosan-PVA films was analyzed by scanning electron microscopy (SEM), and the interaction between chitosan and PVA was examined by Fourier transform infrared spectroscopy (FTIR). The FTIR results indicate changes in the characteristic spectral peaks due to the formation of intermolecular bonds between chitosan and PVA. The SEM morphology of the composites showed a flat, smooth surface that is a sign of uniform distribution of the chitosan and PVA mixture throughout the films. Introduction Food deterioration affects the quality and decreases the shelf life of food. This is caused by the lipid oxidation that occurs during food storage. This process results in alteration of the flavour, texture, and nutritional value of the food [1]. In addition, autooxidation in wood pellets releases hexanal as a major emitted gas during storage [2]. Besides that, a study reported the development of hexanal as an odour reference standard for sensory analysis of drinking water. Off-flavour in drinking water reduces drinking-water intake among people and lowers confidence in the water utilities. The off-flavour may be caused by contaminants in water sources, corrosion of metal pipes, and new plumbing materials. Hexanal was chosen as a good candidate for an odor reference standard because it was stable during the sampling process and gave reproducible results [3]. Hexanal is also used as a food additive, and it has been detected in the blood of lung cancer patients [4,5]. It is therefore important to have a device for detecting hexanal gas that can be applied in any field that requires monitoring of the hexanal concentration level. The current equipment for detecting the concentration of hexanal is gas chromatography (GC), which has drawbacks such as high cost, large space requirements, the need for a specialist to operate the equipment, and high maintenance costs. Thus, a low-cost, easy-to-operate, and user-friendly device is needed to overcome the drawbacks of GC. On the other hand, shrimps, lobsters, crabs, and krill are exoskeleton creatures that contain chitin, one of the most plentiful renewable organic resources in nature. Derivation of chitin produces a polyaminosaccharide compound also known as chitosan [6]. Chitosan is the second most abundant polymer in nature after cellulose. Chitosan is a polysaccharide composed of β-(1,4)-linked D-glucosamine residues (deacetylated units) with a variable number of randomly located N-acetyl-glucosamine groups (acetylated units) [7]. It consists of two types of reactive functional groups, amino (NH2) at the C-2 position and hydroxyl (OH) at the C-3 and C-6 positions on the backbone, along with interdispersed acetamido
groups [8]. Chitosan has excellent characteristics including good chemical inertness, high mechanical strength, biodegradability, biocompatibility, high-quality film-forming properties, and low cost [9]. Polyvinyl alcohol (PVA) is a non-toxic, water-soluble polymer with a strong film-forming ability, very high dielectric strength, good charge storage capability, high mechanical tensile strength, and dopant-dependent electrical and optical properties [10]. PVA is an inexpensive hydrophilic semicrystalline polymer. This polymer has excellent film-forming properties and can produce effective electrolytes after doping with potassium hydroxide (KOH) solution [11]. Owing to the good properties of PVA, it was believed that it could influence the sensing performance of the chitosan film sensor. In this study, we investigated the capability of chitosan-PVA compared to pure chitosan as sensing materials under different hexanal concentrations. Electrical testing was done in order to check the performance of the sensor, including response, stability, sensitivity, repeatability, and recovery. Experimental Chitosan solution was prepared by dissolving 1.75 g of chitosan powder (low molecular weight, Sigma Aldrich) in 100 ml of 2% aqueous acetic acid (99.9% purity, HmBG). The mixture was stirred continuously at room temperature for 24 hours. Polyvinyl alcohol (average MW = 200000, Sigma Aldrich) was dissolved in water and stirred for 1 hour at 80°C. The PVA solution was added dropwise to the chitosan solution prepared in the previous step. The mixture was stirred continuously at room temperature for 1 hour. The chitosan-PVA solution was formed into a film by an electrochemical deposition process, in which a piece of patterned electrode was dipped into the chitosan/PVA solution and subjected to the required deposition voltage and deposition time. The deposited chitosan-PVA films were hard-baked in a vacuum oven at 100°C to remove excess water remaining on the film. Then, the sample was cooled at room temperature. Electrical testing was carried out in order to check the performance of the sensor, including stability, sensitivity, output voltage, repeatability, and recovery. The chitosan-PVA film sensor was placed in a chamber and connected to a power supply, and the output was recorded by a digital multimeter (Sanwa CD771). The sensor was exposed to hexanal and dry air alternately. The output voltage was recorded every 30 seconds, with 5 minutes of exposure to hexanal vapor and 5 minutes of dry air to complete one cycle. Results and Discussion Fig. 1 shows the response of the CS-PVA film sensor towards hexanal exposure. The response of the sensors during hexanal exposure was taken as an output voltage. As recorded, the response value of CS-PVA increased when exposed to hexanal. The response of the CS-PVA film sensor increased rapidly within 5 minutes for 10 ppm, 20 ppm, and 30 ppm, respectively. The CS-PVA film sensors gave a good response during 5 cycles of measurement when exposed to hexanal vapour. In addition, the chitosan-PVA composite film sensors yielded repeatable readings when hexanal vapour was introduced over 5 cycles. The recovery of the CS-PVA film sensor was satisfactory, since the output response recovered gradually when hexanal was replaced by dry air. Fig. 2 shows that the response of CS-PVA against hexanal concentration fits the regression line with a coefficient of determination (R2) of 0.9979. This indicates that the response of CS/PVA towards each hexanal concentration lies close to the fitted calibration line.
Sensing Mechanism The sensing mechanism of the polymer sensor films operates when the composite polymer is exposed to water vapour. The water vapor is chemically adsorbed (chemisorption) at the activated sites of the polymer composites. The water molecules dissociate in the vapor stage as shown in Eq. 1. The hydroxyl group (as shown in Eq. 2) of each water molecule is adsorbed at the chitosan cations, providing mobile protons. This interaction forms a layer surrounding the chitosan cations. The subsequent water vapor forms a second layer on the first layer, and this process builds up a number of stacked layers. As a result, the vapor molecules provide more mobile protons, increasing the conductivity values as well as the output voltage during the electrical testing [12]. During the recovery process, oxygen molecules are chemisorbed onto the surface of the chitosan particles when the sensor is exposed to normal air. Under a certain input voltage, the amount of free electrons moving randomly in the conduction band is reduced as the chemisorbed oxygen traps them and transfers them from one particle to another. This results in an increase in resistance and a decrease in the output voltage of the chitosan film until the oxygen species are saturated [13]. When hexanal gas vapour is exposed to the surface of the CS-PVA film sensor, the hexanal gas molecules make contact with the chemisorbed oxygen on the film sensor surface, as shown in Eq. 3. This releases electrons freely within the CS-PVA surface. The contact reaction between the analyte gas and the sensor surface releases electrons and water molecules. These free electrons in the conduction band contribute to the increase of the response of the sensor [2]. Based on the surface morphology obtained from SEM, the chitosan film shows a smooth surface with some straps, as depicted in Fig. 6. Meanwhile, the surface of the composite film made of chitosan and PVA is more homogeneous and flat, which indicates the uniform distribution of chitosan and PVA in the films. This uniformity may be caused by the interaction of the -NH2 functional groups in chitosan and the -OH groups in PVA [15]. Summary Chitosan-PVA film was found to be a good sensing material for sensing various concentrations of hexanal. The chitosan-PVA film sensors showed good sensing properties, which include good response, stability, sensitivity, repeatability, and recovery towards hexanal vapor exposure. The low-cost production and ease of the fabrication process make chitosan-PVA sensors advantageous for application in various fields.
Fig. 3 and Fig. 4 show the operating temperature (°C) and relative humidity (%) recorded during the measurements of the CS-PVA film sensor at room temperature. The operating temperature ranged between 26°C and 33°C while the data were collected, indicating that the CS/PVA film sensors are able to operate at room temperature without fluctuating readings. Fig. 5. FTIR spectra of the chitosan film and the chitosan-PVA film. The pure chitosan film showed a peak at 3367.60 cm-1 assigned to -OH and NH2 stretching; bands at 2890.70 cm-1 (-CH stretching), 1027 cm-1 (C-O-C stretching), 1540.84 cm-1 (-NH bending of NH2, amide II), and 1403.70 cm-1 (-CH wagging coupled with -OH groups of chitosan) were also noted. The addition of PVA to the chitosan caused a decrease in the intensity of the chitosan band at 1540.84 cm-1. The CS-PVA composite exhibits absorption bands around 3255.54 cm-1 and 2897.40 cm-1 due to -OH stretching and -CH2 asymmetric stretching from PVA [14]. Fig. 6. Surface morphology (SEM) of the chitosan and chitosan-PVA films.
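The sensitivity analysis described above (fitting the sensor response against hexanal concentration and reporting the coefficient of determination R2) can be sketched as follows. This is a minimal illustration; the response values below are hypothetical placeholders, not the measured data behind the reported R2 = 0.9979.

```python
import numpy as np

# Hypothetical calibration data: hexanal concentration (ppm) vs. output voltage (V).
concentration = np.array([10.0, 20.0, 30.0])
response = np.array([1.02, 1.48, 1.97])

# Least-squares line: response = slope * concentration + intercept.
slope, intercept = np.polyfit(concentration, response, deg=1)

# Coefficient of determination R^2 of the fit.
predicted = slope * concentration + intercept
ss_res = np.sum((response - predicted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"sensitivity (slope) = {slope:.4f} V/ppm")
print(f"R^2 = {r_squared:.4f}")
```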
2,461.2
2016-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]